Haddadi, Yasser; Bahrami, Golnosh; Isidor, Flemming
To compare operating time and patient perception of conventional impression (CI) taking and intraoral scanning (IOS) for manufacture of a tooth-supported crown. A total of 19 patients needing indirect full-coverage restorations fitting the requirements for a split-mouth design were recruited. Each patient received two lithium disilicate crowns, one manufactured from CI taking and one from IOS. Both teeth were prepared following the manufacturers' recommendations. For both impression techniques, two retraction cords soaked in 15% ferric sulphate were used for tissue management. CIs were taken in a full-arch metallic tray using one-step, two-viscosity technique with polyvinyl siloxane silicone. The operating time for each step of the two impression methods was registered. Patient perception associated with each method was scored using a 100-mm visual analog scale (VAS), with 100 indicating maximum discomfort. Median total operating time for CI taking was 15:47 minutes (interquartile range [IQR] 15:18 to 17:30), and for IOS was 5:05 minutes (IQR 4:35 to 5:23). The median VAS score for patient perception was 73 (IQR 16 to 89) for CI taking and 6 (IQR 2 to 9) for IOS. The differences between the two groups were statistically significant (P < .05) for both parameters. IOS was less time consuming than CI taking, and patient perception was in favor of IOS.
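In a split-mouth design the two crowns are paired within each patient, so a paired nonparametric test such as the Wilcoxon signed-rank test matches the median/IQR reporting above. The sketch below is illustrative only: the operating times are invented, not the study's data, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired per-patient operating times in minutes (NOT the
# study's data): one conventional impression (CI) and one intraoral
# scan (IOS) per patient, as in a split-mouth design.
ci  = np.array([15.8, 15.3, 17.5, 16.1, 15.5, 17.0, 15.9, 16.4])
ios = np.array([ 5.1,  4.6,  5.4,  4.9,  5.0,  5.3,  4.8,  5.2])

stat, p = wilcoxon(ci, ios)   # paired signed-rank test on the differences
print(p < 0.05)               # True: CI exceeds IOS in every pair
```

With all eight differences positive, the exact two-sided p-value is 2/256 ≈ 0.008, mirroring the significant difference reported in the abstract.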
Single image super-resolution reconstruction algorithm based on edge selection
NASA Astrophysics Data System (ADS)
Zhang, Yaolan; Liu, Yijun
2017-05-01
Super-resolution (SR) has become increasingly important because it can generate high-quality high-resolution (HR) images from low-resolution (LR) input images. At present, much work concentrates on developing sophisticated image priors to improve image quality, while paying far less attention to estimating and incorporating the blur model, which can also affect the reconstruction results. We present a new reconstruction method based on edge selection. This method takes full account of the factors that affect blur kernel estimation and accurately estimates the blur process. Compared with the state-of-the-art methods, our method has comparable performance.
NASA Technical Reports Server (NTRS)
Pappa, Richard S. (Technical Monitor); Black, Jonathan T.
2003-01-01
This report discusses the development and application of metrology methods called photogrammetry and videogrammetry that make accurate measurements from photographs. These methods have been adapted for the static and dynamic characterization of gossamer structures, as four specific solar sail applications demonstrate. The applications prove that high-resolution, full-field, non-contact static measurements of solar sails using dot projection photogrammetry are possible as well as full-field, non-contact, dynamic characterization using dot projection videogrammetry. The accuracy of the measurement of the resonant frequencies and operating deflection shapes that were extracted surpassed expectations. While other non-contact measurement methods exist, they are not full-field and require significantly more time to take data.
A practical approach for inexpensive searches of radiology report databases.
Desjardins, Benoit; Hamilton, R Curtis
2007-06-01
We present a method to perform full-text searches of radiology reports for the large number of departments that do not have this ability as part of their radiology or hospital information system. A tool written in Microsoft Access (front-end) has been designed to search a server (back-end) containing the indexed weekly backup copy of the full relational database extracted from a radiology information system (RIS). This front-end/back-end approach has been implemented in a large academic radiology department, and is used for teaching, research, and administrative purposes. The weekly second backup of the 80 GB, 4 million record RIS database takes 2 hours. Further indexing of the exported radiology reports takes 6 hours. Individual searches typically take less than 1 minute on the indexed database and 30-60 minutes on the nonindexed database. Guidelines to properly address privacy and institutional review board issues are closely followed by all users. This method has the potential to improve teaching, research, and administrative programs within radiology departments that cannot afford more expensive technology.
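The indexed back-end idea can be imitated with any full-text engine. As a hedged stand-in for the Access/RIS setup described above, the sketch below uses SQLite's FTS5 module (bundled with most CPython builds); the accession numbers and report text are invented.

```python
# Illustrative sketch: index a few made-up radiology reports with
# SQLite FTS5 and run an indexed full-text query, standing in for the
# Access front-end / indexed back-end described in the abstract.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE reports USING fts5(accession, body)")
conn.executemany(
    "INSERT INTO reports VALUES (?, ?)",
    [
        ("A001", "No evidence of pulmonary embolism."),
        ("A002", "Segmental pulmonary embolism in the right lower lobe."),
        ("A003", "Unremarkable chest radiograph."),
    ],
)

# An indexed MATCH query avoids the full table scan that makes
# unindexed searches take tens of minutes on a large database.
hits = conn.execute(
    "SELECT accession FROM reports WHERE reports MATCH 'pulmonary embolism'"
).fetchall()
print(hits)
```

FTS5 treats the two query terms as an implicit AND, so only the two reports mentioning both words are returned.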
Assessing fullness of asthma patients' aerosol inhalers.
Rickenbach, M A; Julious, S A
1994-07-01
The importance of regular medication in order to control asthma symptoms is recognized. However, there is no accurate mechanism for assessing the fullness of aerosol inhalers. The contribution to asthma morbidity of unexpectedly running out of inhaled medication is unknown. A study was undertaken to determine how patients assess inhaler fullness and the accuracy of their assessments, and to evaluate the floatation method of assessing inhaler fullness. An interview survey of 98 patients (51% of those invited to take part), using 289 inhalers, was completed at one general practice in Hampshire. One third of participants said they had difficulty assessing aerosol inhaler fullness and those aged 60 years and over were found to be more inaccurate in assessing fullness than younger participants. Shaking the inhaler to feel the contents move was the commonest method of assessment. When placed in water, an inhaler canister floating on its side with a corner of the canister valve exposed to air indicates that the canister is less than 15% full (sensitivity 90%, specificity 99%). Floating a canister in water provides an objective measurement of aerosol inhaler fullness. Providing the method is recommended by the aerosol inhaler manufacturer, general practitioners should demonstrate the floatation method to patients experiencing difficulty in assessing inhaler fullness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavanello, Michele; Van Voorhis, Troy; Visscher, Lucas
2013-02-07
Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.
Pavanello, Michele; Van Voorhis, Troy; Visscher, Lucas; Neugebauer, Johannes
2013-02-07
Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Ångstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.
Assessing fullness of asthma patients' aerosol inhalers.
Rickenbach, M A; Julious, S A
1994-01-01
BACKGROUND. The importance of regular medication in order to control asthma symptoms is recognized. However, there is no accurate mechanism for assessing the fullness of aerosol inhalers. The contribution to asthma morbidity of unexpectedly running out of inhaled medication is unknown. AIM. A study was undertaken to determine how patients assess inhaler fullness and the accuracy of their assessments, and to evaluate the floatation method of assessing inhaler fullness. METHOD. An interview survey of 98 patients (51% of those invited to take part), using 289 inhalers, was completed at one general practice in Hampshire. RESULTS. One third of participants said they had difficulty assessing aerosol inhaler fullness and those aged 60 years and over were found to be more inaccurate in assessing fullness than younger participants. Shaking the inhaler to feel the contents move was the commonest method of assessment. When placed in water, an inhaler canister floating on its side with a corner of the canister valve exposed to air indicates that the canister is less than 15% full (sensitivity 90%, specificity 99%). CONCLUSION. Floating a canister in water provides an objective measurement of aerosol inhaler fullness. Providing the method is recommended by the aerosol inhaler manufacturer, general practitioners should demonstrate the floatation method to patients experiencing difficulty in assessing inhaler fullness.
New method of extrapolation of the resistance of a model planing boat to full size
NASA Technical Reports Server (NTRS)
Sottorf, W
1942-01-01
The previously employed method of extrapolating the total resistance to full size with lambda(exp 3) (model scale) and thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But faced with the ever increasing size of aircraft a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is analytically obtained. The friction coefficients are read from Prandtl's curve for turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.
Effects of Earth's curvature in full-wave modeling of VLF propagation
NASA Astrophysics Data System (ADS)
Qiu, L.; Lehtinen, N. G.; Inan, U. S.; Stanford VLF Group
2011-12-01
We show how to include curvature in the full-wave finite element approach for calculating ELF/VLF wave propagation in the horizontally stratified Earth-ionosphere waveguide. A general curvilinear stratified system is considered, and the numerical solutions of the full-wave method in the curvilinear system are compared with the analytic solutions for cylindrical and spherical waveguides filled with an isotropic medium. We calculate the attenuation and height gain for modes in the Earth-ionosphere waveguide, taking into account the anisotropy of the ionospheric plasma, for different assumptions about the Earth's curvature, and quantify the corrections due to the curvature. The results are compared with those of previous models, such as LWPC, as well as with ground and satellite observations, and show improved accuracy over the full-wave method without the curvature effect.
NASA Astrophysics Data System (ADS)
Luo, Yao; Wu, Mei-Ping; Wang, Ping; Duan, Shu-Ling; Liu, Hao-Jun; Wang, Jin-Long; An, Zhan-Feng
2015-09-01
The full magnetic gradient tensor (MGT) refers to the spatial rate of change of the three field components of the geomagnetic field vector along three mutually orthogonal axes. The tensor is of use in geological mapping, resource exploration, magnetic navigation, and other applications. However, it is very difficult to measure the full MGT with existing engineering technology. We present a method that uses triaxial aeromagnetic gradient measurements to derive the full MGT. The method uses the triaxial gradient data and makes full use of the variation of the magnetic anomaly modulus in three dimensions to obtain a self-consistent magnetic gradient tensor. Numerical simulations show that the full MGT data obtained with the proposed method are of high precision and satisfy the requirements of data processing. We selected triaxial aeromagnetic gradient data from Hebei Province for calculating the full MGT. Data processing shows that using triaxial gradient data takes advantage of the spatial rate of change of the total field in three dimensions and suppresses part of the independent noise in the aeromagnetic gradient. The calculated tensor components have improved resolution, and the transformed full tensor gradient satisfies the requirements of geological mapping and interpretation.
Political incentives towards replacing animal testing in nanotechnology?
Sauer, Ursula G
2009-01-01
The Treaty of Lisbon requires the European Union and the Member States to pay full regard to animal welfare when implementing new policies. The present article discusses how these provisions are met in the emerging area of nanotechnology. Political action plans in Europe take animal welfare into account to some extent. Funding programmes promote the development of non-animal test methods, but only in the area of nanotoxicology, and even there not sufficiently to "pay full regard" to preventing animal testing, let alone to bring about a paradigm change in toxicology or in biomedical research as such. Ethical deliberations on nanotechnology, which influence future policies, so far do not address animal welfare at all. Considering that risk assessment of nanoproducts is conceived as a key element of protecting human dignity, ethical deliberations should address the choice of the underlying testing methods and call for basing nanomaterial safety testing on the latest scientific, and ethically acceptable, technologies. Finally, public involvement in the debate on nanotechnology should take into account information on the resulting animal experiments.
Oostendorp, Rob A. B.; Elvers, Hans; Mikołajewska, Emilia; Laekeman, Marjan; van Trijffel, Emiel; Samwel, Han; Duquet, William
2015-01-01
Objective. To develop and evaluate process indicators relevant to biopsychosocial history taking in patients with chronic back and neck pain. Methods. The SCEBS method, covering the Somatic, Psychological (Cognition, Emotion, and Behavior), and Social dimensions of chronic pain, was used to evaluate biopsychosocial history taking by manual physical therapists (MPTs). In Phase I, process indicators were developed, while in Phase II the indicators were tested in practice. Results. Literature-based recommendations were transformed into 51 process indicators. Twenty MPTs contributed 108 patient audio recordings. History taking was excellent (98.3%) for the Somatic dimension, very inadequate for Cognition (43.1%) and Behavior (38.3%), weak (27.8%) for Emotion, and low (18.2%) for the Social dimension. MPTs estimated their coverage of the Somatic dimension as excellent (100%), as adequate for Cognition, Emotion, and Behavior (60.1%), and as very inadequate for the Social dimension (39.8%). Conclusion. MPTs screen for musculoskeletal pain mainly through the somatic dimension of (chronic) pain; the psychological and social dimensions of chronic pain were inadequately covered. Furthermore, a substantial discrepancy between actual and self-estimated use of biopsychosocial history taking was noted. We strongly recommend full implementation of the SCEBS method in educational programs in manual physical therapy.
Two-Step Formal Advertisement: An Examination.
1976-10-01
The purpose of this report is to examine the potential application of the Two-Step Formal Advertisement method of procurement. Emphasis is placed on ... Two-step formal advertising is a method of procurement designed to take advantage of negotiation flexibility and at the same time obtain the benefits of ... formal advertising. It is used where the specifications are not sufficiently definite, or may be too restrictive, to permit full and free competition.
NASA Astrophysics Data System (ADS)
Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.
1997-02-01
The method of buckling evaluation, realized in the Monte Carlo code MCS, is described. This method was applied to a computational analysis of the well-known light water experiments TRX-1 and TRX-2. The analysis of this comparison shows that there is no coincidence between Monte Carlo calculations obtained in different ways: the MCS calculations with given experimental bucklings; the MCS calculations with bucklings evaluated on the basis of full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations where the results of cell calculations are corrected by coefficients taking into account the leakage from the core. Also, the buckling values evaluated by full-core MCS calculations differed from the experimental ones, especially in the case of TRX-1, where this difference corresponded to a 0.5 percent increase of the Keff value.
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and of the prior model information obtained from sonic logs and geological data, in this paper we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information. Numerical experiments show that this method not only improves the inversion results but also has strong anti-noise ability.
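OWL-QN itself maintains L-BFGS curvature pairs restricted to an orthant; as a much simpler stand-in that still shows ℓ1-regularized inversion centred on a prior model, the sketch below uses proximal gradient descent (ISTA) on a toy linear problem. The forward operator, prior, and regularization weight are all invented for illustration and are not the paper's FWI setup.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_l1(A, b, x_prior, lam, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x - x_prior||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    x = x_prior.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        u = (x - grad / L) - x_prior           # gradient step, shifted so the
        x = x_prior + soft_threshold(u, lam / L)  # prior is the sparsity centre
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))              # toy linear forward operator
x_true = np.zeros(20)
x_true[3] = 1.0                                # sparse deviation from the prior
b = A @ x_true                                 # noise-free data
x_hat = ista_l1(A, b, x_prior=np.zeros(20), lam=0.01)
print(bool(np.abs(x_hat - x_true).max() < 0.05))
```

The ℓ1 term penalizes deviation from the prior model rather than from zero, which is the sense in which prior information enters the abstract's formulation.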
Multi-dimension feature fusion for action recognition
NASA Astrophysics Data System (ADS)
Dong, Pei; Li, Jie; Dong, Junyu; Qi, Lin
2018-04-01
Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. The challenge for action recognition is to capture and fuse the multi-dimension information in video data. In order to take these characteristics into account simultaneously, we present a novel method that fuses multiple dimensional features, such as chromatic images, depth, and optical flow fields. We build our model on multi-stream deep convolutional networks with the help of temporal segment networks and extract discriminative spatial and temporal features by fusing multi-dimension ConvNet towers, in which different feature weights are assigned in order to take full advantage of this multi-dimension information. Our architecture is trained and evaluated on the currently largest and most challenging benchmark, the NTU RGB-D dataset. The experiments demonstrate that our method outperforms the state-of-the-art methods.
Multi-Level Bitmap Indexes for Flash Memory Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Madduri, Kamesh; Canon, Shane
2010-07-23
Due to their low access latency, high read speed, and power-efficient operation, flash memory storage devices are rapidly emerging as an attractive alternative to traditional magnetic storage devices. However, tests show that the most efficient indexing methods are not able to take advantage of flash memory storage devices. In this paper, we present a set of multi-level bitmap indexes that can effectively take advantage of flash storage devices. These indexing methods use coarsely binned indexes to answer queries approximately, and then use finely binned indexes to refine the answers. Our new methods read significantly lower volumes of data at the expense of an increased disk access count, thus taking full advantage of the improved read speed and low access latency of flash devices. To demonstrate the advantage of these new indexes, we measure their performance on a number of storage systems using a standard data warehousing benchmark called the Set Query Benchmark. We observe that multi-level strategies on flash drives are up to 3 times faster than traditional indexing strategies on magnetic disk drives.
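The approximate-then-refine idea can be shown with a toy two-level binned index, using Python sets as stand-in bitmaps (real systems use compressed bitmaps such as WAH). The bin widths and data values are invented for illustration.

```python
# Toy two-level (coarse/fine) binned bitmap index. A range query takes
# coarse bins lying wholly inside the range and refines only the
# boundary bin with fine bins, mirroring the strategy in the abstract.
data = [5, 12, 23, 27, 31, 45, 8, 26]

coarse = {}   # coarse bin id (width 10) -> set of row ids
fine = {}     # exact value (width-1 bin) -> set of row ids
for row, v in enumerate(data):
    coarse.setdefault(v // 10, set()).add(row)
    fine.setdefault(v, set()).add(row)

def query_less_than(t):
    """Row ids whose value is < t."""
    rows = set()
    for b, rs in coarse.items():
        if (b + 1) * 10 <= t:            # bin entirely below t: take it whole
            rows |= rs
        elif b * 10 < t:                 # boundary bin: consult the fine bins
            for v, rs2 in fine.items():
                if b * 10 <= v < t:
                    rows |= rs2
    return rows

print(sorted(query_less_than(27)))       # [0, 1, 2, 6, 7]
```

Only the boundary bin triggers extra (fine-grained) reads, which is why the scheme trades a higher access count for much lower data volume on fast, low-latency devices.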
Mixed Methods Research: Taking a Broader View
ERIC Educational Resources Information Center
Stewart, Tricia J.; Palermo-Biggs, Michelle
2013-01-01
For school districts, the increasing importance of using data for continuous improvement has become part of the educational landscape under accountability. In many ways, educators have become inundated with data but not always in ways that provide them with a full picture to adequately weigh decisions for their specific context. One way to use…
NASA Astrophysics Data System (ADS)
Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang
2017-07-01
Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using fast digital-filter-based methods. Then, we introduce a function named hamlogsinc, in the framework of discrete signal processing theory, to reconstruct the basis function and to perform the convolution with the positive half of the waveform. Finally, a superposition procedure is used to take account of the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method achieves the same accuracy with a shorter computing time.
NASA Astrophysics Data System (ADS)
Zhang, Yi; Wu, Yulong; Yan, Jianguo; Wang, Haoran; Rodriguez, J. Alexis P.; Qiu, Yue
2018-04-01
In this paper, we propose an inverse method for full gravity gradient tensor data in the spherical coordinate system. As opposed to traditional gravity inversion in the Cartesian coordinate system, our proposed method takes the curvature of the Earth, the Moon, or other planets into account, using tesseroid bodies to produce gravity gradient effects in forward modeling. We used both synthetic and observed datasets to test the stability and validity of the proposed method. Our results using synthetic gravity data show that the new method predicts the depth of the density anomalous body efficiently and accurately. Using observed gravity data for the Mare Smythii area on the Moon, the density distribution of the crust in this area reveals its geological structure. These results validate the proposed method and its potential application for large-area data inversion of planetary geological structures.
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
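The single-rate building block behind such models is the conjugate gamma-Poisson update, for which the posterior is available in closed form; the paper's Bayes linear Bayes machinery is what extends this tractably to correlated rates. A minimal sketch with invented numbers:

```python
# Gamma(a, b) prior on a Poisson event rate; observing n events over
# exposure t gives the Gamma(a + n, b + t) posterior in closed form.
a, b = 2.0, 4.0          # prior shape and rate (illustrative values)
n, t = 9, 10.0           # observed event count and exposure time
a_post, b_post = a + n, b + t
posterior_mean = a_post / b_post
print(posterior_mean)    # 11/14, about 0.786
```

The posterior mean (a + n) / (b + t) blends the prior mean a/b with the observed rate n/t, which is the kind of shrinkage the full multivariate-gamma model performs jointly across correlated users.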
Pan, Yijie; Wang, Yongtian; Liu, Juan; Li, Xin; Jia, Jia
2014-03-01
Previous research [Appl. Opt. 52, A290 (2013)] has revealed that Fourier analysis of three-dimensional affine transformation theory can be used to improve the computation speed of the traditional polygon-based method. In this paper, we continue our research and propose an improved full analytical polygon-based method developed upon this theory. Vertex vectors of primitive and arbitrary triangles and the pseudo-inverse matrix were used to obtain an affine transformation matrix representing the spatial relationship between the two triangles. With this relationship and the primitive spectrum, we analytically obtained the spectrum of the arbitrary triangle. This algorithm discards low-level angular-dependent computations. In order to add diffusive reflection to each arbitrary surface, we also propose a whole-matrix computation approach that takes advantage of the affine transformation matrix and uses matrix multiplication to calculate the shifting parameters of similar sub-polygons. The proposed method improves hologram computation speed over the conventional full analytical approach. Optical experimental results demonstrate that the proposed method can effectively reconstruct three-dimensional scenes.
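The core linear-algebra step, recovering an affine matrix from the vertex vectors of a primitive and an arbitrary triangle via the pseudo-inverse, can be sketched in 2D homogeneous coordinates (the paper works in 3D; this reduced example only illustrates the idea, with invented vertices):

```python
import numpy as np

# Columns are triangle vertices in homogeneous coordinates (x, y, 1).
P = np.array([[0.0, 1.0, 0.0],    # primitive triangle
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
Q = np.array([[2.0, 5.0, 3.0],    # arbitrary target triangle
              [1.0, 2.0, 4.0],
              [1.0, 1.0, 1.0]])

# Affine matrix mapping the primitive vertices onto the target vertices.
M = Q @ np.linalg.pinv(P)
print(np.allclose(M @ P, Q))      # True: M reproduces the target triangle
```

Once M is known, the spectrum of the arbitrary triangle follows analytically from the primitive spectrum, which is what removes the angular-dependent computations the abstract mentions.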
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curchod, Basile F. E.; Martínez, Todd J., E-mail: toddjmartinez@gmail.com; SLAC National Accelerator Laboratory, Menlo Park, California 94025
2016-03-14
Full multiple spawning is a formally exact method to describe the excited-state dynamics of molecular systems beyond the Born-Oppenheimer approximation. However, it has been limited until now to the description of radiationless transitions taking place between electronic states with the same spin multiplicity. This Communication presents a generalization of the full and ab initio multiple spawning methods to both internal conversion (mediated by nonadiabatic coupling terms) and intersystem crossing events (triggered by spin-orbit coupling matrix elements) based on a spin-diabatic representation. The results of two numerical applications, a model system and the deactivation of thioformaldehyde, validate the presented formalism and its implementation.
Curchod, Basile F. E.; Rauer, Clemens; Marquetand, Philipp; ...
2016-03-11
Full Multiple Spawning is a formally exact method to describe the excited-state dynamics of molecular systems beyond the Born-Oppenheimer approximation. However, it has been limited until now to the description of radiationless transitions taking place between electronic states with the same spin multiplicity. This Communication presents a generalization of the full and ab initio Multiple Spawning methods to both internal conversion (mediated by nonadiabatic coupling terms) and intersystem crossing events (triggered by spin-orbit coupling matrix elements) based on a spin-diabatic representation. Lastly, the results of two numerical applications, a model system and the deactivation of thioformaldehyde, validate the presented formalism and its implementation.
Or, Matan; Van Goethem, Bart; Kitshoff, Adriaan; Koenraadt, Annika; Schwarzkopf, Ilona; Bosmans, Tim; de Rooster, Hilde
2017-04-01
To report the use of negative pressure wound therapy (NPWT) with polyvinyl alcohol (PVA) foam to bolster full-thickness mesh skin grafts in dogs. Retrospective case series. Client-owned dogs (n = 8). Full-thickness mesh skin graft was directly covered with PVA foam. NPWT was maintained for 5 days (in 1 or 2 cycles). Grafts were evaluated on days 2, 5, 10, 15, and 30 for graft appearance and graft take, granulation tissue formation, and complications. Firm attachment of the graft to the recipient bed was accomplished in 7 dogs with granulation tissue quickly filling the mesh holes, and graft take considered excellent. One dog had bandage complications after cessation of the NPWT, causing partial graft loss. The PVA foam did not adhere to the graft or damage the surrounding skin. The application of NPWT with a PVA foam after full-thickness mesh skin grafting in dogs provides an effective method for securing skin grafts, with good graft acceptance. PVA foam can be used as a primary dressing for skin grafts, obviating the need for other interposing materials to protect the graft and the surrounding skin. © 2017 The American College of Veterinary Surgeons.
Acosta-Mesa, Héctor Gabriel; Cruz-Ramírez, Nicandro; Hernández-Jiménez, Rodolfo
2017-01-01
Efforts have been made to improve the diagnostic performance of colposcopy, trying to help better diagnose cervical cancer, particularly in developing countries. However, improvements in a number of areas are still necessary, such as the time it takes to process the full digital image of the cervix, the performance of the computing systems used to identify different kinds of tissues, and biopsy sampling. In this paper, we explore three different, well-known automatic classification methods (k-Nearest Neighbors, Naïve Bayes, and C4.5), in addition to different data models that take full advantage of this information and improve the diagnostic performance of colposcopy based on acetowhite temporal patterns. Based on the ROC and PRC area scores, k-Nearest Neighbors with the discrete PLA representation performed better than the other methods. The values of sensitivity, specificity, and accuracy reached using this method were 60% (95% CI 50–70), 79% (95% CI 71–86), and 70% (95% CI 60–80), respectively. The acetowhitening phenomenon is not exclusive to high-grade lesions, and we have found acetowhite temporal patterns of epithelial changes that are not precancerous lesions but that are similar to positive ones. These findings need to be considered when developing more robust computing systems in the future.
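The best-performing classifier above, k-Nearest Neighbors, is simple enough to sketch directly. The two-dimensional "acetowhite temporal pattern" features and labels below are invented for illustration; the paper uses PLA representations of full temporal curves.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training patterns (Euclidean)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

# Invented 2-feature summaries of acetowhite temporal patterns:
# (peak intensity, decay rate); 1 = suspicious, 0 = normal.
X = np.array([[0.90, 0.20], [0.80, 0.30], [0.20, 0.80],
              [0.10, 0.90], [0.85, 0.25], [0.15, 0.85]])
y = np.array([1, 1, 0, 0, 1, 0])

print(knn_predict(X, y, np.array([0.80, 0.22])))  # 1: nearest patterns are suspicious
```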
A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rockway, J D; Champagne, N J; Sharpe, R M
2004-01-14
Frequency domain techniques are popular for analyzing electromagnetics (EM) and coupled circuit-EM problems. These techniques, such as the method of moments (MoM) and the finite element method (FEM), are used to determine the response of the EM portion of the problem at a single frequency. Since only one frequency is solved at a time, it may take a long time to calculate the parameters for wideband devices. In this paper, a fast frequency sweep based on the Asymptotic Wave Expansion (AWE) method is developed and applied to generalized mixed circuit-EM problems. The AWE method, which was originally developed for lumped-load circuit simulations, has recently been shown to be effective at quasi-static and low frequency full-wave simulations. Here it is applied to a full-wave MoM solver, capable of solving for metals, dielectrics, and coupled circuit-EM problems.
Determination of the equilibrium constant of C60 fullerene binding with drug molecules.
Mosunov, Andrei A; Pashkova, Irina S; Sidorova, Maria; Pronozin, Artem; Lantushenko, Anastasia O; Prylutskyy, Yuriy I; Parkinson, John A; Evstigneev, Maxim P
2017-03-01
We report a new analytical method that allows the determination of the magnitude of the equilibrium constant of complexation, K_h, of small molecules to C60 fullerene in aqueous solution. The developed method is based on the up-scaled model of C60 fullerene-ligand complexation and contains the full set of equations needed to fit titration datasets arising from different experimental methods (UV-Vis spectroscopy, 1H NMR spectroscopy, diffusion ordered NMR spectroscopy, DLS). The up-scaled model takes into consideration the specificity of C60 fullerene aggregation in aqueous solution and allows the highly dispersed nature of C60 fullerene cluster distribution to be accounted for. It also takes into consideration the complexity of fullerene-ligand dynamic equilibrium in solution, formed by various types of self- and hetero-complexes. These features make the suggested method superior to standard Langmuir-type analysis, the approach used to date for obtaining quantitative information on ligand binding with different nanoparticles.
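A minimal sketch of the kind of titration fitting involved, assuming a simple 1:1 Langmuir-type isotherm rather than the paper's up-scaled aggregation model; all concentrations, signal values, and the true K are invented for illustration:

```python
import numpy as np

def binding_signal(C, K, s_free, s_bound):
    """Observed signal of a ligand titrated with binder concentration C,
    assuming a 1:1 isotherm with association constant K."""
    f = K * C / (1.0 + K * C)              # fraction of ligand bound
    return s_free + (s_bound - s_free) * f

# Synthetic titration data with a known K (hypothetical values).
rng = np.random.default_rng(1)
C = np.linspace(0, 2e-4, 20)               # binder concentration, M
K_true = 2.0e4                             # M^-1
data = binding_signal(C, K_true, 1.0, 0.2) + 0.005 * rng.standard_normal(C.size)

# Fit K by least squares over a logarithmic grid.
K_grid = np.logspace(2, 7, 2000)
sse = [np.sum((binding_signal(C, K, 1.0, 0.2) - data) ** 2) for K in K_grid]
K_fit = K_grid[int(np.argmin(sse))]
```

The paper's method generalizes this fitting step to account for fullerene cluster distributions and self-/hetero-complex equilibria, which a plain Langmuir fit ignores.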
Theory and applications of structured light single pixel imaging
NASA Astrophysics Data System (ADS)
Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.
2018-02-01
Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and decreased acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential of single-pixel imaging.
Method for estimating infection route and speed of influenza.
Ijuin, Kazushige; Matsuda, Rieko; Hayashi, Yuzuru
2006-03-01
This paper puts forward a method for estimating the infection route and speed of influenza from the daily variations in the amount of influenza formulations supplied at distant city pharmacies. The cross-correlation function between the time series of drug sales at two pharmacies indicates how many days one pharmacy lags behind the other. Comparing the time lags between pharmacies leads to an estimate of the infection route of influenza, and taking into account the distance between the pharmacies, we can calculate the infection speed. Three pharmacies located in Tokyo and its vicinity (Saitama and Kanagawa) are taken as an example. The thrust of this paper is to introduce a new strategy that takes full advantage of the information each pharmacy has in its possession.
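The lag-and-speed estimation described above can be sketched in a few lines of NumPy; the sales series, the 4-day shift, and the 30 km separation are all hypothetical:

```python
import numpy as np

def lag_days(sales_a, sales_b):
    """Days by which series b lags behind series a, estimated from the
    peak of their cross-correlation (index len(a)-1 is zero lag)."""
    a = sales_a - sales_a.mean()
    b = sales_b - sales_b.mean()
    xc = np.correlate(b, a, mode="full")
    return int(np.argmax(xc)) - (len(a) - 1)

# Toy daily drug-sales curves: pharmacy B sees the same epidemic wave
# 4 days after pharmacy A (hypothetical data).
t = np.arange(60)
wave = np.exp(-0.5 * ((t - 25) / 5.0) ** 2)
sales_a = wave
sales_b = np.roll(wave, 4)

lag = lag_days(sales_a, sales_b)
speed_km_per_day = 30.0 / lag    # if the two pharmacies are 30 km apart
```

With more than two pharmacies, the pairwise lags order the sites along the likely infection route, and distance divided by lag gives the propagation speed for each leg.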
Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method
NASA Astrophysics Data System (ADS)
Sun, Yong; Meng, Zhaohai; Li, Fengting
2018-03-01
Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can now be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete data processing, so a faster iterative algorithm is necessary to improve the utilization of FTG data. Here, the inversion is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm has clear advantages. Simulated and field FTG data were used to show the application value of this new fast inversion method.
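As a sketch of a non-monotone gradient method, the following applies Barzilai-Borwein step lengths (one common non-monotone choice; the paper's exact scheme may differ) to a Tikhonov-regularized linear inverse problem, with a random matrix standing in for the FTG sensitivity kernel:

```python
import numpy as np

def bb_gradient_descent(G, d, lam, iters=200):
    """Non-monotone gradient descent with Barzilai-Borwein steps for
        min_m ||G m - d||^2 + lam ||m||^2.
    The BB step does not force the objective to decrease at every
    iteration, which typically accelerates convergence markedly."""
    m = np.zeros(G.shape[1])
    g = 2 * (G.T @ (G @ m - d) + lam * m)
    step = 1e-4                          # small fixed first step
    for _ in range(iters):
        m_new = m - step * g
        g_new = 2 * (G.T @ (G @ m_new - d) + lam * m_new)
        s, yv = m_new - m, g_new - g
        step = (s @ s) / (s @ yv)        # BB1 step; may overshoot
        m, g = m_new, g_new
    return m

rng = np.random.default_rng(3)
G = rng.standard_normal((120, 40))       # stand-in for a sensitivity matrix
m_true = rng.standard_normal(40)
d = G @ m_true
m_rec = bb_gradient_descent(G, d, lam=1e-6)
err = np.linalg.norm(m_rec - m_true) / np.linalg.norm(m_true)
```

The appeal for large FTG problems is that each iteration costs only matrix-vector products, like steepest descent, while the BB step length recovers much of the speed of conjugate-gradient-type methods.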
Two-Point Turbulence Closure Applied to Variable Resolution Modeling
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Rubinstein, Robert
2011-01-01
Variable resolution methods have become frontline CFD tools, but in order to take full advantage of this promising new technology, more formal theoretical development is desirable. Two general classes of variable resolution methods can be identified: hybrid or zonal methods in which RANS and LES models are solved in different flow regions, and bridging or seamless models which interpolate smoothly between RANS and LES. This paper considers the formulation of bridging methods using methods of two-point closure theory. The fundamental problem is to derive a subgrid two-equation model. We compare and reconcile two different approaches to this goal: the Partially Integrated Transport Model, and the Partially Averaged Navier-Stokes method.
Exact Fan-Beam Reconstruction With Arbitrary Object Translations and Truncated Projections
NASA Astrophysics Data System (ADS)
Hoskovec, Jan; Clackdoyle, Rolf; Desbat, Laurent; Rit, Simon
2016-06-01
This article proposes a new method for reconstructing two-dimensional (2D) computed tomography (CT) images from truncated and motion contaminated sinograms. The type of motion considered here is a sequence of rigid translations which are assumed to be known. The algorithm first identifies the sufficiency of angular coverage in each 2D point of the CT image to calculate the Hilbert transform from the local “virtual” trajectory which accounts for the motion and the truncation. By taking advantage of data redundancy in the full circular scan, our method expands the reconstructible region beyond the one obtained with chord-based methods. The proposed direct reconstruction algorithm is based on the Differentiated Back-Projection with Hilbert filtering (DBP-H). The motion is taken into account during backprojection which is the first step of our direct reconstruction, before taking the derivatives and inverting the finite Hilbert transform. The algorithm has been tested in a proof-of-concept study on Shepp-Logan phantom simulations with several motion cases and detector sizes.
GPU acceleration of particle-in-cell methods
NASA Astrophysics Data System (ADS)
Cowan, Benjamin; Cary, John; Meiser, Dominic
2015-11-01
Graphics processing units (GPUs) have become key components in many supercomputing systems, as they can provide more computations relative to their cost and power consumption than conventional processors. However, to take full advantage of this capability, they require a strict programming model which involves single-instruction multiple-data execution as well as significant constraints on memory accesses. To bring the full power of GPUs to bear on plasma physics problems, we must adapt the computational methods to this new programming model. We have developed a GPU implementation of the particle-in-cell (PIC) method, one of the mainstays of plasma physics simulation. This framework is highly general and enables advanced PIC features such as high order particles and absorbing boundary conditions. The main elements of the PIC loop, including field interpolation and particle deposition, are designed to optimize memory access. We describe the performance of these algorithms and discuss some of the methods used. Work supported by DARPA contract W31P4Q-15-C-0061 (SBIR).
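The particle deposition step mentioned above can be illustrated with a serial cloud-in-cell sketch in NumPy; on a GPU this scatter operation is precisely what requires careful memory-access design (atomics or particle sorting). Grid size, charges, and positions here are arbitrary:

```python
import numpy as np

def deposit_charge(x, q, nx, dx):
    """Cloud-in-cell (linear) charge deposition onto a periodic 1-D grid:
    each particle's charge is shared between its two nearest grid points."""
    rho = np.zeros(nx)
    cell = np.floor(x / dx).astype(int)
    w = x / dx - cell                      # fractional position in the cell
    np.add.at(rho, cell % nx, q * (1.0 - w))
    np.add.at(rho, (cell + 1) % nx, q * w)
    return rho / dx                        # charge density

rng = np.random.default_rng(4)
nx, dx = 64, 0.1
x = rng.uniform(0, nx * dx, 10_000)        # particle positions
q = np.full(x.size, 1e-3)                  # equal macro-particle charges
rho = deposit_charge(x, q, nx, dx)
```

On a GPU, many particles write to the same grid cell concurrently, so the two `np.add.at` scatters become atomic additions or a sort-then-reduce pass, which is exactly the memory-access optimization the abstract refers to.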
NASA Astrophysics Data System (ADS)
Petersen, Ø. W.; Øiseth, O.; Nord, T. S.; Lourens, E.
2018-07-01
Numerical predictions of the dynamic response of complex structures are often uncertain due to uncertainties inherited from the assumed load effects. Inverse methods can estimate the true dynamic response of a structure through system inversion, combining measured acceleration data with a system model. This article presents a case study of full-field dynamic response estimation of a long-span floating bridge: the Bergøysund Bridge in Norway. This bridge is instrumented with a network of 14 triaxial accelerometers. The system model consists of 27 vibration modes with natural frequencies below 2 Hz, obtained from a tuned finite element model that takes the fluid-structure interaction with the surrounding water into account. Two methods, a joint input-state estimation algorithm and a dual Kalman filter, are applied to estimate the full-field response of the bridge. The results demonstrate that the displacements and the accelerations can be estimated at unmeasured locations with reasonable accuracy when the wave loads are the dominant source of excitation.
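As a toy illustration of Kalman-type response estimation (a plain linear Kalman filter, the building block of the dual Kalman filter, not the joint input-state algorithm itself), consider smoothing a noisy displacement-like signal with a constant-velocity model; all signal and noise parameters are invented:

```python
import numpy as np

def kalman_filter(zs, dt, q, r):
    """Textbook linear Kalman filter with state [position, velocity] and
    scalar position measurements zs."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P, est = np.zeros(2), np.eye(2), []
    for z in zs:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)

rng = np.random.default_rng(5)
dt, n = 0.01, 3000
t = dt * np.arange(n)
true_pos = 0.5 * np.sin(2 * np.pi * 0.1 * t)      # slow structural motion
zs = true_pos + 0.05 * rng.standard_normal(n)     # noisy sensor channel
est = kalman_filter(zs, dt, q=1.0, r=0.05**2)
rms_raw = np.sqrt(np.mean((zs - true_pos) ** 2))
rms_kf = np.sqrt(np.mean((est - true_pos) ** 2))
```

In the bridge application, the state instead stacks modal coordinates of the 27 identified modes, the measurements are the 14 triaxial accelerometer channels, and the estimated modal states are mapped back to displacements at unmeasured locations.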
ERIC Educational Resources Information Center
Crawley, Sara L.
2008-01-01
For this essay, the author takes as an organizing premise Jodi O'Brien and Judith A. Howard's notion of responsible authority--that "teaching is a value-based activity" in which educators should be striving to engage students in academic pursuits in order to create a moral citizenry. That is, educators need to acknowledge that they wield the power…
Efficient sidelobe ASK based dual-function radar-communications
NASA Astrophysics Data System (ADS)
Hassanien, Aboulnasr; Amin, Moeness G.; Zhang, Yimin D.; Ahmad, Fauzia
2016-05-01
Recently, dual-function radar-communications (DFRC) has been proposed as a means to mitigate the spectrum congestion problem. Existing amplitude-shift keying (ASK) methods for information embedding do not take full advantage of the highest permissible sidelobe level. In this paper, a new ASK-based signaling strategy for enhancing the signal-to-noise ratio (SNR) at the communication receiver is proposed. The proposed method employs one reference waveform and simultaneously transmits a number of orthogonal waveforms equal to the number of 1's in the binary sequence being embedded. A 3 dB SNR gain is achieved using the proposed method as compared to existing sidelobe ASK methods. The effectiveness of the proposed information embedding strategy is verified using simulation examples.
Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.
Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile
2016-01-01
This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engine to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
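The minimum-words search described above can be sketched with a naive word-overlap score standing in for a real full-text engine; the three-code dictionary is hypothetical:

```python
from itertools import combinations

def words_needed(target, codes):
    """Smallest combination of words from the target description that
    ranks the target code strictly first under a word-overlap score."""
    target_words = codes[target].lower().split()
    for k in range(1, len(target_words) + 1):
        for query in combinations(target_words, k):
            scores = {c: len(set(query) & set(text.lower().split()))
                      for c, text in codes.items()}
            if scores[target] == k and \
               all(scores[c] < k for c in scores if c != target):
                return query
    return tuple(target_words)

# Tiny hypothetical ICD-10-like dictionary.
codes = {
    "J10.1": "influenza with respiratory manifestations",
    "J11.1": "influenza with other manifestations",
    "J45.0": "predominantly allergic asthma",
}
q = words_needed("J45.0", codes)
```

A real engine replaces the overlap score with TF-IDF-style relevance ranking, but the evaluation loop, growing the query word set until the desired code wins outright, is the same.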
The design of multirate digital control systems
NASA Technical Reports Server (NTRS)
Berg, M. C.
1986-01-01
The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems, and a series of compensator pairs is synthesized for each. The three synthesis methods are then compared in the context of the two design problems.
Vasil'ev, G F
2013-01-01
Owing to methodological shortcomings, control theory has yet to realize its potential for the analysis of biological systems. To obtain the full benefit of the method, a parametric model of control is proposed for use alongside the algorithmic model of control (to date the only model used in control theory). The reasoning behind this is explained. The suggested approach makes it possible to use the full potential of modern control theory for the analysis of biological systems. The cybernetic approach is illustrated by taking the rise of glucose concentration in blood as an example system.
A New Platform for Investigating In-Situ NIR Reflectance in Snow
NASA Astrophysics Data System (ADS)
Johnson, M.; Taubenheim, J. R. L.; Stevenson, R.; Eldred, D.
2017-12-01
In-situ near infrared (NIR) reflectance measurements of the snowpack have been shown to correlate with valuable snowpack properties. To date, many studies take these measurements by digging a pit and setting up a NIR camera to take images of the pit wall. This setup is cumbersome, making it challenging to investigate properties such as spatial variability. Over the course of three winters, a new device has been developed that mitigates some of the drawbacks of NIR open-pit photography. This new instrument is a NIR profiler capable of taking NIR reflectance measurements without digging a pit, with most measurements taking less than 30 seconds to retrieve data. The latest prototype is built into a ski pole and automatically transfers data wirelessly to the user's smartphone. During the 2016-2017 winter, the device was used by 37 different users, resulting in over 4000 measurements in the Western United States and demonstrating a dramatic reduction in time to data when compared to other methods. Presented here are some initial findings from a full winter of using the ski pole version of this device.
A "Stepping Stone" Approach for Obtaining Quantum Free Energies of Hydration.
Sampson, Chris; Fox, Thomas; Tautermann, Christofer S; Woods, Christopher; Skylaris, Chris-Kriton
2015-06-11
We present a method which uses DFT (quantum, QM) calculations to improve free energies of binding computed with classical force fields (classical, MM). To overcome the incomplete overlap of configurational spaces between MM and QM, we use a hybrid Monte Carlo approach to quickly generate correct ensembles of structures of intermediate states between a MM and a QM/MM description, hence taking into account a great fraction of the electronic polarization of the quantum system, while being able to use thermodynamic integration to compute the free energy of transition between the MM and QM/MM. Then, we perform a final transition from QM/MM to full QM using a one-step free energy perturbation approach. By using QM/MM as a stepping stone toward the full QM description, we find very small convergence errors (<1 kJ/mol) in the transition to full QM. We apply this method to compute hydration free energies, and we obtain consistent improvements over the MM values for all molecules used in this study. This approach requires large-scale DFT calculations, as the full QM systems involved the ligands and all waters in their simulation cells, so the linear-scaling DFT code ONETEP was used for these calculations.
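The final one-step transition is a standard Zwanzig free energy perturbation. A sketch with a synthetic Gaussian QM/MM-to-QM energy gap, for which the exact answer is known analytically, looks like this; all numerical values are illustrative:

```python
import numpy as np

kT = 2.479  # kJ/mol at roughly 298 K

def fep_one_step(dU):
    """Zwanzig one-step free energy perturbation:
    dF = -kT ln < exp(-dU/kT) >, averaged over the reference ensemble."""
    return -kT * np.log(np.mean(np.exp(-dU / kT)))

# Synthetic QM/MM -> QM energy gaps: small and narrow, as expected when
# QM/MM configurations already resemble full QM (hypothetical, kJ/mol).
rng = np.random.default_rng(6)
mu, sigma = 1.0, 0.5
dU = rng.normal(mu, sigma, 50_000)

dF = fep_one_step(dU)
dF_exact = mu - sigma**2 / (2 * kT)   # analytic result for a Gaussian gap
```

The "stepping stone" logic shows up in sigma: one-step FEP converges only when the energy-gap distribution is narrow relative to kT, which is why the paper inserts QM/MM between MM and full QM instead of perturbing directly.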
Thermal Insulation Test Apparatuses
NASA Technical Reports Server (NTRS)
Berman, Brion
2005-01-01
The National Aeronautics and Space Administration (NASA) seeks to license its Thermal Insulation Test Apparatuses. Designed by the Cryogenics Test Laboratory at the John F. Kennedy Space Center (KSC) in Florida, these patented technologies (U.S. Patent Numbers: Cryostat 1 - 6,742,926, Cryostat 2 - 6,487,866, and Cryostat 4 - 6,824,306) allow manufacturers to fabricate and test cryogenic insulation at their production and/or laboratory facilities. These new inventions allow for the thermal performance characterization of cylindrical and flat specimens (e.g., bulk-fill, flat-panel, multilayer, or continuously rolled) over the full range of pressures, from high vacuum to no vacuum, and over the full range of temperatures from 77K to 300K. In today's world, efficient, low-maintenance, low-temperature refrigeration is taking a more significant role, from the food industry, transportation, energy, and medical applications to the Space Shuttle. Most countries (including the United States) have laws requiring commercially available insulation materials to be tested and rated by an accepted methodology. The new Cryostat methods go beyond the formal capabilities of the ASTM methods to provide testing for real systems, including full-temperature differences plus full-range vacuum conditions.
Novel Hyperspectral Anomaly Detection Methods Based on Unsupervised Nearest Regularized Subspace
NASA Astrophysics Data System (ADS)
Hou, Z.; Chen, Y.; Tan, K.; Du, P.
2018-04-01
Anomaly detection has been of great interest in hyperspectral imagery analysis. Most conventional anomaly detectors merely take advantage of spectral and spatial information within neighboring pixels. In this paper, two methods, the Unsupervised Nearest Regularized Subspace-based with Outlier Removal Anomaly Detector (UNRSORAD) and the Local Summation UNRSORAD (LSUNRSORAD), are proposed. Both are based on the concept that each background pixel can be approximately represented by its spatial neighborhood, while anomalies cannot. Using a dual window, an approximation of each test pixel is obtained as a linear combination of the surrounding data. Because outliers in the dual window degrade detection accuracy, the proposed detectors remove outlier pixels that differ significantly from the majority of pixels. In order to make full use of the local spatial distribution information in the pixels neighboring the pixel under test, we adopt a local-summation dual-window sliding strategy. The residual image is constituted by subtracting the predicted background from the original hyperspectral imagery, and anomalies can be detected in the residual image. Experimental results show that the proposed methods greatly improve detection accuracy compared with other traditional detection methods.
DOMe: A deduplication optimization method for the NewSQL database backups
Wang, Longxiang; Zhu, Zhengdong; Zhang, Xingjun; Wang, Yinfeng
2017-01-01
Reducing duplicated data in database backups is an important application scenario for data deduplication technology. NewSQL is an emerging class of database system that is now being used more and more widely. NewSQL systems need to improve data reliability by periodically backing up in-memory data, resulting in a lot of duplicated data. The traditional deduplication method is not optimized for the NewSQL server system and cannot take full advantage of hardware resources to optimize deduplication performance. Recent research has pointed out that future NewSQL servers will have thousands of CPU cores, large DRAM, and huge NVRAM. Therefore, how to utilize these hardware resources to optimize the performance of data deduplication is an important issue. To solve this problem, we propose a deduplication optimization method (DOMe) for NewSQL system backup. To take advantage of the large number of CPU cores in the NewSQL server, DOMe parallelizes the deduplication method based on the fork-join framework. The fingerprint index, the key data structure in the deduplication process, is implemented as a pure in-memory hash table, which makes full use of the large DRAM in the NewSQL system and eliminates the performance bottleneck of the fingerprint index in traditional deduplication methods. H-store is used as a typical NewSQL database system to implement the DOMe method, and DOMe is experimentally analyzed using two representative backup datasets. The experimental results show that: 1) DOMe can reduce the duplicated NewSQL backup data; 2) DOMe significantly improves deduplication performance by parallelizing the CDC algorithm: when the theoretical speedup ratio of the server is 20.8, DOMe achieves a speedup of up to 18; 3) DOMe improves deduplication throughput by 1.5 times through the pure in-memory index optimization. PMID:29049307
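A minimal sketch of the chunk-and-fingerprint core of deduplication, using a simple additive rolling checksum for content-defined chunking (real systems use stronger rolling hashes such as Rabin or Gear fingerprints, and DOMe additionally parallelizes this step across cores); data sizes and parameters are arbitrary:

```python
import hashlib
import random

def chunks_cdc(data, mask=0x3F, window=16, min_size=32):
    """Content-defined chunking: cut wherever a checksum of the last
    `window` bytes matches a boundary pattern, subject to a minimum
    chunk size. Boundaries depend on content, not position, so shared
    regions of two backups chunk identically."""
    out, start = [], 0
    for i in range(min_size, len(data)):
        if i - start >= min_size and sum(data[i - window:i]) & mask == mask:
            out.append(data[start:i])
            start = i
    out.append(data[start:])
    return out

def dedup_store(data):
    """Fingerprint each chunk (SHA-1) and keep one copy per fingerprint,
    mimicking an in-memory fingerprint index."""
    index = {}
    for c in chunks_cdc(data):
        index.setdefault(hashlib.sha1(c).hexdigest(), c)
    return index

# A 'backup' containing the same 16 KiB region twice, plus a small edit:
# the shared chunks should be stored only once.
rng = random.Random(9)
base = rng.randbytes(16 * 1024)
backup = base + b"changed tail" + base
index = dedup_store(backup)
stored = sum(len(c) for c in index.values())
ratio = len(backup) / stored
```

The fingerprint index here is a plain Python dict; DOMe's point is that on a NewSQL server this index can live entirely in the large DRAM, and the per-chunk work can be split across thousands of cores with fork-join.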
Full waveform inversion in the frequency domain using classified time-domain residual wavefields
NASA Astrophysics Data System (ADS)
Son, Woohyun; Koo, Nam-Hyung; Kim, Byoung-Yeop; Lee, Ho-Young; Joo, Yonghwan
2017-04-01
We perform acoustic full waveform inversion in the frequency domain using residual wavefields that have been separated in the time domain. We sort the residual wavefields in the time domain in order of absolute amplitude and then separate them into several groups. To analyze the characteristics of the residual wavefields, we compare the residual wavefields of the conventional method with those of our residual separation method. From the residual analysis, the amplitude spectrum obtained from the trace before separation appears to have little energy at the lower frequency bands, whereas the amplitude spectrum obtained with our strategy is regularized by the separation process, which means that the low-frequency components are emphasized. Therefore, our method helps to emphasize the low-frequency components of the residual wavefields. We then generate the frequency-domain residual wavefields by taking the Fourier transform of the separated time-domain residual wavefields. With these wavefields, we perform gradient-based full waveform inversion in the frequency domain using the back-propagation technique. Through a comparison of gradient directions, we confirm that our separation method can better describe the sub-salt image than the conventional approach. The proposed method is tested on the SEG/EAGE salt-dome model. The inversion results show that our algorithm performs better than the conventional gradient-based waveform inversion in the frequency domain, especially for the deeper parts of the velocity model.
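The sort-and-group separation step can be sketched as follows; the grouping rule (equal-count groups by absolute amplitude), the number of groups, and the toy residual trace are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def separate_residual(residual, n_groups=3):
    """Split a time-domain residual trace into groups by absolute
    amplitude: group 0 holds the largest samples, the last group the
    smallest. Each group keeps the original time axis, with all other
    samples zeroed, so the groups sum back to the original trace."""
    order = np.argsort(np.abs(residual))[::-1]     # largest first
    groups = np.zeros((n_groups, residual.size))
    for g, idx in enumerate(np.array_split(order, n_groups)):
        groups[g, idx] = residual[idx]
    return groups

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 512, endpoint=False)
residual = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)

groups = separate_residual(residual)
spectra = np.abs(np.fft.rfft(groups, axis=1))   # per-group amplitude spectra
```

Each group is then Fourier transformed and back-propagated separately, which is how the separation reshapes the frequency content entering the gradient computation.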
NASA Astrophysics Data System (ADS)
Sjöberg, Daniel; Larsson, Christer
2015-06-01
We present a method aimed at reducing uncertainties and instabilities when characterizing materials in waveguide setups. The method is based on measuring the S parameters for three different orientations of a rectangular sample block in a rectangular waveguide. The corresponding geometries are modeled in a commercial full-wave simulation program, taking any material parameters as input. The material parameters of the sample are found by minimizing the squared distance between measured and calculated S parameters. The information added by the different sample orientations is quantified using the Cramér-Rao lower bound. The flexibility of the method allows the determination of material parameters of an arbitrarily shaped sample that fits in the waveguide.
An automated and universal method for measuring mean grain size from a digital image of sediment
Buscombe, Daniel D.; Rubin, David M.; Warrick, Jonathan A.
2010-01-01
Existing methods for estimating mean grain size of sediment in an image require either complicated sequences of image processing (filtering, edge detection, segmentation, etc.) or statistical procedures involving calibration. We present a new approach which uses Fourier methods to calculate grain size directly from the image without requiring calibration. Based on analysis of over 450 images, we found the accuracy to be within approximately 16% across the full range from silt to pebbles. Accuracy is comparable to, or better than, existing digital methods. The new method, in conjunction with recent advances in technology for taking appropriate images of sediment in a range of natural environments, promises to revolutionize the logistics and speed at which grain-size data may be obtained from the field.
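A sketch of the Fourier idea, assuming the simplest possible estimator (peak of the radially averaged power spectrum; the paper's actual statistic may differ) applied to a synthetic image with a known 16-pixel scale:

```python
import numpy as np

def dominant_wavelength(img):
    """Estimate the dominant spatial wavelength (a proxy for mean grain
    size) from the peak of the radially averaged 2-D power spectrum."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())) ** 2)
    ny, nx = img.shape
    y, x = np.indices(img.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)   # radius in bins
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=f.ravel()) / np.maximum(counts, 1)
    k = int(np.argmax(radial[1:])) + 1    # skip the DC bin
    return nx / k                         # wavelength in pixels

# Synthetic texture with a known characteristic scale of 16 pixels.
n = 128
_, xx = np.mgrid[0:n, 0:n]
img = np.sin(2 * np.pi * xx / 16)
wl = dominant_wavelength(img)
```

No calibration enters: the spectral peak location is read directly off the image, which is the property that distinguishes this family of methods from the statistical, calibration-based alternatives mentioned above.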
Improved finite difference schemes for transonic potential calculations
NASA Technical Reports Server (NTRS)
Hafez, M.; Osher, S.; Whitlow, W., Jr.
1984-01-01
Engquist and Osher (1980) have introduced a finite difference scheme for solving the transonic small disturbance equation, taking into account cases in which only compression shocks are admitted. Osher et al. (1983) studied a class of schemes for the full potential equation. It is proved that these schemes satisfy a new discrete 'entropy inequality' which rules out expansion shocks. However, the conducted analysis is restricted to steady two-dimensional flows. The present investigation is concerned with the adoption of a heuristic approach. The full potential equation in conservation form is solved with the aid of a modified artificial density method, based on flux biasing. It is shown that, with the current scheme, expansion shocks are not possible.
NASA Astrophysics Data System (ADS)
Guo, Y.; Liu, J.; Mauzerall, D. L.; Emmons, L. K.; Horowitz, L. W.; Fan, S.; Li, X.; Tao, S.
2014-12-01
Long-range transport of ozone is of great concern, yet the source-receptor relationships derived previously depend strongly on the source attribution techniques used. Here we describe a new tagged ozone mechanism (full-tagged), the design of which seeks to take into account the combined effects of emissions of ozone precursors, CO, NOx and VOCs, from a particular source, while keeping the current state of chemical equilibrium unchanged. We label emissions from the target source (A) and background (B). When two species from A and B sources react with each other, half of the resulting products are labeled A, and half B. Thus the impact of a given source on downwind regions is recorded through tagged chemistry. We then incorporate this mechanism into the Model for Ozone and Related chemical Tracers (MOZART-4) to examine the impact of anthropogenic emissions within North America, Europe, East Asia and South Asia on ground-level ozone downwind of source regions during 1999-2000. We compare our results with two previously used methods -- the sensitivity and tagged-N approaches. The ozone attributed to a given source by the full-tagged method is more widely distributed spatially, but has weaker seasonal variability than that estimated by the other methods. On a seasonal basis, for most source/receptor pairs, the full-tagged method estimates the largest amount of tagged ozone, followed by the sensitivity and tagged-N methods. In terms of trans-Pacific influence of ozone pollution, the full-tagged method estimates the strongest impact of East Asian (EA) emissions on the western U.S. (WUS) in MAM and JJA (~3 ppbv), which is substantially different in magnitude and seasonality from tagged-N and sensitivity studies. 
This difference results from the full-tagged method accounting for the maintenance of peroxy radicals (e.g., CH3O2, CH3CO3, and HO2), in addition to NOy, as effective reservoirs of EA source impact across the Pacific, allowing for a significant contribution to ozone formation over WUS (particularly in summer). Thus, the full-tagged method, with its clear discrimination of source and background contributions on a per-reaction basis, provides unique insights into the critical role of VOCs (and additional reactive nitrogen species) in determining the nonlinear inter-continental influence of ozone pollution.
Diver Operated Tools and Applications for Underwater Construction
1987-01-01
subsurface construction. The list is by no means exhaustive, and new methods and requirements continue to evolve. ...length suit that permitted the exhaust air to escape under the hem. By 1840, Siebe made a full-length waterproof suit and added an exhaust valve to... The open-circuit scuba takes air from the supply tank; the air is inhaled by the diver and then exhausted directly to the surrounding water. The basic
NASA Astrophysics Data System (ADS)
Abbas, Mahmoud I.; Badawi, M. S.; Ruskov, I. N.; El-Khatib, A. M.; Grozdanov, D. N.; Thabet, A. A.; Kopatch, Yu. N.; Gouda, M. M.; Skoy, V. R.
2015-01-01
Gamma-ray detector systems are important instruments in a broad range of science, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different forms (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The algorithms and the calculations of the effective solid angle ratios for a point (isotropically irradiating) gamma-source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma-rays in the detector's material, end-cap, and the other materials in-between the gamma-source and the detector, are considered the core of this ET method. The full-energy peak efficiency values calculated by the NAM are found to be in good agreement with the measured experimental data.
Optimization of joint energy micro-grid with cold storage
NASA Astrophysics Data System (ADS)
Xu, Bin; Luo, Simin; Tian, Yan; Chen, Xianda; Xiong, Botao; Zhou, Bowen
2018-02-01
To accommodate distributed photovoltaic (PV) curtailment, to make full use of the joint energy micro-grid with cold storage, and to reduce high operating costs, the economic dispatch of the joint energy micro-grid load is particularly important. Considering the different prices during peak and valley periods, an optimization model is established which takes the minimum production costs and PV curtailment fluctuations as the objectives. The linear weighted sum method and a genetic-taboo Particle Swarm Optimization (PSO) algorithm are used to solve the optimization model and obtain the optimal power supply output. Taking the garlic market in Henan as an example, the simulation results show that, considering distributed PV and time-of-use prices, the optimization strategies are able to reduce operating costs and accommodate PV power efficiently.
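As an illustration of the solution strategy, here is a plain particle swarm optimizer applied to a linear weighted sum of two toy objectives (a stand-in cost term and a fluctuation term); the genetic-taboo enhancements are omitted, and all functions, weights, and bounds are hypothetical:

```python
import numpy as np

def pso_minimize(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=8):
    """Plain particle swarm optimization of a scalar objective f over a
    box (a simplified stand-in for the genetic-taboo PSO variant)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, lo.size))      # particle positions
    v = np.zeros_like(x)                       # particle velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(f(g))

# Two toy objectives: operating cost and PV-curtailment fluctuation,
# scalarized with a linear weighted sum (all functions hypothetical).
cost = lambda p: np.sum((p - 2.0) ** 2)            # cheapest at p = 2
fluct = lambda p: np.sum((p[1:] - p[:-1]) ** 2)    # smoothest when flat
objective = lambda p: 0.7 * cost(p) + 0.3 * fluct(p)

lo, hi = np.zeros(6), 5.0 * np.ones(6)             # dispatch bounds
p_opt, f_opt = pso_minimize(objective, (lo, hi))
```

The weighted-sum scalarization turns the two objectives into one, so a single-objective swarm suffices; sweeping the weights would trace out different trade-offs between cost and curtailment smoothness.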
Quantum Dynamics of Solitons in Strongly Interacting Systems on Optical Lattices
NASA Astrophysics Data System (ADS)
Rubbo, Chester; Balakrishnan, Radha; Reinhardt, William; Satija, Indubala; Rey, Ana; Manmana, Salvatore
2012-06-01
We present results of the quantum dynamics of solitons in XXZ spin-1/2 systems which in general can be derived from a system of spinless fermions or hard-core bosons (HCB) with nearest neighbor interaction on a lattice. A mean-field treatment using spin-coherent states revealed analytic solutions of both bright and dark solitons [1]. We take these solutions and apply a full quantum evolution using the adaptive time-dependent density matrix renormalization group method (adaptive t-DMRG), which takes into account the effect of strong correlations. We use local spin observables, correlation functions, and entanglement entropies as measures for the stability of these soliton solutions over the simulation times. [1] R. Balakrishnan, I.I. Satija, and C.W. Clark, Phys. Rev. Lett. 103, 230403 (2009).
NASA Astrophysics Data System (ADS)
Delgado, Carlos; Cátedra, Manuel Felipe
2018-05-01
This work presents a technique that substantially relaxes the computational requirements of full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points that make significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results while requiring a fraction of the resources that a conventional analysis would use.
Prospect Theory and Interval-Valued Hesitant Set for Safety Evacuation Model
NASA Astrophysics Data System (ADS)
Kou, Meng; Lu, Na
2018-01-01
The study applies results from prospect theory and multi-attribute decision-making theory to the underground mine fire system, which is complex, uncertain and influenced by many factors, and takes full account of decision makers' emotional and intuitive psychological behavior to establish an intuitionistic fuzzy multi-attribute decision-making method based on prospect theory. The resulting model can explain decision makers' safety-evacuation behavior in the complex system of an underground mine fire, where the environment is uncertain, information is imperfect, and human psychological behavior and other factors intervene.
NASA Astrophysics Data System (ADS)
Zheng, Y.; Chen, J.
2018-06-01
Variable stiffness composite structures take full advantage of the design freedom of composites; the enlarged design space can make the structure's performance more favorable. Through optimal design of a variable-stiffness cylinder, its buckling capacity can be increased compared with a constant-stiffness counterpart. In this paper, variable-stiffness composite cylinders sustaining combined loadings are considered, and the optimization is conducted with a multi-objective optimization method. The results indicate that the loading capacity of the variable-stiffness cylinder is increased significantly compared with the constant-stiffness design, especially when an inhomogeneous loading is considered.
Uncertainty Modeling and Evaluation of CMM Task Oriented Measurement Based on SVCMM
NASA Astrophysics Data System (ADS)
Li, Hongli; Chen, Xiaohuai; Cheng, Yinbao; Liu, Houde; Wang, Hanbin; Cheng, Zhenying; Wang, Hongtao
2017-10-01
Due to the variety of measurement tasks and the complexity of coordinate measuring machine (CMM) errors, it is very difficult to evaluate the uncertainty of CMM measurement results reasonably, which has limited the application of CMMs. Task-oriented uncertainty evaluation has become a difficult problem to be solved. Taking dimensional measurement as an example, this paper puts forward a practical method for task-oriented uncertainty modeling and evaluation of CMM measurements (called the SVCMM method). The method makes full use of the CMM acceptance or reinspection report together with the Monte Carlo computer simulation method (MCM). An evaluation example is presented, and its results are assessed with both the traditional method given in the GUM and the proposed method. The SVCMM method is verified to be feasible and practical, helping CMM users conveniently complete measurement uncertainty evaluation through a single measurement cycle.
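The Monte Carlo propagation step at the core of such an evaluation can be sketched as below. The error components and their magnitudes are illustrative assumptions, not values taken from any real CMM acceptance report.

```python
# Hedged sketch of the Monte Carlo method (MCM) step: assumed CMM error
# components (probing error, scale error, temperature drift) are propagated
# through a simple length measurement, yielding a standard uncertainty and
# a 95 % coverage interval. All component magnitudes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
nominal = 100.0  # mm, nominal length of the measured feature

probing = rng.normal(0.0, 0.0008, n)                  # mm, Gaussian probing error
scale = nominal * rng.normal(0.0, 1.2e-6, n)          # mm, relative scale error
temp = nominal * 11.5e-6 * rng.uniform(-0.5, 0.5, n)  # mm, +/-0.5 K drift, steel CTE

measured = nominal + probing + scale + temp
u = measured.std(ddof=1)                              # standard uncertainty, mm
lo95, hi95 = np.percentile(measured, [2.5, 97.5])     # 95 % coverage interval
```

In a task-oriented evaluation, the sampled components would instead be parameterized from the machine's acceptance-test data for the specific measurement task.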
Vakalis, Stergios; Moustakas, Konstantinos; Loizidou, Maria
2018-06-01
Waste-to-energy plants have the peculiarity of being considered both energy-production and waste-destruction facilities, and this distinction is important for legislative reasons. The efficiency of waste-to-energy plants must be assessed objectively and consistently, independent of whether the focus is the production of energy, the destruction of waste or the recovery/upgrade of materials. With the introduction of polygeneration technologies, like gasification, the production of energy and the recovery/upgrade of materials are interconnected. The existing methodology for assessing the efficiency of waste-to-energy plants is the R1 formula, which does not take into consideration the full spectrum of operations that take place in such plants. This study introduces a novel methodology for assessing the efficiency of waste-to-energy plants, defined as the 3T method, which stands for 'trapezoidal thermodynamic technique'. The 3T method is an integrated approach that takes into consideration not only the production of energy but also the quality of the products. The value returned by the 3T method can be placed in a ternary diagram, from which a global efficiency map of waste-to-energy plants can be produced. Application of the 3T method showed that plants with high combined heat and power efficiency and high recovery of materials are favoured; these outcomes are in accordance with the cascade principle and with the high cogeneration standards set by the EU Energy Efficiency Directive.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... years to authorize the incidental, but not intentional, taking of three stocks of marine mammals listed..., shall for a period of up to three years allow the incidental taking of marine mammal species listed... full year post- Take Reduction Plan implementation (October 30, 1997)), through December 31, 2011. This...
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Deyst, J. J.; Crawford, B. S.
1975-01-01
The paper describes two self-test procedures applied to the problem of estimating the biases in accelerometers and gyroscopes on an inertial platform. The first technique is the weighted sum-squared residual (WSSR) test, with which accelerometer bias jumps are easily isolated but gyro bias jumps are difficult to isolate; the WSSR method does not take full advantage of the knowledge of system dynamics. The other technique is a multiple-hypothesis method developed by Buxbaum and Haddad (1969). It has the advantage of directly providing jump-isolation information, but suffers from computational problems. It might be possible to use the WSSR test to detect state jumps and then switch to the Buxbaum-Haddad method for jump isolation and estimate compensation.
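A WSSR-style detector can be sketched as follows. This is a generic illustration under strong assumptions (white filter innovations with known variance, a hand-picked window and false-alarm rate), not the paper's platform model.

```python
# Hedged sketch of a WSSR jump detector: the weighted sum of squared
# innovations over a sliding window is compared with a chi-square threshold;
# a bias jump inflates the residuals and trips the test. The filter itself
# is abstracted away -- white innovations with known variance are assumed.
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.1                    # assumed innovation standard deviation
window = 20                    # sliding-window length (samples)
resid = rng.normal(0.0, sigma, 200)
resid[120:] += 0.3             # injected bias jump at sample 120

threshold = 45.31              # chi-square 99.9 % point for 20 dof

wssr = np.array([np.sum(resid[i - window:i] ** 2) / sigma**2
                 for i in range(window, len(resid))])
alarms = np.nonzero(wssr > threshold)[0] + window   # sample indices of alarms
first_alarm = int(alarms[0])
```

The detection delay (first_alarm minus the true jump time) shrinks as the jump-to-noise ratio grows; isolation of which sensor jumped is exactly what the WSSR statistic alone cannot provide, per the abstract.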
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, with the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion, ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper, where the parallel efficiency and the scalability of the code will also be quantified.
A framework for quantifying net benefits of alternative prognostic models.
Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G
2012-01-30
New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.; Silberberg, Yaron; Kwak, Dochan (Technical Monitor)
1994-01-01
This paper will present results in computational nonlinear optics. An algorithm will be described that solves the full vector nonlinear Maxwell's equations exactly without the approximations that are currently made. Present methods solve a reduced scalar wave equation, namely the nonlinear Schrodinger equation, and neglect the optical carrier. Also, results will be shown of calculations of 2-D electromagnetic nonlinear waves computed by directly integrating in time the nonlinear vector Maxwell's equations. The results will include simulations of 'light bullet' like pulses. Here diffraction and dispersion will be counteracted by nonlinear effects. The time integration efficiently implements linear and nonlinear convolutions for the electric polarization, and can take into account such quantum effects as Kerr and Raman interactions. The present approach is robust and should permit modeling 2-D and 3-D optical soliton propagation, scattering, and switching directly from the full-vector Maxwell's equations.
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.; Silberberg, Yaron; Kwak, Dochan (Technical Monitor)
1995-01-01
This paper will present results in computational nonlinear optics. An algorithm will be described that solves the full vector nonlinear Maxwell's equations exactly without the approximations that are currently made. Present methods solve a reduced scalar wave equation, namely the nonlinear Schrodinger equation, and neglect the optical carrier. Also, results will be shown of calculations of 2-D electromagnetic nonlinear waves computed by directly integrating in time the nonlinear vector Maxwell's equations. The results will include simulations of 'light bullet' like pulses. Here diffraction and dispersion will be counteracted by nonlinear effects. The time integration efficiently implements linear and nonlinear convolutions for the electric polarization, and can take into account such quantum effects as Kerr and Raman interactions. The present approach is robust and should permit modeling 2-D and 3-D optical soliton propagation, scattering, and switching directly from the full-vector Maxwell's equations.
MKID digital readout tuning with deep learning
NASA Astrophysics Data System (ADS)
Dodkins, R.; Mahashabde, S.; O'Brien, K.; Thatte, N.; Fruitwala, N.; Walter, A. B.; Meeker, S. R.; Szypryt, P.; Mazin, B. A.
2018-04-01
Microwave Kinetic Inductance Detector (MKID) devices offer inherent spectral resolution, simultaneous readout of thousands of pixels, and photon-limited sensitivity at optical wavelengths. Before taking observations, the readout power and frequency of each pixel must be individually tuned, and if the equilibrium state of the pixels changes, the readout must be retuned. This process has previously been performed through manual inspection, and typically takes one hour per 500 resonators (20 h for a ten-kilopixel array). We present an algorithm based on a deep convolutional neural network (CNN) architecture to determine the optimal bias power for each resonator. The bias-point classifications from this CNN model, and those from alternative automated methods, are compared with those from human decisions, and the accuracy of each method is assessed. On a test feed-line dataset, the CNN achieves an accuracy of 90% within 1 dB of the designated optimal value, equivalent to the accuracy of a randomly selected human operator and 10% better than the highest-scoring alternative automated method. On a full ten-kilopixel array, the CNN performs the characterization in a matter of minutes, paving the way for future mega-pixel MKID arrays.
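The "accuracy within 1 dB" figure of merit used above can be computed as below. The bias powers and picks here are synthetic stand-ins; the CNN itself is out of scope for this sketch.

```python
# Hedged sketch of the evaluation metric only: the fraction of resonators
# whose model-chosen bias power lands within 1 dB of the human-designated
# optimum. Powers and model errors are synthetic, illustrative values.
import numpy as np

rng = np.random.default_rng(3)
human_db = rng.uniform(-90.0, -70.0, 500)         # human-picked optimal power, dBm
model_db = human_db + rng.normal(0.0, 0.6, 500)   # model picks with toy errors

def accuracy_within(picks, reference, tol_db=1.0):
    """Fraction of picks within tol_db of the reference picks."""
    return float(np.mean(np.abs(picks - reference) <= tol_db))

acc = accuracy_within(model_db, human_db)
```

Since human operators also disagree with each other, a fair comparison scores each automated method and each held-out human against the same designated reference picks, as the abstract describes.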
Teaching history taking to medical students: a systematic review.
Keifenheim, Katharina E; Teufel, Martin; Ip, Julianne; Speiser, Natalie; Leehr, Elisabeth J; Zipfel, Stephan; Herrmann-Werner, Anne
2015-09-28
This paper is an up-to-date systematic review of educational interventions addressing history taking. The authors noted that, despite the plethora of specialized training programs designed to enhance students' interviewing skills, there had been no review of the literature assessing the quality of each published method of teaching history taking in undergraduate medical education based on evidence of the program's efficacy. The databases PubMed, PsycINFO, Google Scholar, OpenGrey, OpenDOAR and SSRN were searched using key words related to medical education and history taking. Articles that described an educational intervention to improve medical students' history-taking skills were selected and reviewed. Included studies had to evaluate learning progress. Study quality was assessed using the Medical Education Research Study Quality Instrument (MERSQI). Seventy-eight full-text articles were identified and reviewed; of these, 23 studies met the final inclusion criteria. Three studies applied an instructional approach using scripts, lectures, demonstrations and an online course. Seventeen studies applied a more experiential approach, implementing small-group workshops that included role-play, interviews with patients and feedback. Three studies applied a creative approach: two made use of improvisational theatre and one introduced a simulation using Lego® building blocks. Twenty-two studies reported an improvement in students' history-taking skills. The mean MERSQI score was 10.4 (range 6.5 to 14; SD = 2.65). These findings suggest that several different educational interventions are effective in teaching history-taking skills to medical students. Small-group workshops including role-play and interviews with real patients, followed by feedback and discussion, are the most widespread and best-investigated approaches. Feedback using videotape review was also reported as particularly instructive.
Students in the early preclinical stage might profit from approaches that help them focus on interview skills without being distracted by thinking about differential diagnoses or clinical management. The heterogeneity of outcome data and the varied methods of assessment strongly suggest the need for further research, as many studies did not meet basic methodological criteria. Randomized controlled trials using external assessment methods, standardized measurement tools and long-term outcome data are recommended to evaluate the efficacy of courses on history taking.
Pappas, Yannis; Wei, Igor; Car, Josip; Majeed, Azeem; Sheikh, Aziz
2011-12-07
Diabetes is a chronic illness characterised by insulin resistance or deficiency, resulting in elevated glycosylated haemoglobin A1c (HbA1c) levels. Because diabetes tends to run in families, the collection of family history data is an important tool for identifying people with elevated risk of type 2 diabetes. Traditionally, oral-and-written data collection methods are employed, but computer-assisted history taking systems (CAHTS) are increasingly used. Although CAHTS were first described in the 1960s, there remains uncertainty about the impact of these methods on family history taking, clinical care and patient outcomes such as health-related quality of life. To assess the effectiveness of computer-assisted versus oral-and-written family history taking for identifying people with elevated risk of developing type 2 diabetes mellitus, we searched The Cochrane Library (issue 6, 2011), MEDLINE (January 1985 to June 2011), EMBASE (January 1980 to June 2011) and CINAHL (January 1981 to June 2011). Reference lists of retrieved articles were also searched, and no limits were imposed on language or publication status. We included randomised controlled trials of computer-assisted versus oral-and-written history taking in adult participants (16 years and older). Two authors independently scanned the title and abstract of retrieved articles, and potentially relevant articles were investigated as full text. Studies that met the inclusion criteria were abstracted for relevant population and intervention characteristics, with any disagreements resolved by discussion or by a third party. Risk of bias was similarly assessed independently. We found no controlled trials of computer-assisted versus oral-and-written family history taking for identifying people with elevated risk of type 2 diabetes mellitus. There is a need to develop an evidence base to support the effective development and use of computer-assisted history taking systems in this area of practice.
In the absence of evidence on effectiveness, the implementation of computer-assisted family history taking for identifying people with elevated risk of type 2 diabetes may only rely on the clinicians' tacit knowledge, published monographs and viewpoint articles.
Myers, S R; Grady, J; Soranzo, C; Sanders, R; Green, C; Leigh, I M; Navsaria, H A
1997-01-01
The clinical take rates of cultured keratinocyte autografts are poor on a full-thickness wound unless a dermal bed is provided. Even under these circumstances two important problems are the time delay in growing autografts and the fragility of the grafts. A laser-perforated hyaluronic acid membrane delivery system allows grafting at early confluence without requiring dispase digestion to release grafts from their culture dishes. We designed this study to investigate the influence of this membrane on clinical take rates in an established porcine kerato-dermal grafting model. The study demonstrated a significant reduction in take as a result of halving the keratinocyte seeding density onto the membrane. The take rates, however, of grafts grown on the membrane at half or full conventional seeding density and transplanted to a dermal wound bed were comparable, if not better, than those of keratinocyte sheet grafts.
NASA Astrophysics Data System (ADS)
Zhu, Jing; Zhou, Zebo; Li, Yong; Rizos, Chris; Wang, Xingshu
2016-07-01
An improvement of the attitude difference method (ADM) for estimating deflections of the vertical (DOV) in real time is described in this paper. Without offline processing, the ADM estimates the DOV with limited accuracy because of its response delay. The proposed model-selection-based self-adaptive delay feedback (SDF) method takes the ADM results as a priori information, then uses fitting and extrapolation to estimate the DOV at the current epoch. An active-region selection factor F_th is used to take full advantage of both the Earth model EGM2008 and the SDF method under different DOV conditions. The factors that affect DOV estimation accuracy are analyzed and modeled. An external observation, specified by the velocity difference between the global navigation satellite system (GNSS) and the inertial navigation system (INS) with the DOV compensated, is used to select the optimal model. The response delay induced by the weak observability of an integrated INS/GNSS to violent DOV disturbances in the ADM is thereby compensated. The DOV estimation accuracy of the SDF method is improved by approximately 40% and 50% compared with that of EGM2008 and the ADM, respectively. With an increase in GNSS accuracy, the DOV estimation accuracy could improve further.
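The fit-and-extrapolate idea can be sketched as follows. The DOV signal, the delay, and the polynomial order are all synthetic assumptions; the point is only that extrapolating a fit to the delayed estimates recovers the current-epoch value better than the raw delayed estimate does.

```python
# Hedged sketch of the fitting-and-extrapolation step: a low-order polynomial
# is fitted to delayed (ADM-like) estimates, with the time axis shifted by the
# known delay, then evaluated at the current epoch to compensate the delay.
# The DOV signal and delay are synthetic.
import numpy as np

dt = 0.1
t = np.arange(0.0, 10.0, dt)
dov_true = 5.0 + 0.8 * t - 0.03 * t**2       # arcsec, smooth synthetic DOV
delay = 5                                     # assumed ADM response delay, samples

# Delayed estimate: current output reflects the DOV `delay` samples ago
adm = np.r_[np.full(delay, dov_true[0]), dov_true[:-delay]]

window = 30                                   # recent samples used for the fit
coeffs = np.polyfit(t[-window:] - delay * dt, adm[-window:], deg=2)
predicted_now = np.polyval(coeffs, t[-1])     # extrapolate to current epoch

raw_error = abs(adm[-1] - dov_true[-1])       # error of the delayed estimate
pred_error = abs(predicted_now - dov_true[-1])
```

In the SDF method this extrapolated value would then compete against the EGM2008 prediction, with the GNSS/INS velocity difference selecting whichever model fits better.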
NASA Astrophysics Data System (ADS)
Perton, Mathieu; Contreras-Zazueta, Marcial A.; Sánchez-Sesma, Francisco J.
2016-06-01
A new implementation of the indirect boundary element method allows simulation of elastic wave propagation in complex configurations made of embedded regions that are either homogeneous with irregular boundaries or flat-layered. In an older implementation, each layer of a flat-layered region would have been treated as a separate homogeneous region without taking the flat-boundary information into account. For both types of region, the scattered field results from fictitious sources positioned along the boundaries. For homogeneous regions, the fictitious sources emit as in a full space and the wave field is given by analytical Green's functions. For flat-layered regions, fictitious sources emit as in an unbounded flat-layered region and the wave field is given by Green's functions obtained from the discrete wavenumber (DWN) method. The new implementation thus allows the discretized boundaries to be shortened, but DWN Green's functions require much more computation time than the full-space Green's functions, so several optimization steps are implemented and discussed. Validations are presented for 2-D and 3-D problems; higher efficiency is achieved in 3-D.
Viscoacoustic anisotropic full waveform inversion
NASA Astrophysics Data System (ADS)
Qu, Yingming; Li, Zhenchun; Huang, Jianping; Li, Jinli
2017-01-01
A viscoacoustic vertical transverse isotropic (VTI) quasi-differential wave equation, which accounts for both the viscosity and the anisotropy of media, is proposed for wavefield simulation in this study. The finite difference method is used to solve the equations: the attenuation terms are solved in the wavenumber domain, and all remaining terms in the time-space domain. To stabilize the adjoint wavefield, robust regularization operators are applied to the wave equation to eliminate the high-frequency component of the numerical noise produced during backward propagation of the viscoacoustic wavefield. Based on these strategies, we derive the corresponding gradient formula and implement a viscoacoustic VTI full waveform inversion (FWI). Numerical tests verify that the proposed viscoacoustic VTI FWI produces accurate and stable inversion results for viscoacoustic VTI data sets. In addition, we test the method's sensitivity to velocity, Q, and the anisotropic parameters. Our results show that the sensitivity to velocity is much higher than that to Q and the anisotropic parameters. As such, the proposed method can produce acceptable inversion results as long as the Q and anisotropic parameters are within predefined thresholds.
50 CFR 218.11 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 218.11... Range Complex) § 218.11 Permissible methods of taking. (a) Under Letters of Authorization issued... following species, by the indicated method of take and the indicated number of times: (1) Level B Harassment...
ERIC Educational Resources Information Center
Journal of Chemical Education, 2000
2000-01-01
This activity takes students through the process of fermentation. It requires an entire month for the full reaction to take place. The reaction, catalyzed by bacterial enzymes, produces lactic acid from glucose. (SAH)
NASA Technical Reports Server (NTRS)
Lingbloom, Mike S.
2008-01-01
During redesign of the Space Shuttle reusable solid rocket motor (RSRM), NASA amended the contract with ATK Launch Systems (then Morton Thiokol Inc.) with Change Order 966 to implement a contamination control and cleanliness verification method. The change order required: (1) a quantitative inspection method; (2) a written record of actual contamination levels versus a known reject level; and (3) a method more sensitive than the existing visual and black-light inspections. Black-light inspection is useful only for contaminants that fluoresce near the 365 nm spectral line and is not useful for most silicones, which do not fluoresce strongly. Black-light inspection conducted by a qualified inspector under controlled light can detect Conoco HD-2 grease in gross amounts but is very subjective owing to operator sensitivity. Optically stimulated electron emission (OSEE), developed at the Materials and Process Laboratory at Marshall Space Flight Center (MSFC), was selected to satisfy Change Order 966. OSEE offers several important advantages over existing laboratory methods of similar sensitivity, such as spectroscopy and nonvolatile residue sampling: fast turnaround, real-time capability, and full-coverage inspection. Laboratory methods require sample gathering and in-lab analysis, which sometimes takes several days to yield results and is not practical in a production environment; in addition, these methods do not offer full-coverage inspection of the large components.
50 CFR 216.242 - Permissible methods of taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 10 2012-10-01 2012-10-01 false Permissible methods of taking. 216.242...) § 216.242 Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to §§ 216... species, by the identified method of take and the indicated number of times: (1) Level B Harassment (±10...
50 CFR 216.242 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 216.242...) § 216.242 Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to §§ 216... species, by the identified method of take and the indicated number of times: (1) Level B Harassment (±10...
50 CFR 218.181 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 218.181... Center Panama City Division § 218.181 Permissible methods of taking. (a) Under Letters of Authorization... activities identified in § 218.180(c) is limited to the following species, by the indicated method of take...
50 CFR 216.272 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 216.272... (SOCAL Range Complex) § 216.272 Permissible methods of taking. (a) Under Letters of Authorization issued... species, by the indicated method of take and the indicated number of times: (1) Level B Harassment (±10...
50 CFR 218.2 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 218.2... (VACAPES Range Complex) § 218.2 Permissible methods of taking. (a) Under Letters of Authorization issued... following species, by the indicated method of take and the indicated number of times: (1) Level B Harassment...
50 CFR 218.21 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 218.21... Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to §§ 216.106 of this... species, by the indicated method of take and the indicated number of times: (1) Level B Harassment: (i...
50 CFR 218.112 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 218.112....112 Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to §§ 216.106...) and (5) of this section by the indicated method of take and the indicated number of times (estimated...
50 CFR 218.102 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 218.102...) § 218.102 Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to §§ 216... the indicated method of take and the indicated number of times (estimated based on the authorized...
50 CFR 218.31 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 218.31....31 Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to §§ 216.106 of... method of take and the indicated number of times: (1) Level B Harassment: (i) Sperm whale (Physeter...
50 CFR 216.272 - Permissible methods of taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 10 2012-10-01 2012-10-01 false Permissible methods of taking. 216.272... (SOCAL Range Complex) § 216.272 Permissible methods of taking. (a) Under Letters of Authorization issued... species, by the indicated method of take and the indicated number of times: (1) Level B Harassment (±10...
50 CFR 218.122 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 218.122...) § 218.122 Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to §§ 216... indicated method of take and the indicated number of times (estimated based on the authorized amounts of...
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of (a) the quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
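As an illustration of the kind of nonlinear least-squares regression the abstract describes, the sketch below calibrates a two-parameter toy model by Gauss-Newton iteration. The model, synthetic observations, and starting values are invented for this example and are not from the paper:

```python
import math

def model(a, b, x):
    # Saturating toy response curve; stands in for a ground-water model output.
    return a * (1.0 - math.exp(-b * x))

def gauss_newton(xs, ys, a, b, iters=100):
    """Fit (a, b) to observations by nonlinear least squares (Gauss-Newton)."""
    for _ in range(iters):
        # Accumulate the 2x2 normal equations J^T J p = J^T r.
        jtj = [[0.0, 0.0], [0.0, 0.0]]
        jtr = [0.0, 0.0]
        for x, y in zip(xs, ys):
            r = y - model(a, b, x)                # residual
            ja = 1.0 - math.exp(-b * x)           # d(model)/da
            jb = a * x * math.exp(-b * x)         # d(model)/db
            jtj[0][0] += ja * ja; jtj[0][1] += ja * jb
            jtj[1][0] += jb * ja; jtj[1][1] += jb * jb
            jtr[0] += ja * r;     jtr[1] += jb * r
        det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
        a += ( jtj[1][1] * jtr[0] - jtj[0][1] * jtr[1]) / det  # solve for step
        b += (-jtj[1][0] * jtr[0] + jtj[0][0] * jtr[1]) / det
    return a, b

xs = [0.5 * i for i in range(1, 11)]
ys = [model(2.0, 0.5, x) for x in xs]            # noiseless synthetic data
a_fit, b_fit = gauss_newton(xs, ys, a=1.5, b=0.6)
```

With noiseless data the fit recovers the true parameters (a = 2.0, b = 0.5); on real problems the same J^T J machinery also yields the parameter confidence limits the paper highlights.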
NASA Technical Reports Server (NTRS)
Hubeny, I.; Lanz, T.
1995-01-01
A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.
ERIC Educational Resources Information Center
Philadelphia Youth Network, 2006
2006-01-01
The title of this year's annual report has particular meaning for all of the staff at the Philadelphia Youth Network. The phrase derives from Philadelphia Youth Network's (PYN's) new vision statement, developed as part of its recent strategic planning process, which reads: All of our city's young people take their rightful places as full and…
Transfer of contextual cueing in full-icon display remapping.
Shi, Zhuanghua; Zang, Xuelian; Jia, Lina; Geyer, Thomas; Müller, Hermann J
2013-02-25
Invariant spatial context can expedite visual search, an effect that is known as contextual cueing (e.g., Chun & Jiang, 1998). However, disrupting learned display configurations abolishes the effect. In current touch-based mobile devices, such as the iPad, icons are shuffled and remapped when the display mode is changed. However, such remapping also disrupts the spatial relationships between icons. This may hamper usability. In the present study, we examined the transfer of contextual cueing in four different methods of display remapping: position-order invariant, global rotation, local invariant, and central invariant. We used full-icon landscape mode for training and both landscape and portrait modes for testing, to check whether the cueing transfers to portrait mode. The results showed transfer of contextual cueing but only with the local invariant and the central invariant remapping methods. We take the results to mean that the predictability of target locations is a crucial factor for the transfer of contextual cueing and thus icon remapping design for mobile devices.
Non-Thermal Spectra from Pulsar Magnetospheres in the Full Electromagnetic Cascade Scenario
NASA Astrophysics Data System (ADS)
Peng, Qi-Yong; Zhang, Li
2008-08-01
We simulated non-thermal emission from a pulsar magnetosphere within the framework of a full polar-cap cascade scenario by taking the acceleration gap into account, using the Monte Carlo method. For a given electric field parallel to open field lines located at some height above the surface of a neutron star, primary electrons were accelerated by parallel electric fields and lost their energies by curvature radiation; these photons were converted to electron-positron pairs, which emitted photons through subsequent quantum synchrotron radiation and inverse Compton scattering, leading to a cascade. In our calculations, the acceleration gap was assumed to be high above the stellar surface (about several stellar radii); the primary and secondary particles and photons emitted during the journey of those particles in the magnetosphere were traced using the Monte Carlo method. In such a scenario, we calculated the non-thermal photon spectra for different pulsar parameters and compared the model results for two normal pulsars and one millisecond pulsar with the observed data.
50 CFR 216.172 - Permissible methods of taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 10 2012-10-01 2012-10-01 false Permissible methods of taking. 216.172... Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to §§ 216.106 and 216.177... indicated method of take and the indicated number of times: (1) Level B Harassment (±10 percent of the...
50 CFR 218.232 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 218.232... Low Frequency Active (SURTASS LFA) Sonar § 218.232 Permissible methods of taking. (a) Under Letters of... species listed in § 218.230(b) by the method of take indicated in paragraphs (c)(2) through (5) of this...
50 CFR 216.172 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 216.172... Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to §§ 216.106 and 216.177... indicated method of take and the indicated number of times: (1) Level B Harassment (±10 percent of the...
An Improved Method of AGM for High Precision Geolocation of SAR Images
NASA Astrophysics Data System (ADS)
Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.
2018-05-01
In order to take full advantage of SAR images, it is necessary to obtain high-precision geolocation for the image. During geometric correction of images, precise image geolocation is important to ensure the accuracy of the correction and to extract effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan, which aims at solving the RD model. Tests are conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with the position determined from a high-precision orthophoto, the results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed, and recommendations for improving image location accuracy in future spaceborne SARs are given.
Unsupervised Feature Learning With Winner-Takes-All Based STDP
Ferré, Paul; Mamalet, Franck; Thorpe, Simon J.
2018-01-01
We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP) biological learning rule. We show equivalence between rank order coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods. PMID:29674961
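The WTA selection plus binary STDP update can be sketched minimally as follows; the binary patches, weight bounds, and single learned feature are illustrative assumptions, not the authors' GPU implementation:

```python
def wta_binary_stdp(patches, weights, lr=1, w_max=8):
    """One training step: winner-takes-all picks the best-matching patch,
    then a binary STDP-like rule updates the single feature vector."""
    # Winner selection: the patch with the highest activation (dot product).
    acts = [sum(w * x for w, x in zip(weights, p)) for p in patches]
    winner = patches[acts.index(max(acts))]
    # Binary STDP: potentiate weights where the input "spiked" (x = 1),
    # depress the rest; keep weights in a bounded range for stability.
    return [min(w_max, w + lr) if x else max(-w_max, w - lr)
            for w, x in zip(weights, winner)]

# Four binary patches; the feature should align with the dominant pattern.
patches = [[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1], [1, 1, 0, 0]]
w = [0, 0, 0, 0]
for _ in range(10):
    w = wta_binary_stdp(patches, w)
# w converges to [8, 8, -8, -8], i.e. a detector for the [1, 1, 0, 0] pattern.
```

Repeating the winner-driven update pushes the feature toward the most frequent patch, which is the stabilizing role the WTA mechanism plays in the paper.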
Brzezicki, Samuel J.
2017-01-01
An analytical method to find the flow generated by the basic singularities of Stokes flow in a wedge of arbitrary angle is presented. Specifically, we solve a biharmonic equation for the stream function of the flow generated by a point stresslet singularity and satisfying no-slip boundary conditions on the two walls of the wedge. The method, which is readily adapted to any other singularity type, takes full account of any transcendental singularities arising at the corner of the wedge. The approach is also applicable to problems of plane strain/stress of an elastic solid where the biharmonic equation also governs the Airy stress function. PMID:28690412
Crowdy, Darren G; Brzezicki, Samuel J
2017-06-01
An analytical method to find the flow generated by the basic singularities of Stokes flow in a wedge of arbitrary angle is presented. Specifically, we solve a biharmonic equation for the stream function of the flow generated by a point stresslet singularity and satisfying no-slip boundary conditions on the two walls of the wedge. The method, which is readily adapted to any other singularity type, takes full account of any transcendental singularities arising at the corner of the wedge. The approach is also applicable to problems of plane strain/stress of an elastic solid where the biharmonic equation also governs the Airy stress function.
NASA Astrophysics Data System (ADS)
Neradilová, Hana; Fedorko, Gabriel
2016-12-01
Automated logistic systems are becoming more widely used within enterprise logistics processes. Their main advantage is that they allow increasing the efficiency and reliability of logistics processes. In terms of evaluating their effectiveness, it is necessary to take into account the economic aspect of the entire process. However, many users ignore and underestimate this area, which is not correct. One of the reasons the economic aspect is overlooked is that obtaining information for such an analysis is not easy. The aim of this paper is to present the possibilities of computer simulation methods for obtaining data for full-scale economic analysis.
Near-edge X-ray refraction fine structure microscopy
Farmand, Maryam; Celestre, Richard; Denes, Peter; ...
2017-02-06
We demonstrate a method for obtaining increased spatial resolution and specificity in nanoscale chemical composition maps through the use of full refractive reference spectra in soft x-ray spectro-microscopy. Using soft x-ray ptychography, we measure both the absorption and refraction of x-rays through pristine reference materials as a function of photon energy and use these reference spectra as the basis for decomposing spatially resolved spectra from a heterogeneous sample, thereby quantifying the composition at high resolution. While conventional instruments are limited to absorption contrast, our novel refraction-based method takes advantage of the strongly energy-dependent scattering cross-section and can see nearly five-fold improved spatial resolution on resonance.
Taking Advantage of Student Engagement Results in Student Affairs
ERIC Educational Resources Information Center
Kinzie, Jillian; Hurtado, Sarah S.
2017-01-01
This chapter urges student affairs professionals committed to enhancing student success through data-informed decision making to take full advantage of opportunities to apply and use student engagement results.
Gamma-ray Full Spectrum Analysis for Environmental Radioactivity by HPGe Detector
NASA Astrophysics Data System (ADS)
Jeong, Meeyoung; Lee, Kyeong Beom; Kim, Kyeong Ja; Lee, Min-Kie; Han, Ju-Bong
2014-12-01
Odyssey, one of NASA's Mars exploration missions, and SELENE (Kaguya), a Japanese lunar orbiting spacecraft, carry a Gamma-Ray Spectrometer (GRS) payload for analyzing radioactive chemical elements of the atmosphere and the surface. Nowadays, gamma-ray spectroscopy with a High-Purity Germanium (HPGe) detector is widely used for activity measurements of natural radionuclides contained in the soil of the Earth. The energy spectra obtained by HPGe detectors have generally been analyzed by means of the Window Analysis (WA) method, in which activity concentrations are determined from the net counts in an energy window around individual peaks. Meanwhile, an alternative method, the so-called Full Spectrum Analysis (FSA) method, uses counts not only from the full-absorption peaks but also from the contributions of Compton scattering of the gamma-rays. Consequently, while the WA method takes a substantial time to obtain a statistically significant result, the FSA method requires a much shorter time to reach the same level of statistical significance. This study shows the validation results of the FSA method. We compared the activity concentrations of 40K, 232Th and 238U in soil measured by the WA method and the FSA method, respectively. The gamma-ray spectra of reference materials (RGU, RGTh, and KCl) and soil samples were measured with a 120% HPGe detector equipped with a cosmic-muon veto detector. From the comparison of activity concentrations between the FSA and the WA, we conclude that the FSA method is validated against the WA method. This study implies that the FSA method can be used in a harsh measurement environment, such as gamma-ray measurement on the Moon, in which the required level of statistical significance must usually be reached in a much shorter data acquisition time than with the WA method.
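The WA/FSA contrast can be sketched as a linear decomposition: FSA fits the entire measured spectrum as a weighted sum of full reference spectra (photopeaks plus Compton continuum), so every channel contributes statistics. The two-nuclide toy spectra below are invented for illustration:

```python
def fsa_concentrations(measured, refs):
    """Full Spectrum Analysis sketch: least-squares fit of the measured
    spectrum as a linear combination of two full reference spectra,
    solved via the 2x2 normal equations; returns the two coefficients."""
    r0, r1 = refs
    a00 = sum(x * x for x in r0)
    a01 = sum(x * y for x, y in zip(r0, r1))
    a11 = sum(y * y for y in r1)
    b0 = sum(x * m for x, m in zip(r0, measured))
    b1 = sum(y * m for y, m in zip(r1, measured))
    det = a00 * a11 - a01 * a01
    return ((a11 * b0 - a01 * b1) / det, (a00 * b1 - a01 * b0) / det)

# Toy 6-channel reference spectra: a photopeak plus a Compton shelf each.
ref_k = [1, 1, 1, 9, 0, 0]    # "40K-like" reference spectrum
ref_u = [2, 2, 6, 0, 0, 1]    # "238U-like" reference spectrum
measured = [0.5 * k + 2.0 * u for k, u in zip(ref_k, ref_u)]
c_k, c_u = fsa_concentrations(measured, (ref_k, ref_u))
# Recovers the mixing coefficients: c_k = 0.5, c_u = 2.0.
```

A WA analysis would use only the peak channels (here, channels 3 and 2 respectively); the full-spectrum fit uses all six, which is why FSA reaches a given statistical significance in less acquisition time.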
Aircraft Engine Noise Scattering - A Discontinuous Spectral Element Approach
NASA Technical Reports Server (NTRS)
Stanescu, D.; Hussaini, M. Y.; Farassat, F.
2002-01-01
The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. Our approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.
Aircraft Engine Noise Scattering By Fuselage and Wings: A Computational Approach
NASA Technical Reports Server (NTRS)
Stanescu, D.; Hussaini, M. Y.; Farassat, F.
2003-01-01
The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.
Aircraft Engine Noise Scattering by Fuselage and Wings: A Computational Approach
NASA Technical Reports Server (NTRS)
Stanescu, D.; Hussaini, M. Y.; Farassat, F.
2003-01-01
The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.
Use of Information--LMC Connection
ERIC Educational Resources Information Center
Darrow, Rob
2005-01-01
Note taking plays an important part in correctly extracting information from reference sources. The "Cornell Note Taking Method," initially developed as a method of taking notes during a lecture, is well suited for taking notes from print sources and is one of the best "Use of Information" methods.
Hu, Zhen-Hua; Huang, Teng; Wang, Ying-Ping; Ding, Lei; Zheng, Hai-Yang; Fang, Li
2011-06-01
Taking the solar source as radiation, the near-infrared high-resolution absorption spectrum is widely used in remote sensing of atmospheric parameters. The present paper takes retrieval of the concentration of CO2 as an example and studies the effect of solar spectral resolution. To retrieve concentrations of CO2 from high-resolution absorption spectra, a method is used that takes the solar spectrum at the top of the atmosphere, calculated with the program provided by AER, as radiation, combined with HRATS (high resolution atmospheric transmission simulation) to simulate the retrieval of the CO2 concentration. Numerical simulation shows that the accuracy of the solar spectrum is important to retrieval, especially in hyper-resolution spectral retrieval, and that the error of the retrieved concentration has a poor linear relation with the resolution of observation, but there is a tendency that a decrease in the observational resolution requires a lower resolution of the solar spectrum. In order to retrieve the concentration of CO2 in the atmosphere, one should take full advantage of the high-resolution solar spectrum at the top of the atmosphere.
NASA Astrophysics Data System (ADS)
Boccara, A. Claude; Fedala, Yasmina; Voronkoff, Justine; Paffoni, Nina; Boccara, Martine
2017-03-01
Due to the huge abundance and the major role that viruses and membrane vesicles play in sea and river ecosystems, it is necessary to develop simple, sensitive, compact and reliable methods for their detection and characterization. Our approach is based on the measurement of the weak light level scattered by the biotic nanoparticles. We describe a new full-field, incoherently illuminated, shot-noise-limited, common-path interferometric detection method coupled with the analysis of Brownian motion to detect, quantify, and differentiate biotic nanoparticles. The latest developments take advantage of a new fast (700 Hz) camera with 2 Me- full-well capacity that improves the signal-to-noise ratio and increases the precision of the Brownian motion characterization. We validated the method with calibrated nanoparticles and homogeneous DNA or RNA viruses. The smallest virus size that we characterized with a suitable signal-to-noise ratio was around 30 nm in diameter, with a target towards the numerous 20 nm diameter viruses. We show for the first time anisotropic trajectories for myoviruses, meaning that there is a memory of the initial direction of their Brownian motions. Significant improvements have been made in the handling of the sample as well as in the statistical analysis for differentiating the various families of vesicles and viruses. We further applied the method to vesicle detection and to the analysis of coastal and oligotrophic samples from the Tara Oceans circumnavigation as well as of various rivers.
NASA Astrophysics Data System (ADS)
Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin
2017-10-01
The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of the practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. Besides, we adopt this model to carry out simulations of two widely used sources: weak coherent source (WCS) and heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. And when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using HSPS suffered less than that of decoy-state QKD using WCS.
... HIV in the blood. Although fosamprenavir does not cure HIV, it may decrease your chance of developing acquired ... take another full dose of fosamprenavir.Fosamprenavir controls HIV infection but does not cure it. Continue to take fosamprenavir even if you ...
An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines
NASA Technical Reports Server (NTRS)
Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng
2014-01-01
We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν2 band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations that were carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, as well as full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system, which is, to our knowledge, the most realistic, accurate and up-to-date one. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room-temperature experimental data. The quantum dynamical close-coupling calculations are too time consuming to provide a complete set of values and therefore have been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the values of the half-width for both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure-broadening coefficients of isotropic Raman lines, is also used for IR lines. By using this improved model, which takes into account effects from line coupling, the calculated semi-classical widths are significantly reduced and closer to the measured ones.
Transport equations for partially ionized reactive plasma in magnetic field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhdanov, V. M.; Stepanenko, A. A.
2016-06-08
Transport equations for partially ionized reactive plasma in magnetic field taking into account the internal degrees of freedom and electronic excitation of plasma particles are derived. As a starting point of analysis the kinetic equation with a binary collision operator written in the Wang-Chang and Uhlenbeck form and with a reactive collision integral allowing for arbitrary chemical reactions is used. The linearized variant of Grad’s moment method is applied to deduce the systems of moment equations for plasma and also full and reduced transport equations for plasma species nonequilibrium parameters.
A Massively Parallel Code for Polarization Calculations
NASA Astrophysics Data System (ADS)
Akiyama, Shizuka; Höflich, Peter
2001-03-01
We present an implementation of our Monte-Carlo radiation transport method for rapidly expanding, NLTE atmospheres for massively parallel computers which utilizes both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version which is based on the shared memory model. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed an improved scalability by about 40%.
Full moment tensor and source location inversion based on full waveform adjoint method
NASA Astrophysics Data System (ADS)
Morency, C.
2012-12-01
The development of high-performance computing and numerical techniques has enabled global and regional tomography to reach high levels of precision, and seismic adjoint tomography has become a state-of-the-art tomographic technique. The method was successfully used for crustal tomography of Southern California (Tape et al., 2009) and Europe (Zhu et al., 2012). Here, I will focus on the determination of source parameters (full moment tensor and location) based on the same approach (Kim et al., 2011). The method relies on full wave simulations and takes advantage of the misfit between observed and synthetic seismograms. An adjoint wavefield is calculated by back-propagating the difference between observed and synthetic seismograms from the receivers to the source. The interaction between this adjoint wavefield and the regular forward wavefield helps define Frechet derivatives of the source parameters, that is, the sensitivity of the misfit with respect to the source parameters. Source parameters are then recovered by minimizing the misfit based on a conjugate gradient algorithm using the Frechet derivatives. First, I will demonstrate the method on synthetic cases before tackling events recorded at the Geysers. The velocity model used at the Geysers is based on the USGS 3D velocity model. Waveform datasets come from the Northern California Earthquake Data Center. Finally, I will discuss strategies to ultimately use this method to characterize smaller events for microseismic and induced-seismicity monitoring. References: - Tape, C., Q. Liu, A. Maggi, and J. Tromp, 2009, Adjoint tomography of the Southern California crust: Science, 325, 988-992. - Zhu, H., Bozdag, E., Peter, D., and Tromp, J., 2012, Structure of the European upper mantle revealed by adjoint tomography: Nature Geoscience, 5, 493-498. - Kim, Y., Q. Liu, and J. Tromp, 2011, Adjoint centroid-moment tensor inversions: Geophys. J. Int., 186, 264-278. Prepared by LLNL under Contract DE-AC52-07NA27344.
On HMI's Mod-L Sequence: Test and Evaluation
NASA Astrophysics Data System (ADS)
Liu, Yang; Baldner, Charles; Bogart, R. S.; Bush, R.; Couvidat, S.; Duvall, Thomas L.; Hoeksema, Jon Todd; Norton, Aimee Ann; Scherrer, Philip H.; Schou, Jesper
2016-05-01
The HMI Mod-L sequence can produce full Stokes parameters at a cadence of 90 seconds by combining filtergrams from both cameras, the front camera and the side camera. Within the 90 seconds, the front camera takes two sets of Left and Right Circular Polarization (LCP and RCP) at 6 wavelengths, and the side camera takes one set of Linear Polarizations (I+/-Q and I+/-U) at 6 wavelengths. By combining the two cameras, one can obtain the full Stokes parameters [I,Q,U,V] at 6 wavelengths every 90 seconds. In the nominal Mod-C sequence that HMI currently uses, the front camera takes LCP and RCP at a cadence of 45 seconds, while the side camera observes the full Stokes parameters at a cadence of 135 seconds. Mod-L should be better than Mod-C for providing vector magnetic field data because (1) Mod-L increases the cadence of the full Stokes observation, which leads to higher temporal resolution of the vector magnetic field measurement; and (2) it decreases noise in the vector magnetic field data because it uses more filtergrams to produce [I,Q,U,V]. There are two potential issues in Mod-L that need to be addressed: (1) scaling the intensity of the two cameras' filtergrams; and (2) whether the current polarization calibration model, which is built for each camera separately, works for the combined data from both cameras. This presentation will address these questions and discuss them further.
Constructing the effect of alternative intervention strategies on historic epidemics.
Cook, A R; Gibson, G J; Gottwald, T R; Gilligan, C A
2008-10-06
Data from historical epidemics provide a vital and sometimes under-used resource from which to devise strategies for future control of disease. Previous methods for retrospective analysis of epidemics, in which alternative interventions are compared, do not make full use of the information; by using only partial information on the historical trajectory, augmentation of control may lead to predictions of a paradoxical increase in disease. Here we introduce a novel statistical approach that takes full account of the available information in constructing the effect of alternative intervention strategies in historic epidemics. The key to the method lies in identifying a suitable mapping between the historic and notional outbreaks, under alternative control strategies. We do this by using the Sellke construction as a latent process linking epidemics. We illustrate the application of the method with two examples. First, using temporal data for the common human cold, we show the improvement under the new method in the precision of predictions for different control strategies. Second, we show the generality of the method for retrospective analysis of epidemics by applying it to a spatially extended arboreal epidemic in which we demonstrate the relative effectiveness of host culling strategies that differ in frequency and spatial extent. Some of the inferential and philosophical issues that arise are discussed along with the scope of potential application of the new method.
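A minimal sketch of the Sellke construction the authors use as a coupling device: shared exposure thresholds and infectious periods define one latent process, and alternative control strategies (here, two transmission rates) are evaluated against it, giving a coherent pathwise comparison. The SIR final-size algorithm and all parameters below are illustrative assumptions, not the paper's statistical inference machinery:

```python
import random

def sellke_final_size(beta, thresholds, periods, init_cases=5):
    """Final size of an SIR epidemic via the Sellke construction: an
    individual is infected once the cumulative infection pressure
    (beta/N times infectious time served) exceeds its latent
    exposure threshold Q_i ~ Exp(1)."""
    n = len(thresholds)
    pressure = beta * init_cases / n           # pressure from initial cases
    infected = 0
    for i in sorted(range(n), key=lambda j: thresholds[j]):
        if thresholds[i] < pressure:           # threshold crossed: infected
            infected += 1
            pressure += beta * periods[i] / n  # this case adds pressure
    return infected

random.seed(1)
n = 500
q = [random.expovariate(1.0) for _ in range(n)]  # shared latent thresholds
t = [random.expovariate(1.0) for _ in range(n)]  # shared infectious periods
# Coupled comparison: the SAME latent process under two control levels.
size_baseline = sellke_final_size(beta=2.0, thresholds=q, periods=t)
size_control = sellke_final_size(beta=1.0, thresholds=q, periods=t)
```

Because both scenarios reuse the same thresholds and periods, size_control can never exceed size_baseline pathwise, which is exactly the monotonicity that avoids the paradoxical predictions the abstract mentions.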
Design of a rear anamorphic attachment for digital cinematography
NASA Astrophysics Data System (ADS)
Cifuentes, A.; Valles, A.
2008-09-01
Digital taking systems for HDTV, and now for the film industry, present a particularly challenging design problem for rear adapters in general; the thick 3-channel prism block in the camera poses an important design challenge. In this paper the design of a 1.33x rear anamorphic attachment is presented. The new design departs significantly from the traditional Bravais condition due to the thick dichroic prism block. Design strategies for non-rotationally symmetric systems and fields of view are discussed. Anamorphic images intrinsically have lower contrast and less resolution than their rotationally symmetric counterparts, so proper image evaluation must be considered. The interpretation of the traditional image quality methods applied to anamorphic images is also discussed in relation to the design process. The final design has a total track of less than 50 mm, maintaining the telecentricity of the digital prime lens and taking full advantage of the f/1.4 prism block.
Affecting the value chain through supplier kaizen.
Forman, C R; Vargas, D H
1999-02-01
In the aerospace industry, typically 60 percent of a product's cost and 70 percent of the lead time are due to purchased material. To affect price and customer responsiveness, improvement initiatives must be extended into the supply chain. Many companies have developed supply base management systems that include long-term agreements with suppliers, partnering with suppliers in risk taking and product design, information sharing, and quality and delivery rating systems. The premise is that suppliers are an extension of the factory. But to take full advantage of customer-supplier relationships, the suppliers must be "developed" in the same manner as a manufacturing unit. Supplier kaizen is a method of bringing suppliers to the same level of operations as the parent company, through training and improvement projects, to ensure superior performance and nurture the trust that is required for strong partnerships. This article describes Sikorsky Aircraft's use of kaizen to improve its supply base management.
NASA Astrophysics Data System (ADS)
Mir, Raja N.; Frensley, William R.
2013-10-01
InAs-Sb/GaSb type-II strain compensated superlattices (SLS) are currently being used in mid-wave and long-wave infrared photodetectors. The electronic bandstructure of InSb and GaSb shows very strong anisotropy and non-parabolicity close to the Γ-point for the conduction band (CB) minimum and the valence band (VB) maximum. Particularly around the energy range of 45-80 meV from the band edge we observe strong non-parabolicity in the CB and the light hole VB. The band-edge dispersion determines the electrical properties of a material. When the bulk materials are combined to form a superlattice we need a model of bandstructure which takes into account the full bandstructure details of the constituents and also the strong interaction between the conduction band of InAs and the valence bands of GaSb. There can also be contact potentials near the interface between two dissimilar superlattices which will not be captured unless a full bandstructure calculation is done. In this study, we have done a calculation using a second nearest neighbor tight binding model in order to accurately reproduce the effective masses. The calculation of the mini-band structure is done by finding the wavefunctions within one SL period subject to the Bloch boundary condition ψ(L) = ψ(0)e^{ikL}. We demonstrate in this paper how a calculation of carrier concentration as a function of the position of the Fermi level (EF) within the bandgap (Eg) should be done in order to take into account the full bandstructure of broken-bandgap material systems. This calculation is key for determining electron transport, particularly when there is an interface between two dissimilar superlattices.
Optimization Research on Ampacity of Underground High Voltage Cable Based on Interior Point Method
NASA Astrophysics Data System (ADS)
Huang, Feng; Li, Jing
2017-12-01
The conservative operation method, which takes a unified current-carrying capacity as the maximum load current, cannot make full use of the overall power transmission capacity of the cable and is not the optimal operating state for the cable cluster. In order to improve the transmission capacity of underground cables in a cluster, this paper takes the maximum overall load current as the objective function, with the constraint that the temperature of every cable remain below the maximum permissible temperature. The interior point method, which is very effective for nonlinear problems, is used to solve this extremal problem and determine the optimal operating current of each loop. The results show that the optimal solution obtained with the proposed method increases the total load current by about 5%, which greatly improves the economic performance of the cable cluster.
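A minimal sketch of this kind of constrained maximization, using SciPy's trust-region interior-point style solver (`trust-constr`) rather than the authors' implementation: total current of two cables is maximized subject to a hypothetical quadratic heating model with mutual-heating coefficients. The coefficient matrix, ambient temperature, and limit are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Hypothetical mutual-heating model: T_i = T_amb + sum_j R[i, j] * I_j**2
R = np.array([[0.02, 0.008],
              [0.008, 0.02]])
T_amb, T_max = 25.0, 90.0

def temps(I):
    return T_amb + R @ (I ** 2)

# Maximize total current <=> minimize its negative, keeping every cable below T_max.
con = NonlinearConstraint(temps, -np.inf, T_max)
res = minimize(lambda I: -I.sum(), x0=np.array([10.0, 10.0]),
               constraints=[con], bounds=[(0.0, None)] * 2,
               method="trust-constr")
```

For this symmetric toy case the optimum binds both temperature constraints at once (each current about 48.2 A, total about 96.4 A), illustrating how the cluster-level optimum exceeds what a single uniform ampacity limit would allow.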
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-29
... species or stock(s) for subsistence uses (where relevant). Further, the permissible methods of taking and... [flattened table omitted: sound thresholds during pile installation, by pile type, installation method, threshold distance (m), and area (sq. km), including steel sheet piles] ...methods of taking pursuant to such activity, and other means of effecting the least practicable impact on...
... by slowing activity in the brain to allow sleep. ... full night after you take the medication. Your sleep problems should improve within 7 to 10 days ... start taking estazolam. Call your doctor if your sleep problems do not improve during this time, if ...
Defect induced guided waves mode conversion
NASA Astrophysics Data System (ADS)
Wandowski, Tomasz; Kudela, Pawel; Malinowski, Pawel; Ostachowicz, Wieslaw
2016-04-01
This paper deals with analysis of the guided wave mode conversion phenomenon in fiber reinforced composite materials. Mode conversion may take place when propagating elastic guided waves interact with discontinuities in the composite waveguide. Examples of such discontinuities are a sudden thickness change or a delamination between layers in the composite material. In this paper, the analysis of mode conversion is based on full wave-field signals. In the full wave-field approach, signals representing the propagation of elastic waves are gathered from a dense mesh of points that spans the investigated area of the composite part. This allows the guided wave propagation to be animated. The reported analysis is based on signals resulting from numerical calculations and experimental measurements. In both cases a defect in the form of a delamination is considered. In the numerical research, the Spectral Element Method (SEM) is utilized, with a mesh composed of 3D elements. The numerical model also includes a piezoelectric transducer. Full wave-field experimental measurements are conducted using a piezoelectric transducer for guided wave excitation and a Scanning Laser Doppler Vibrometer (SLDV) for sensing.
A two-step FEM-SEM approach for wave propagation analysis in cable structures
NASA Astrophysics Data System (ADS)
Zhang, Songhan; Shen, Ruili; Wang, Tao; De Roeck, Guido; Lombaert, Geert
2018-02-01
Vibration-based methods are among the most widely studied in structural health monitoring (SHM). It is well known, however, that the low-order modes, characterizing the global dynamic behaviour of structures, are relatively insensitive to local damage. Such local damage may be easier to detect by methods based on wave propagation which involve local high frequency behaviour. The present work considers the numerical analysis of wave propagation in cables. A two-step approach is proposed which allows taking into account the cable sag and the distribution of the axial forces in the wave propagation analysis. In the first step, the static deformation and internal forces are obtained by the finite element method (FEM), taking into account geometric nonlinear effects. In the second step, the results from the static analysis are used to define the initial state of the dynamic analysis which is performed by means of the spectral element method (SEM). The use of the SEM in the second step of the analysis allows for a significant reduction in computational costs as compared to a FE analysis. This methodology is first verified by means of a full FE analysis for a single stretched cable. Next, simulations are made to study the effects of damage in a single stretched cable and a cable-supported truss. The results of the simulations show how damage significantly affects the high frequency response, confirming the potential of wave propagation based methods for SHM.
Simultaneous Measurement of Thermal Conductivity and Specific Heat in a Single TDTR Experiment
NASA Astrophysics Data System (ADS)
Sun, Fangyuan; Wang, Xinwei; Yang, Ming; Chen, Zhe; Zhang, Hang; Tang, Dawei
2018-01-01
The time-domain thermoreflectance (TDTR) technique is a powerful thermal property measurement method, especially for nano-structures and material interfaces. Thermal properties can be obtained by fitting TDTR experimental data with a proper thermal transport model. In a single TDTR experiment, thermal properties with different sensitivity trends can be extracted simultaneously. However, thermal conductivity and volumetric heat capacity usually have similar sensitivity trends for most materials, which makes it difficult to measure them simultaneously. In this work, we present a two-step data fitting method to measure the thermal conductivity and volumetric heat capacity simultaneously from a set of TDTR experimental data at a single modulation frequency. This method takes full advantage of the information carried by both the amplitude and phase signals and is a more convenient and effective solution than the frequency-domain thermoreflectance method. The relative error is lower than 5% for most cases. A silicon wafer sample was measured by the TDTR method to verify the two-step fitting method.
Multi-criteria evaluation methods in the production scheduling
NASA Astrophysics Data System (ADS)
Kalinowski, K.; Krenczyk, D.; Paprocka, I.; Kempa, W.; Grabowik, C.
2016-08-01
The paper presents a discussion on the practical application of different methods of multi-criteria evaluation in the process of scheduling in manufacturing systems. Among the methods, two main groups are specified: methods based on a distance function (using a metacriterion) and methods that create a Pareto set of possible solutions. The basic criteria used for scheduling are also described, and the overall procedure of the evaluation process in production scheduling is presented. It takes into account the actions in the whole scheduling process and the participation of the human decision maker (HDM). The specified HDM decisions are related to creating and editing the set of evaluation criteria, selecting the multi-criteria evaluation method, interacting in the searching process, using informal criteria, and making final changes in the schedule for implementation. Depending on need, process scheduling may be completely or partially automated. Full automation is possible in the case of a metacriterion-based objective function; if a Pareto set is created, the final decision has to be made by the HDM.
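The Pareto-set approach in the second group can be sketched directly: a schedule is non-dominated if no other schedule is at least as good on every criterion and strictly better on one. The schedule tuples below are hypothetical (makespan, total tardiness) pairs, both to be minimized.

```python
def pareto_front(solutions):
    """Return the non-dominated solutions, minimizing every criterion."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]

# Hypothetical candidate schedules as (makespan, total tardiness):
schedules = [(2, 5), (3, 3), (5, 2), (4, 4), (6, 6)]
front = pareto_front(schedules)
```

Here (4, 4) and (6, 6) are dominated by (3, 3), leaving three trade-off schedules; as the abstract notes, the final choice among them is left to the HDM.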
Phase modulation due to crystal diffraction by ptychographic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Civita, M.; Diaz, A.; Bean, R. J.
Solving the phase problem in x-ray crystallography has occupied a considerable scientific effort in the 20th century and led to great advances in structural science. Here we use x-ray ptychography to demonstrate an interference method which measures the phase of the beam transmitted through a crystal, relative to the incoming beam, when diffraction takes place. The observed phase change of the direct beam through a small gold crystal is found to agree with both a quasikinematical model and full dynamical theories of diffraction. Our discovery of a diffraction contrast mechanism will enhance the interpretation of data obtained from crystalline samples using the ptychography method, which provides some of the most accurate x-ray phase-contrast images.
NASA Astrophysics Data System (ADS)
Rountree, S. Derek
2013-04-01
The Low-Energy Neutrino Spectrometer (LENS) prototyping program is broken into two phases. The first of these is μLENS, a small prototype to study the light transmission in the as-built LENS scintillation lattice, a novel, highly segmented detector method for a large liquid scintillation detector. The μLENS prototype is currently deployed and taking data at the Kimballton Underground Research Facility (KURF) near Virginia Tech. I will discuss the scintillation lattice construction methods and the schemes of the μLENS program for running with a minimal number of channels instrumented to date (~41, compared to 216 for full coverage). The second phase of prototyping is the miniLENS detector, for which construction is under way. I will discuss the overall design, from the miniLENS scintillation lattice to the shielding.
Bridges, John Fp
2006-02-01
Evidence based medicine is not only important for clinical practice; national governments have also embraced it through health technology assessment (HTA). HTA combines data from randomized controlled trials (RCT) and observational studies with an economic component (among other issues). HTA, however, is not taking full advantage of economics. This paper presents five areas in which economics may improve not only HTA, but the RCT methods that underpin it. HTA needs to live up to its original agenda of being an interdisciplinary field and draw methods not just from biostatistics, but from a range of disciplines, including economics. By focusing only on cost effectiveness analysis (CEA), however, we come nowhere close to fulfilling this potential.
Aircraft Engine Noise Scattering by Fuselage and Wings: A Computational Approach
NASA Technical Reports Server (NTRS)
Farassat, F.; Stanescu, D.; Hussaini, M. Y.
2003-01-01
The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far field. The effects of non-uniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.
Estimating the effects of extreme weather on transportation infrastructure.
DOT National Transportation Integrated Search
2016-12-01
Climate change, already taking place, is expected to become more pronounced in the future. Current damage assessment models for extreme weather events, such as FEMA's Hazus, do not take the full impact to transportation systems into consideration. ...
Perturbative Gaussianizing transforms for cosmological fields
NASA Astrophysics Data System (ADS)
Hall, Alex; Mead, Alexander
2018-01-01
Constraints on cosmological parameters from large-scale structure have traditionally been obtained from two-point statistics. However, non-linear structure formation renders these statistics insufficient in capturing the full information content available, necessitating the measurement of higher order moments to recover information which would otherwise be lost. We construct quantities based on non-linear and non-local transformations of weakly non-Gaussian fields that Gaussianize the full multivariate distribution at a given order in perturbation theory. Our approach does not require a model of the fields themselves and takes as input only the first few polyspectra, which could be modelled or measured from simulations or data, making our method particularly suited to observables lacking a robust perturbative description such as the weak-lensing shear. We apply our method to simulated density fields, finding a significantly reduced bispectrum and an enhanced correlation with the initial field. We demonstrate that our method reconstructs a large proportion of the linear baryon acoustic oscillations, improving the information content over the raw field by 35 per cent. We apply the transform to toy 21 cm intensity maps, showing that our method still performs well in the presence of complications such as redshift-space distortions, beam smoothing, pixel noise and foreground subtraction. We discuss how this method might provide a route to constructing a perturbative model of the fully non-Gaussian multivariate likelihood function.
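A toy illustration of the idea of Gaussianizing a field, under stated assumptions: this is a pointwise (local) transform applied to a lognormal "overdensity field", whereas the construction in the abstract is perturbative, non-local, and fixed by the measured polyspectra rather than by an exact inverse.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
# Strongly skewed mock field: lognormal overdensity delta = e^g - 1, g ~ N(0, 1).
delta = np.exp(rng.standard_normal(100_000)) - 1.0
# Local Gaussianizing transform; for this toy model it is the exact inverse.
gaussianized = np.log1p(delta)
skew_before, skew_after = skew(delta), skew(gaussianized)
```

The transform removes essentially all of the skewness (the leading non-Gaussian moment), which is the sense in which information locked in higher-order statistics is moved back into the two-point function.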
Phylogenomics of plant genomes: a methodology for genome-wide searches for orthologs in plants
Conte, Matthieu G; Gaillard, Sylvain; Droc, Gaetan; Perin, Christophe
2008-01-01
Background Gene ortholog identification is now a major objective for mining the increasing amount of sequence data generated by complete or partial genome sequencing projects. Comparative and functional genomics urgently need a method for ortholog detection to reduce gene function inference and to aid in the identification of conserved or divergent genetic pathways between several species. As gene functions change during evolution, reconstructing the evolutionary history of genes should be a more accurate way to differentiate orthologs from paralogs. Phylogenomics takes into account phylogenetic information from high-throughput genome annotation and is the most straightforward way to infer orthologs. However, procedures for automatic detection of orthologs are still scarce and suffer from several limitations. Results We developed a procedure for ortholog prediction between Oryza sativa and Arabidopsis thaliana. Firstly, we established an efficient method to cluster A. thaliana and O. sativa full proteomes into gene families. Then, we developed an optimized phylogenomics pipeline for ortholog inference. We validated the full procedure using test sets of orthologs and paralogs to demonstrate that our method outperforms pairwise methods for ortholog predictions. Conclusion Our procedure achieved a high level of accuracy in predicting ortholog and paralog relationships. Phylogenomic predictions for all validated gene families in both species were easily achieved and we can conclude that our methodology outperforms similarly based methods. PMID:18426584
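The pairwise methods the pipeline is benchmarked against can be sketched as reciprocal best hits (RBH): a gene pair is called orthologous when each member is the other's best-scoring match across species. The score matrix below is hypothetical (e.g. BLAST bit scores between rice `Os_*` and Arabidopsis `At_*` proteins); the paper's point is that tree-based inference outperforms this baseline.

```python
def reciprocal_best_hits(scores):
    """scores[a][b]: similarity of gene a (species A) to gene b (species B).
    Returns the set of (a, b) pairs that are each other's best hit."""
    best_for_a = {a: max(hits, key=hits.get) for a, hits in scores.items()}
    best_for_b = {}
    for a, hits in scores.items():
        for b, s in hits.items():
            if b not in best_for_b or s > scores[best_for_b[b]].get(b, float("-inf")):
                best_for_b[b] = a
    return {(a, b) for a, b in best_for_a.items() if best_for_b[b] == a}

pairs = reciprocal_best_hits({
    "Os_1": {"At_1": 92.0, "At_2": 35.0},
    "Os_2": {"At_1": 40.0, "At_2": 81.0},
})
```

RBH breaks down after gene duplications (many-to-many ortholog groups), which is exactly where reconstructing the gene tree pays off.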
Code of Federal Regulations, 2010 CFR
2010-07-01
... (apart from a full-time student's summer vacation), except that when a full-day school holiday occurs the... session for a student taking one or more courses during a summer or other vacation.) Whenever a full-time... 29 Labor 3 2010-07-01 2010-07-01 false Terms and conditions of employment under full-time student...
Code of Federal Regulations, 2011 CFR
2011-07-01
... (apart from a full-time student's summer vacation), except that when a full-day school holiday occurs the... session for a student taking one or more courses during a summer or other vacation.) Whenever a full-time... 29 Labor 3 2011-07-01 2011-07-01 false Terms and conditions of employment under full-time student...
On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.
Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long
2017-10-03
For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem which involves only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first one is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third ones are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth one only uses the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm; moreover, it may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among the four; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm and the Levenberg-Marquardt method is more effective than the combination of the VP algorithm and the Gauss-Newton method.
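The separation idea can be sketched on a toy separable model y ~ c1*exp(-a1*t) + c2*exp(-a2*t): the linear coefficients c are eliminated by linear least squares inside the residual, so the outer solver searches only over the nonlinear parameters a. This is a generic numerical variable projection sketch (the Jacobian of the reduced residual is approximated by finite differences), not the analytic Golub-Pereyra or Kaufman Jacobian; the data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def basis(t, alpha):
    return np.exp(-np.outer(t, alpha))           # column j: exp(-alpha_j * t)

def projected_residual(alpha, t, y):
    Phi = basis(t, alpha)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # optimal linear part for this alpha
    return Phi @ c - y                           # residual of the reduced problem

# Synthetic data with true alpha = [0.5, 2.0], c = [1.0, 2.0].
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 200)
y = basis(t, np.array([0.5, 2.0])) @ np.array([1.0, 2.0])
y += 0.001 * rng.standard_normal(t.size)

fit = least_squares(projected_residual, x0=[0.3, 1.0], args=(t, y))
alpha_hat = np.sort(fit.x)
c_hat, *_ = np.linalg.lstsq(basis(t, alpha_hat), y, rcond=None)
```

The outer problem is only 2-dimensional here, versus 4 for the unseparated fit, which is the practical payoff of the reduction.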
Westaby, James D; Lowe, J Krister
2005-09-01
Despite youths' susceptibility to social influence, little research has examined the extent to which social factors impact youths' risk-taking orientation and injury at work. Drawing on social influence and behavioral intention theories, this study hypothesized that perceived supervisory influence, coworker risk taking, and parental risk taking serve as key exogenous variables of risk-taking orientation at work. Risk-taking orientation was further hypothesized to serve as a direct predictor and full mediator of work injury. The effect of parental risk taking was also hypothesized to be mediated through global risk taking, which in turn was posited to predict risk-taking orientation at work. Longitudinal results from 2,542 adolescents working across a wide spectrum of jobs supported hypothesized linkages, although there was some evidence of partially mediated mechanisms. Coworker risk taking was a relatively strong predictor of youths' risk-taking orientation at work.
Classical nucleation theory in the phase-field crystal model
NASA Astrophysics Data System (ADS)
Jreidini, Paul; Kocher, Gabriel; Provatas, Nikolas
2018-04-01
A full understanding of polycrystalline materials requires studying the process of nucleation, a thermally activated phase transition that typically occurs at atomistic scales. The numerical modeling of this process is problematic for traditional numerical techniques: commonly used phase-field methods' resolution does not extend to the atomic scales at which nucleation takes place, while atomistic methods such as molecular dynamics are incapable of scaling to the mesoscale regime where late-stage growth and structure formation takes place following earlier nucleation. Consequently, it is of interest to examine nucleation in the more recently proposed phase-field crystal (PFC) model, which attempts to bridge the atomic and mesoscale regimes in microstructure simulations. In this work, we numerically calculate homogeneous liquid-to-solid nucleation rates and incubation times in the simplest version of the PFC model, for various parameter choices. We show that the model naturally exhibits qualitative agreement with the predictions of classical nucleation theory (CNT) despite a lack of some explicit atomistic features presumed in CNT. We also examine the early appearance of lattice structure in nucleating grains, finding disagreement with some basic assumptions of CNT. We then argue that a quantitatively correct nucleation theory for the PFC model would require extending CNT to a multivariable theory.
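For reference, the CNT predictions against which the PFC rates are compared follow from the competition between the bulk driving force and the surface energy of a spherical nucleus (standard CNT, not specific to the PFC model):

```latex
\Delta G(r) = \tfrac{4}{3}\pi r^{3}\,\Delta g_v + 4\pi r^{2}\gamma, \qquad
r^{*} = -\frac{2\gamma}{\Delta g_v}, \qquad
\Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,\Delta g_v^{2}}, \qquad
J = J_{0}\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right),
```

where \(\Delta g_v < 0\) is the bulk free-energy density difference between solid and liquid, \(\gamma\) the interfacial energy, and \(J\) the nucleation rate; the "multivariable" extension argued for above would replace the single reaction coordinate \(r\) with several.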
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj
2006-01-01
A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.
Method and apparatus for optical encoding with compressible imaging
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2006-01-01
The present invention presents an optical encoder with increased conversion rates. Improvement in the conversion rate is a result of combining changes in the pattern recognition encoder's scale pattern with an image sensor readout technique which takes full advantage of those changes, and lends itself to operation by modern, high-speed, ultra-compact microprocessors and digital signal processors (DSP) or field programmable gate array (FPGA) logic elements which can process encoder scale images at the highest speeds. Through these improvements, all three components of conversion time (the reciprocal of the conversion rate), namely exposure time, image readout time, and image processing time, are minimized.
Monte Carlo calculations of lunar regolith thickness distributions.
NASA Technical Reports Server (NTRS)
Oberbeck, V. R.; Quaide, W. L.; Mahan, M.; Paulson, J.
1973-01-01
It is pointed out that none of the existing models of lunar regolith evolution take into account the relationship between regolith thickness, crater shape, and volume of debris ejected. The results of a Monte Carlo computer simulation of regolith evolution are presented. The simulation was designed to consider the full effect of the buffering regolith through calculation of the amount of debris produced by any given crater as a function of the amount of debris present at the site of the crater at the time of crater formation. The method is essentially an improved version of the Oberbeck and Quaide (1968) model.
NASA Astrophysics Data System (ADS)
Lee, Byungjoon; Min, Chohong
2018-05-01
We introduce a stable method for solving the incompressible Navier-Stokes equations with variable density and viscosity. Our method is stable in the sense that it does not increase the total energy of the dynamics, that is, the sum of kinetic and potential energy. Instead of velocity, a new state variable is taken so that the kinetic energy is formulated by the L2 norm of the new variable. The Navier-Stokes equations are rephrased with respect to the new variable, and a stable time discretization for the rephrased equations is presented. Taking into consideration the incompressibility on the Marker-And-Cell (MAC) grid, we present a modified Lax-Friedrichs method that is L2 stable. Utilizing discrete integration by parts on the MAC grid and the modified Lax-Friedrichs method, the time discretization is extended to a full discretization. An explicit CFL condition for the stability of the full discretization is given and mathematically proved.
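The L2 stability being claimed can be illustrated on the classical Lax-Friedrichs scheme for 1-D linear advection u_t + c u_x = 0 with periodic boundaries. This toy is not the paper's modified MAC-grid variant; it only demonstrates the generic property that, under the CFL condition c*dt/dx <= 1, the discrete L2 norm does not grow.

```python
import numpy as np

def lax_friedrichs_step(u, c, dt, dx):
    """One Lax-Friedrichs step for u_t + c u_x = 0 on a periodic grid."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    return 0.5 * (up + um) - c * dt / (2.0 * dx) * (up - um)

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = 1.0 / n
c, dt = 1.0, 0.8 * dx                      # CFL number 0.8 < 1
u = np.exp(-100.0 * (x - 0.5) ** 2)        # smooth initial pulse
norm0 = np.linalg.norm(u) * np.sqrt(dx)
for _ in range(300):
    u = lax_friedrichs_step(u, c, dt, dx)
norm1 = np.linalg.norm(u) * np.sqrt(dx)
```

The scheme is in fact dissipative, so the norm strictly decreases; the paper's construction aims at the same non-increase property for the full variable-density energy.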
Extending the Peak Bandwidth of Parameters for Softmax Selection in Reinforcement Learning.
Iwata, Kazunori
2016-05-11
Softmax selection is one of the most popular methods for action selection in reinforcement learning. Although various recently proposed methods may be more effective with full parameter tuning, implementing a complicated method that requires the tuning of many parameters can be difficult. Thus, softmax selection is still worth revisiting, considering the cost savings of its implementation and tuning. In fact, this method works adequately in practice with only one parameter appropriately set for the environment. The aim of this paper is to improve the variable setting of this method to extend the bandwidth of good parameters, thereby reducing the cost of implementation and parameter tuning. To achieve this, we take advantage of the asymptotic equipartition property in a Markov decision process to extend the peak bandwidth of softmax selection. Using a variety of episodic tasks, we show that our setting is effective in extending the bandwidth and that it yields a better policy in terms of stability. The bandwidth is quantitatively assessed in a series of statistical tests.
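Softmax (Boltzmann) action selection itself is simple to state: actions are sampled with probability proportional to exp(Q(a)/tau), where the temperature tau is the single parameter whose useful bandwidth the paper sets out to extend. A minimal sketch with numerically stable exponentials:

```python
import numpy as np

def softmax_select(q_values, tau, rng):
    """Sample an action index with P(a) proportional to exp(Q(a) / tau)."""
    z = np.asarray(q_values, dtype=float) / tau
    z -= z.max()                  # shift to avoid overflow in exp
    p = np.exp(z)
    p /= p.sum()
    return rng.choice(len(p), p=p)

rng = np.random.default_rng(0)
q = [1.0, 2.0, 0.5]
picks = np.bincount([softmax_select(q, 0.1, rng) for _ in range(1000)],
                    minlength=3)
```

With a low temperature (tau = 0.1) selection is nearly greedy, concentrating on the highest-valued action; as tau grows, selection approaches uniform exploration, which is why a wide band of workable tau values lowers the tuning cost the abstract discusses.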
Practice It: Deep Conscious Breathing Exercise
No time to sit and breathe? No problem; take your breathing practice with you! Deep conscious breathing can also be done with the eyes open wherever you happen to be—simply pause and take two to three full deep breaths (inhale deeply and exhale completely).
50 CFR 217.82 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 217.82 Section 217.82 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... School (NEODS) Training Operations § 217.82 Permissible methods of taking. (a) Under Letters of...
50 CFR 217.172 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 217.172 Section 217.172 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... Neptune Liquefied Natural Gas Facility Off Massachusetts § 217.172 Permissible methods of taking. (a...
50 CFR 217.202 - Permissible methods of taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 10 2012-10-01 2012-10-01 false Permissible methods of taking. 217.202 Section 217.202 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... Redevelopment Project § 217.202 Permissible methods of taking. (a) Under Letters of Authorization issued...
NASA Astrophysics Data System (ADS)
Furquan, Mohammad; Raj Khatribail, Anish; Vijayalakshmi, Savithri; Mitra, Sagar
2018-04-01
Silicon is an attractive anode material for Li-ion cells, offering an energy density 30% higher than any of today's commercial Li-ion cells. In the current study, an environmentally benign, highly abundant, and low-cost sand (SiO2) source was used to prepare nano-silicon via a scalable metallothermic reduction method using microwave heating. We developed and optimized a method to synthesize high-purity nano-silicon powder that requires only 5 min of microwave heating of a sand and magnesium mixture at 800 °C. Carbon-coated nano-silicon electrode material was prepared by a unique method of coating, polymerization, and finally in-situ carbonization of furfuryl alcohol onto the high-purity nano-silicon. A half cell using the carbon-coated high-purity Si showed a stable capacity of 1500 mAh g-1 at 6 A g-1 for over 200 cycles. A full cell was fabricated using lithium cobalt oxide of thickness ≈56 μm as cathode and a thin carbon-coated silicon anode of thickness ≈9 μm. The compact full cell exhibits excellent volumetric capacity retention of 1649 mAh cm-3 at 0.5 C rate (C = 4200 mAh g-1) and extended cycle life (600 cycles). The full cell was demonstrated on an LED lantern and an LED display board.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Wei-Zhong, E-mail: xmjbq007@163.com; Yang, Zheng-Qiang, E-mail: ntdoctoryang@hotmail.com; Liu, Sheng, E-mail: liusheng1137@sina.com
Purpose: To evaluate the clinical effectiveness of a newly designed stent for the treatment of malignant distal duodenal stenosis. Methods: From March 2011 to May 2013, six patients with malignant duodenal stenosis underwent fluoroscopically guided placement of the new duodenal stent consisting of braided, nested stent wires, and a delivery system with a metallic mesh inner layer. Primary diseases were pancreatic cancer in three patients, gastric cancer in two patients, and endometrial stromal sarcoma in one patient. Duodenal obstructions were located in the horizontal part in two patients, the ascending part in two patients, and the duodenojejunal flexure in two patients. Technical success, defined as the successful stent deployment, clinical symptoms before and after the procedure, and complications were evaluated. Results: Technical success was achieved in all patients. No major complications were observed. Before treatment, two patients could not take any food and the gastric outlet obstruction scoring system (GOOSS) score was 0; the other four patients could take only liquids orally (GOOSS score = 1). After treatment, five patients could take soft food (GOOSS score = 2) and one patient could take a full diet (GOOSS score = 3). The mean duration of primary stent patency was 115.7 days. Conclusions: The newly designed stent is associated with a high degree of technical success and good clinical outcome and may be clinically effective in the management of malignant distal duodenal obstruction.
50 CFR 217.72 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 217.72 Section 217.72 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... Kodiak Launch Complex, Alaska § 217.72 Permissible methods of taking. (a) Under a Letter of Authorization...
50 CFR 217.13 - Permissible methods of taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 10 2012-10-01 2012-10-01 false Permissible methods of taking. 217.13 Section 217.13 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... at Monterey Bay National Marine Sanctuary, CA § 217.13 Permissible methods of taking. (a) Under LOAs...
50 CFR 217.153 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 217.153 Section 217.153 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... Natural Gas Deepwater Port in the Gulf of Mexico § 217.153 Permissible methods of taking. (a) Under LOAs...
50 CFR 218.171 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 218.171 Section 218.171 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... Complex and the Associated Proposed Extensions Study Area § 218.171 Permissible methods of taking. (a...
50 CFR 216.152 - Permissible methods of taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 10 2012-10-01 2012-10-01 false Permissible methods of taking. 216.152 Section 216.152 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC....152 Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to §§ 216.106...
50 CFR 216.122 - Permissible methods of taking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Permissible methods of taking. 216.122 Section 216.122 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to § 216.106 and 216.127...
50 CFR 216.213 - Permissible methods of taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 10 2012-10-01 2012-10-01 false Permissible methods of taking. 216.213 Section 216.213 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... methods of taking. The Holder of a Letter of Authorization issued pursuant to § 216.218, may incidentally...
50 CFR 216.252 - Permissible methods of taking.
Code of Federal Regulations, 2010 CFR
2010-10-01
... MAMMALS Taking Marine Mammals Incidental to Conducting Precision Strike Weapon Missions in the Gulf of Mexico § 216.252 Permissible methods of taking. (a) Under Letters of Authorization issued pursuant to...
Coronary Artery Bypass Grafting
... or she will advise you about what you can eat or drink, which medicines to take, and which activities to stop (such as smoking). ... traditional CABG) Full recovery from traditional CABG may take 6 to 12 ... tell you when you can start physical activity again. It varies from person ...
Caregiver Leave-Taking in Spain: Rate, Motivations, and Barriers.
Rogero-García, Jesús; García-Sainz, Cristina
2016-01-01
This paper aims to (1) determine the rate of (full- and part-time) caregiver leave-taking in Spain, (2) identify the reasons conducive to a more intense use of this resource, and (3) ascertain the main obstacles to its use, as perceived by caregivers. All 896 people covered by the sample were engaged in paid work and had cared for dependent adults in the last 12 years. This resource, in particular the full-time alternative, was found to be a minority option. The data showed that legal, work-related, family, and gender norm issues are the four types of factors that determine the decision to take such leaves. The most significant obstacles to their use are the forfeiture of income and the risk of losing one's job. Our results suggest that income replacement during a leave would increase the take-up of these resources. Moreover, enlargement of public care services would promote the use of leave as a free choice of caregivers.
Multiscale Structure of UXO Site Characterization: Spatial Estimation and Uncertainty Quantification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ostrouchov, George; Doll, William E.; Beard, Les P.
2009-01-01
Unexploded ordnance (UXO) site characterization must consider both how the contamination is generated and how we observe that contamination. Within the generation and observation processes, dependence structures can be exploited at multiple scales. We describe a conceptual site characterization process, the dependence structures available at several scales, and consider their statistical estimation aspects. It is evident that most of the statistical methods that are needed to address the estimation problems are known but their application-specific implementation may not be available. We demonstrate estimation at one scale and propose a representation for site contamination intensity that takes full account of uncertainty, is flexible enough to answer regulatory requirements, and is a practical tool for managing detailed spatial site characterization and remediation. The representation is based on point process spatial estimation methods that require modern computational resources for practical application. These methods have provisions for including prior and covariate information.
Thermo-electrochemical instrumentation of cylindrical Li-ion cells
NASA Astrophysics Data System (ADS)
McTurk, Euan; Amietszajew, Tazdin; Fleming, Joe; Bhagat, Rohit
2018-03-01
The performance evaluation and optimisation of commercially available lithium-ion cells is typically based on their full-cell potential and surface temperature measurements, despite these parameters not being fully representative of the electrochemical processes taking place in the core of the cell or at each electrode. Several methods were devised to obtain the cell core temperature and electrode-specific potential profiles of cylindrical Li-ion cells. Optical fibres with Bragg gratings were found to produce reliable core temperature data, while their small mechanical profile allowed for a low-impact instrumentation method. A pure metallic lithium reference electrode insertion method was identified that avoids interference with other elements of the cell while ensuring good contact, enabling in-situ observation of the per-electrode electrochemical responses. Our thermo-electrochemical instrumentation technique has enabled us to collect unprecedented cell data and has subsequently been used in advanced studies exploring the real-world performance limits of commercial cells.
The genealogy of personal names: towards a more productive method in historical onomastics.
Kotilainen, Sofia
2011-01-01
It is essential to combine genealogical and collective biographical approaches with network analysis if one wants to take full advantage of the evidence provided by (hereditary) personal names in historical and linguistic onomastic research. The naming practices of rural families and clans from the 18th to the 20th century can yield much fresh information about their enduring attitudes and values, as well as about other mentalities of everyday life. Personal names were cultural symbols that carried socially shared meanings. With the help of the genealogical method it is possible to obtain a more nuanced understanding of these past naming practices, for example by comparing the conventions of different communities. Long-term and systematic empirical research also enables us to dispute certain earlier assumptions that have been taken for granted in historical onomastics. The genealogical method is therefore crucial for studying the criteria behind choices of personal names in the past.
The Flash ADC system and PMT waveform reconstruction for the Daya Bay experiment
NASA Astrophysics Data System (ADS)
Huang, Yongbo; Chang, Jinfan; Cheng, Yaping; Chen, Zhang; Hu, Jun; Ji, Xiaolu; Li, Fei; Li, Jin; Li, Qiuju; Qian, Xin; Jetter, Soeren; Wang, Wei; Wang, Zheng; Xu, Yu; Yu, Zeyuan
2018-07-01
To better understand the energy response of the Antineutrino Detector (AD), the Daya Bay Reactor Neutrino Experiment installed a full Flash ADC readout system on one AD that allowed for simultaneous data taking with the current readout system. This paper presents the design, data acquisition, and simulation of the Flash ADC system, and focuses on the PMT waveform reconstruction algorithms. For liquid scintillator calorimetry, the most critical requirement for waveform reconstruction is linearity. Several common reconstruction methods were tested, but their linearity performance was not satisfactory. A new method based on the deconvolution technique was developed with 1% residual non-linearity, which fulfils the requirement. The performance was validated with both data and Monte Carlo (MC) simulations, and 1% consistency between them was achieved.
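As a sketch of the general technique, not the Daya Bay algorithm itself, whose filter design is more elaborate, frequency-domain deconvolution of a digitized waveform by a single-photoelectron response with simple regularisation might look like this. All names and the regularisation constant are illustrative assumptions.

```python
import numpy as np

def deconvolve_waveform(waveform, response, reg=1e-6):
    """Deconvolve a digitized waveform by the single-PE response in the
    frequency domain. The Tikhonov-style term `reg` keeps the inverse
    filter from amplifying noise at frequencies where the response is
    small; linearity is preserved because every step is linear in the
    input waveform."""
    n = len(waveform)
    W = np.fft.rfft(waveform, n)
    R = np.fft.rfft(response, n)
    H = np.conj(R) / (np.abs(R) ** 2 + reg)   # regularised inverse filter
    return np.fft.irfft(W * H, n)
```

Linearity, the abstract's key requirement, is the motivation for a filter of this form: scaling the input waveform scales the reconstructed charge by the same factor.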
NASA Astrophysics Data System (ADS)
Brozis, Mirosław; Świderski, Kamil
2018-05-01
Our students built a full-size, mobile planetarium in three weeks. The planetarium was built with commonly available, cheap construction materials. Our priorities were mobility, the possibility of quick assembly and reassembly, and the availability of the materials to students anywhere in the world. The students calculated all the parameters of the planetarium's construction themselves, chose materials with appropriate technical parameters, built the planetarium's framework, and developed the methods of projection and sound. Taking the spectators' comfort into consideration, they also designed air conditioning and cooling systems. The project is completely consistent with the STEM, and even the STEAM, approach. The artistic element of the students' work was revealed in the visualisation of the planetarium projections and its adornment. The final products of their work are a functional planetarium and a manual for its construction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, H., E-mail: tanaka@semicon.kuee.kyoto-u.ac.jp; Morioka, N.; Mori, S.
2014-02-07
The conduction band structure and electron effective mass of GaAs nanowires with various cross-sectional shapes and orientations were calculated by two methods, a tight-binding method and an effective mass equation taking the bulk full-band structure into account. The effective mass of nanowires increases as the cross-sectional size decreases, and this increase in effective mass depends on the orientations and substrate faces of nanowires. Among [001], [110], and [111]-oriented rectangular cross-sectional GaAs nanowires, [110]-oriented nanowires with wider width along the [001] direction showed the lightest effective mass. This dependence originates from the anisotropy of the Γ valley of bulk GaAs. The relationship between effective mass and bulk band structure is discussed.
NASA Astrophysics Data System (ADS)
Bagli, Enrico; Guidi, Vincenzo
2013-08-01
A toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures, called DYNECHARM++, has been developed. The code has been written in C++, taking advantage of the object-oriented programming paradigm. The code is capable of evaluating the electrical characteristics of complex atomic structures and of simulating and tracking particle trajectories within them. A calculation method for the electrical characteristics based on their expansion in Fourier series has been adopted. Two different approaches to simulating the interaction have been adopted, relying on full integration of particle trajectories under the continuum potential approximation and on the definition of cross-sections for coherent processes. Finally, the code has been shown to reproduce experimental results and to simulate the interaction of charged particles with complex structures.
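The Fourier-series evaluation of a periodic potential that the abstract's calculation method rests on can be illustrated generically as follows. The coefficients and names here are hypothetical, for illustration only, not DYNECHARM++'s actual data or API.

```python
import numpy as np

def planar_potential(x, coeffs, d_p):
    """Evaluate a periodic continuum planar potential from complex
    Fourier coefficients c_n:
        U(x) = Re[ sum_n c_n * exp(2*pi*i*n*x / d_p) ],
    where d_p is the interplanar period. Once the coefficients are
    tabulated, U(x) is cheap to evaluate at any point along a
    trajectory."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    c = np.asarray(coeffs, dtype=complex)
    n = np.arange(len(c))
    phase = np.exp(2j * np.pi * np.outer(n, x) / d_p)  # shape (N, len(x))
    return np.real(c @ phase)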
Why Online Education Will Attain Full Scale
ERIC Educational Resources Information Center
Sener, John
2010-01-01
Online higher education has attained scale and is poised to take the next step in its growth. Although significant obstacles to a full scale adoption of online education remain, we will see full scale adoption of online higher education within the next five to ten years. Practically all higher education students will experience online education in…
Circuit-based versus full-wave modelling of active microwave circuits
NASA Astrophysics Data System (ADS)
Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.
2018-03-01
Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of the circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between the simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of measured parameters de-embedding, appropriate modelling of discrete components and giving specific recipes for good modelling practices.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-27
... permissible methods of taking, other means of effecting the least practicable impact on the species or stock... non-destructive sampling methods to monitor rocky intertidal algal and invertebrate species abundances... and random quadrat are sampled, using methods described by Foster et al. (1991) and Dethier et al...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-06
... mathematical methods would be appropriate for estimating the probability of marine mammals entering the various safety zones undetected; (3) use the mathematical methods determined above to assess the effectiveness of..., NMFS must prescribe regulations that include permissible methods of taking and other means effecting...
Singh, Jay; Chattterjee, Kalyan; Vishwakarma, C B
2018-01-01
Load frequency controller has been designed for reduced order model of single area and two-area reheat hydro-thermal power system through internal model control - proportional integral derivative (IMC-PID) control techniques. The controller design method is based on two degree of freedom (2DOF) internal model control which combines with model order reduction technique. Here, in spite of taking full order system model a reduced order model has been considered for 2DOF-IMC-PID design and the designed controller is directly applied to full order system model. The Logarithmic based model order reduction technique is proposed to reduce the single and two-area high order power systems for the application of controller design.The proposed IMC-PID design of reduced order model achieves good dynamic response and robustness against load disturbance with the original high order system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Incentive schemes in development of socio-economic systems
NASA Astrophysics Data System (ADS)
Grachev, V. V.; Ivushkin, K. A.; Myshlyaev, L. P.
2018-05-01
The paper is devoted to the study of incentive schemes when developing socio-economic systems. The article analyzes the existing incentive schemes. It is established that the traditional incentive mechanisms do not fully take into account the specifics of the creation of each socio-economic system and, as a rule, are difficult to implement. The incentive schemes based on the full-scale simulation approach, which allow the most complete information from the existing projects of creation of socio-economic systems to be extracted, are proposed. The statement of the problem is given, the method and algorithm of the full-scale simulation study of the efficiency of incentive functions is developed. The results of the study are presented. It is shown that the use of quadratic and piecewise linear functions of incentive allows the time and costs for creating social and economic systems to be reduced by 10%-15%.
Klarhöfer, Markus; Dilharreguy, Bixente; van Gelderen, Peter; Moonen, Chrit T W
2003-10-01
A 3D sequence for dynamic susceptibility imaging is proposed which combines echo-shifting principles (such as PRESTO), sensitivity encoding (SENSE), and partial-Fourier acquisition. The method uses a moderate SENSE factor of 2 and takes advantage of an alternating partial k-space acquisition in the "slow" phase encode direction allowing an iterative reconstruction using high-resolution phase estimates. Offering an isotropic spatial resolution of 4 x 4 x 4 mm(3), the novel sequence covers the whole brain including parts of the cerebellum in 0.5 sec. Its temporal signal stability is comparable to that of a full-Fourier, full-FOV EPI sequence having the same dynamic scan time but much less brain coverage. Initial functional MRI experiments showed consistent activation in the motor cortex with an average signal change slightly less than that of EPI. Copyright 2003 Wiley-Liss, Inc.
Small-angle scattering study of Aspergillus awamori glycoprotein glucoamylase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, A. E., E-mail: schmidt@omrb.pnpi.spb.ru; Shvetsov, A. V.; Kuklin, A. I.
2016-01-15
Glucoamylase from fungus Aspergillus awamori is glycoside hydrolase that catalyzes the hydrolysis of α-1,4- and α-1,6-glucosidic bonds in glucose polymers and oligomers. This glycoprotein consists of a catalytic domain and a starch-binding domain connected by an O-glycosylated polypeptide chain. The conformation of the linker, the relative arrangement of the domains, and the structure of the full-length enzyme are unknown. The structure of the recombinant glucoamylase GA1 was studied by molecular modelling and small-angle neutron scattering (SANS) methods. The experimental SANS data provide evidence that glucoamylase exists as a monomer in solution and contains a glycoside component, which makes a substantialmore » contribution to the scattering. The model of full-length glucoamylase, which was calculated without taking into account the effect of glycosylation, is consistent with the experimental data and has a radius of gyration of 33.4 ± 0.6 Å.« less
NASA Astrophysics Data System (ADS)
Kaasbjerg, Kristen; Belzig, Wolfgang
2015-06-01
We develop a conceptually simple scheme based on a master-equation approach to evaluate the full-counting statistics (FCS) of elastic and inelastic off-resonant tunneling (cotunneling) in quantum dots (QDs) and molecules. We demonstrate the method by showing that it reproduces known results for the FCS and shot noise in the cotunneling regime. For a QD with an excited state, we obtain an analytic expression for the cumulant generating function (CGF) taking into account elastic and inelastic cotunneling. From the CGF we find that the shot noise above the inelastic threshold in the cotunneling regime is inherently super-Poissonian when external relaxation is weak. Furthermore, a complete picture of the shot noise across the different transport regimes is given. In the case where the excited state is a blocking state, strongly enhanced shot noise is predicted both in the resonant and cotunneling regimes.
Fast-neutron/gamma-ray radiography scanner for the detection of contraband in air cargo containers
NASA Astrophysics Data System (ADS)
Eberhardt, J.; Liu, Y.; Rainey, S.; Roach, G.; Sowerby, B.; Stevens, R.; Tickner, J.
2006-05-01
There is a worldwide need for efficient inspection of cargo containers at airports, seaports and road border crossings. The main objectives are the detection of contraband such as illicit drugs, explosives and weapons. Due to the large volume of cargo passing through Australia's airports every day, it is critical that any scanning system should be capable of working on unpacked or consolidated cargo, taking at most 1-2 minutes per container. CSIRO has developed a fast-neutron/gamma-ray radiography (FNGR) method for the rapid screening of air freight. By combining radiographs obtained using 14 MeV neutrons and 60Co gamma-rays, high resolution images showing both density and material composition are obtained. A near full-scale prototype scanner has been successfully tested in the laboratory. With the support of the Australian Customs Service, a full-scale scanner has recently been installed and commissioned at Brisbane International Airport.
Turn-Taking, Turn-Giving, and Alzheimer's Disease.
ERIC Educational Resources Information Center
Sabat, Steven R.
1991-01-01
Analysis of a conversation with an Alzheimer's disease sufferer with word-finding problems revealed that social context, speaker characteristics, and awareness of the other speaker's perspective governed such conversational aspects of turn taking and turn giving, which allowed full development of both speakers' personas. (23 references) (CB)
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small amount of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a data base of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain data base can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude less samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior). We apply our method to the Japanese Islands region where we previously constrained 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
Computer-assisted versus oral-and-written dietary history taking for diabetes mellitus.
Wei, Igor; Pappas, Yannis; Car, Josip; Sheikh, Aziz; Majeed, Azeem
2011-12-07
Diabetes is a chronic illness characterised by insulin resistance or deficiency, resulting in elevated glycosylated haemoglobin A1c (HbA1c) levels. Diet and adherence to dietary advice is associated with lower HbA1c levels and control of disease. Dietary history may be an effective clinical tool for diabetes management and has traditionally been taken by oral-and-written methods, although it can also be collected using computer-assisted history taking systems (CAHTS). Although CAHTS were first described in the 1960s, there remains uncertainty about the impact of these methods on dietary history collection, clinical care and patient outcomes such as quality of life. To assess the effects of computer-assisted versus oral-and-written dietary history taking on patient outcomes for diabetes mellitus. We searched The Cochrane Library (issue 6, 2011), MEDLINE (January 1985 to June 2011), EMBASE (January 1980 to June 2011) and CINAHL (January 1981 to June 2011). Reference lists of obtained articles were also pursued further and no limits were imposed on languages and publication status. Randomised controlled trials of computer-assisted versus oral-and-written history taking in patients with diabetes mellitus. Two authors independently scanned the title and abstract of retrieved articles. Potentially relevant articles were investigated as full text. Studies that met the inclusion criteria were abstracted for relevant population and intervention characteristics with any disagreements resolved by discussion, or by a third party. Risk of bias was similarly assessed independently. Of the 2991 studies retrieved, only one study with 38 study participants compared the two methods of history taking over a total of eight weeks. The authors found that as patients became increasingly familiar with using CAHTS, the correlation between patients' food records and computer assessments improved. Reported fat intake decreased in the control group and increased when queried by the computer. 
The effect of the intervention on the management of diabetes mellitus and blood glucose levels was not reported. Risk of bias was considered moderate for this study. Based on one small study judged to be at moderate risk of bias, we tentatively conclude that CAHTS may be well received by study participants and potentially offer time savings in practice. However, more robust studies with larger sample sizes are needed to confirm these findings. We cannot draw any conclusions about other clinical outcomes at this stage.
NASA Astrophysics Data System (ADS)
Fu, Rongxin; Li, Qi; Zhang, Junqi; Wang, Ruliang; Lin, Xue; Xue, Ning; Su, Ya; Jiang, Kai; Huang, Guoliang
2016-10-01
Label-free point mutation detection is particularly important in biomedical research and clinical diagnosis, since gene mutations occur naturally and can give rise to severe diseases. In this paper, a label-free and highly sensitive approach is proposed for point mutation detection based on hyperspectral interferometry. A hybridization strategy is designed to discriminate a single-base substitution with sequence-specific DNA ligase. Double-stranded structures form only if the added oligonucleotides are perfectly paired to the probe sequence. The proposed approach makes full use of the inherent conformation of double-stranded DNA molecules on the substrate, and a spectrum analysis method is established to resolve the sub-nanoscale thickness variation, which enables highly sensitive mutation detection. The limit of detection reaches 4 pg/mm² in the experiments. Detection of a lung cancer gene point mutation was demonstrated, confirming the high selectivity and multiplex analysis capability of the proposed biosensor.
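The kind of spectrum analysis involved can be illustrated with a minimal sketch, assuming a generic two-beam interference model I(k) = 1 + cos(L·k) over wavenumber k, where the optical path L = 2·n·d appears as the dominant oscillation frequency. The paper's actual analysis pipeline is not reproduced here; grid values and units are illustrative.

```python
import numpy as np

# Hedged sketch: recover an optical path L = 2*n*d from the dominant
# oscillation frequency of an interference spectrum sampled over wavenumber.
def estimate_optical_path(wavenumbers, intensity):
    k = np.asarray(wavenumbers, dtype=float)
    signal = np.asarray(intensity, dtype=float) - np.mean(intensity)  # drop DC
    dk = k[1] - k[0]                           # uniform sampling assumed
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(k), d=dk)      # cycles per unit wavenumber
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
    return 2.0 * np.pi * peak                  # convert cycles -> L (rad basis)

k = np.linspace(8.0, 12.0, 4096)               # wavenumber grid (rad/um)
I = 1.0 + np.cos(30.0 * k)                     # film with optical path 30 um
```

A sub-nanoscale thickness change shifts this oscillation frequency slightly, which is what a higher-resolution version of this analysis would track.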
Maintaining relationships with your patients by maximizing your online presence.
Donnelly, John; Kaaihue, Maarit
2011-01-01
Medical practices that take full advantage of today's online consumer-driven culture will leave other practices in their wake. With today's consumers looking to the Internet more and more to find medical solutions for their families, it is imperative that your practice uses all of the tools available for creating and maintaining its online presence. Having a functional Web site is a necessity for practically any business in any industry; however, taking your online presence further with a few techniques can set your practice up for great success. Your online marketing should help your practice manage patient relationships at all levels. To best reach this goal, continually analyzing data and updating your online marketing approach will help further drive leads and conversions. Using a few search engine optimization techniques as well as optimal design and marketing methods will allow you to more easily find prospective patients, build trust and credibility with your current patients, and manage your reputation.
NASA Astrophysics Data System (ADS)
Nasedkin, A. V.
2017-01-01
This research presents new size-dependent models of piezoelectric materials oriented to finite element applications. The proposed models can take into account different damping mechanisms for the mechanical and electric fields. The coupled models also incorporate the equations of the theory of acoustics for viscous fluids. In particular cases, these models permit the use of the mode superposition method, with full separation of the finite element systems into independent equations for the individual modes, for transient and harmonic problems. The main boundary conditions were supplemented to take into account coupled surface effects, allowing nanoscale piezoelectric materials to be explored in the framework of theories of continuous media with surface stresses and their generalizations. For the considered problems we have implemented finite element technologies and various numerical algorithms that maintain the symmetric structure of the quasi-definite finite element matrices (the matrix structure of problems with a saddle point).
29 CFR 541.602 - Salary basis.
Code of Federal Regulations, 2013 CFR
2013-07-01
... takes unpaid leave under the Family and Medical Leave Act. Rather, when an exempt employee takes unpaid leave under the Family and Medical Leave Act, an employer may pay a proportionate part of the full... four hours of unpaid leave under the Family and Medical Leave Act, the employer could deduct 10 percent...
Guidelines for a Scientific Approach to Critical Thinking Assessment
ERIC Educational Resources Information Center
Bensley, D. Alan; Murtagh, Michael P.
2012-01-01
Assessment of student learning outcomes can be a powerful tool for improvement of instruction when a scientific approach is taken; unfortunately, many educators do not take full advantage of this approach. This article examines benefits of taking a scientific approach to critical thinking assessment and proposes guidelines for planning,…
Alcoholism in Athletes: New Directions for Treatment.
ERIC Educational Resources Information Center
Samples, Pat
1989-01-01
Discusses steps that professional sports organizations are taking to identify athletes with drinking problems and help them reach full recovery. Many teams are taking preventive steps such as offering information about the dangers of alcohol, issuing new policies dealing with players' rights and providing for employee assistance programs. (SM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu, Dewu; Xie, Xiaorong; Jiang, Qirong
With the steady increase of power electronic devices and nonlinear dynamic loads in large-scale AC/DC systems, the traditional hybrid simulation method, which incorporates these components into a single EMT subsystem, causes great difficulty for network partitioning and significantly deteriorates simulation efficiency. To resolve these issues, a novel distributed hybrid simulation method is proposed in this paper. The key to realizing this method is a distinct interfacing technique, which includes: i) a new approach based on the two-level Schur complement to update the interfaces, taking full account of the couplings between different EMT subsystems; and ii) a combined interaction protocol to further improve efficiency while guaranteeing simulation accuracy. The advantages of the proposed method in terms of both efficiency and accuracy have been verified by using it for the simulation study of an AC/DC hybrid system including a two-terminal VSC-HVDC and nonlinear dynamic loads.
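The basic building block behind such interface updates, condensing a subsystem's internal unknowns onto its interface variables via a Schur complement, can be sketched as follows. This is a generic one-level illustration, not the paper's two-level algorithm or its interaction protocol.

```python
import numpy as np

# Hedged sketch: solve a partitioned linear system [[A, B], [C, D]] [x; y] = [f; g]
# by eliminating the internal unknowns x and solving on the interface y first.
def schur_solve(A, B, C, D, f, g):
    A_inv_B = np.linalg.solve(A, B)
    A_inv_f = np.linalg.solve(A, f)
    S = D - C @ A_inv_B                           # Schur complement on the interface
    y = np.linalg.solve(S, g - C @ A_inv_f)       # interface unknowns
    x = A_inv_f - A_inv_B @ y                     # back-substitute internals
    return x, y

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # well-conditioned test matrix
A, B, C, D = M[:4, :4], M[:4, 4:], M[4:, :4], M[4:, 4:]
rhs = rng.standard_normal(6)
x, y = schur_solve(A, B, C, D, rhs[:4], rhs[4:])
```

In a distributed setting, each subsystem would contribute its own condensed block to the interface system, so only interface quantities need to be exchanged between solvers.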
Intelligent Systems: Terrestrial Observation and Prediction Using Remote Sensing Data
NASA Technical Reports Server (NTRS)
Coughlan, Joseph C.
2005-01-01
NASA has made science and technology investments to better utilize its large space-borne remote sensing data holdings of the Earth. With the launch of Terra, NASA created a data-rich environment where the challenge is to fully utilize the data collected from EOS. However, despite unprecedented amounts of observed data, there is a need to increase the frequency, resolution, and diversity of observations. Current terrestrial models that use remote sensing data were constructed in a relatively data- and compute-limited era and do not take full advantage of online learning methods and assimilation techniques that can exploit these data. NASA has invested in visualization, data mining and knowledge discovery methods which have facilitated data exploitation, but these methods are insufficient for improving Earth science models that have extensive background knowledge, nor do they refine understanding of complex processes. Investing in interdisciplinary teams that include computational scientists can lead to new models and systems for online operation and analysis of data that can autonomously improve in prediction skill over time.
Strategies for identifying new prions in yeast
MacLea, Kyle S
2011-01-01
The unexpected discovery of two prions, [URE3] and [PSI+], in Saccharomyces cerevisiae led to questions about how many other proteins could undergo similar prion-based structural conversions. However, [URE3] and [PSI+] were discovered by serendipity in genetic screens. Cataloging the full range of prions in yeast or in other organisms will therefore require more systematic search methods. Taking advantage of some of the unique features of prions, various researchers have developed bioinformatic and experimental methods for identifying novel prion proteins. These methods have generated long lists of prion candidates. The systematic testing of some of these prion candidates has led to notable successes; however, even in yeast, where rapid growth rate and ease of genetic manipulation aid in testing for prion activity, such candidate testing is laborious. Development of better methods to winnow the field of prion candidates will greatly aid in the discovery of new prions, both in yeast and in other organisms, and help us to better understand the role of prions in biology. PMID:22052351
A joint sparse representation-based method for double-trial evoked potentials estimation.
Yu, Nannan; Liu, Haikuan; Wang, Xiaoyan; Lu, Hanbing
2013-12-01
In this paper, we present a novel approach to the problem of estimating evoked potentials. Generally, the evoked potentials in two consecutive trials obtained by repeated identical stimuli of the nerves are extremely similar. In order to trace evoked potentials, we propose a joint sparse representation-based double-trial estimation method that takes full advantage of this similarity. The estimation proceeds in three stages: first, given the similarity of evoked potentials and the randomness of the spontaneous electroencephalogram, the two consecutive observations are modelled as superpositions of a common component and unique components; second, two sparse dictionaries are constructed from the characteristics of these components; and finally, the joint sparse representation method is applied to extract the common component of the double-trial observations, instead of the evoked potential in each trial. A series of experiments carried out on simulated and human test responses confirmed the superior performance of our method. © 2013 Elsevier Ltd. Published by Elsevier Ltd. All rights reserved.
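The common-plus-unique decomposition can be sketched as a stacked sparse coding problem: each observation y_i = Dc·a + Du·b_i shares the coefficients a while the b_i differ. The sketch below uses random stand-in dictionaries and a plain ISTA solver, which are assumptions for illustration; the paper's purpose-built dictionaries and solver are not reproduced.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_sparse_common(y1, y2, Dc, Du, lam=0.05, iters=500):
    """Estimate the component shared by y1 and y2 under
    y_i = Dc @ a + Du @ b_i (a common, b_i unique), via ISTA."""
    n, kc = Dc.shape
    ku = Du.shape[1]
    Phi = np.block([[Dc, Du, np.zeros((n, ku))],
                    [Dc, np.zeros((n, ku)), Du]])   # joint dictionary
    y = np.concatenate([y1, y2])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2        # 1/L step size for ISTA
    z = np.zeros(Phi.shape[1])
    for _ in range(iters):
        z = soft_threshold(z - step * Phi.T @ (Phi @ z - y), step * lam)
    return Dc @ z[:kc]                              # common component only

rng = np.random.default_rng(1)
n, kc, ku = 64, 32, 32
Dc = rng.standard_normal((n, kc)); Dc /= np.linalg.norm(Dc, axis=0)
Du = rng.standard_normal((n, ku)); Du /= np.linalg.norm(Du, axis=0)
a = np.zeros(kc); a[[3, 10, 20]] = [1.5, -1.0, 0.8]   # shared (evoked) part
b1 = np.zeros(ku); b1[[5, 15]] = [0.7, -0.5]          # trial-specific parts
b2 = np.zeros(ku); b2[[2, 25]] = [-0.6, 0.9]
common = Dc @ a
y1, y2 = common + Du @ b1, common + Du @ b2
estimate = joint_sparse_common(y1, y2, Dc, Du)
```

Because the shared part appears in both rows of the stacked dictionary, the L1 penalty favours explaining it once through the common coefficients rather than twice through the unique ones.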
Gehring, Tiago V.; Luksys, Gediminas; Sandi, Carmen; Vasilaki, Eleni
2015-01-01
The Morris Water Maze is a widely used task in studies of spatial learning with rodents. Classical performance measures of animals in the Morris Water Maze include the escape latency, and the cumulative distance to the platform. Other methods focus on classifying trajectory patterns to stereotypical classes representing different animal strategies. However, these approaches typically consider trajectories as a whole, and as a consequence they assign one full trajectory to one class, whereas animals often switch between these strategies, and their corresponding classes, within a single trial. We therefore take a different approach: we look for segments of diverse animal behaviour within one trial and employ a semi-automated classification method for identifying the various strategies exhibited by the animals within a trial. Our method allows us to reveal significant and systematic differences in the exploration strategies of two animal groups (stressed, non-stressed), that would be unobserved by earlier methods. PMID:26423140
Bourdel, Nicolas; Chauvet, Pauline; Tognazza, Enrica; Pereira, Bruno; Botchorishvili, Revaz; Canis, Michel
2016-01-01
Our objective was to identify the most accurate method of endometrial sampling for the diagnosis of complex atypical hyperplasia (CAH), and the related risk of underestimation of endometrial cancer. We conducted a systematic literature search in PubMed and EMBASE (January 1999-September 2013) to identify all registered articles on this subject. Studies were selected with a 2-step method. First, titles and abstracts were analyzed by 2 reviewers, and 69 relevant articles were selected for full reading. Then, the full articles were evaluated to determine whether full inclusion criteria were met. We selected 27 studies, taking into consideration the comparison between histology of endometrial hyperplasia obtained by diagnostic tests of interest (uterine curettage, hysteroscopically guided biopsy, or hysteroscopic endometrial resection) and subsequent results of hysterectomy. Analysis of the studies reviewed focused on 1106 patients with a preoperative diagnosis of atypical endometrial hyperplasia. The mean risk of finding endometrial cancer at hysterectomy after atypical endometrial hyperplasia diagnosed by uterine curettage was 32.7% (95% confidence interval [CI], 26.2-39.9), with a risk of 45.3% (95% CI, 32.8-58.5) after hysteroscopically guided biopsy and 5.8% (95% CI, 0.8-31.7) after hysteroscopic resection. In total, the risk of underestimation of endometrial cancer reaches a very high rate in patients with CAH using the classic method of evaluation (i.e., uterine curettage or hysteroscopically guided biopsy). This rate of underdiagnosed endometrial cancer leads to the risk of inappropriate surgical procedures (31.7% of tubal conservation in the data available and no abdominal exploration in 24.6% of the cases). Hysteroscopic resection seems to reduce the risk of underdiagnosed endometrial cancer. Copyright © 2016 AAGL. Published by Elsevier Inc. All rights reserved.
[Study on the indexes of forensic identification by the occlusal-facial digital radiology].
Gao, Dong; Wang, Hu; Hu, Jin-liang; Xu, Zhe; Deng, Zhen-hua
2006-02-01
To discuss the coding of the full dentition with 32 locations and measure characteristic bony indexes in occlusal-facial digital radiology (DR). Three hundred DR orthopantomograms were randomly selected and the full dentition coded, then the diversity of dental patterns was analyzed. One hundred DR lateral cephalograms were randomly selected and six indexes (N-S, N-Me, Cd-Gn, Cd-Go, NP-SN, MP-SN) were measured separately by one odontologist and one trained forensic graduate student; the coefficient of variation (CV) of every index was then calculated and a correlation analysis performed on the consistency between the two sets of measurements. (1) The total diversity of the 300 dental patterns was 75%, a very high value. (2) All six quantitative variables had comparatively high CV values. (3) In the linear correlation analysis between the two sets of measurements, all six correlation coefficients were close to 1, indicating that the measurements were stable and consistent. The method of coding the full dentition in DR orthopantomograms and measuring six bony indexes in DR lateral cephalograms can be used for forensic identification.
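The two summary statistics reported here, the coefficient of variation of an index and the inter-observer correlation, can be sketched as follows. The measurement values below are illustrative, not the study's data.

```python
import numpy as np

# Hedged sketch of the study's two statistics: CV of a cephalometric index,
# and Pearson correlation between two observers' measurements of the same films.
def coefficient_of_variation(x):
    x = np.asarray(x, dtype=float)
    return np.std(x, ddof=1) / np.mean(x) * 100.0   # sample CV, in percent

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

# Illustrative N-S distances (mm) recorded by the two observers
observer_a = np.array([68.2, 71.5, 69.8, 73.1, 70.4])
observer_b = np.array([68.0, 71.9, 69.5, 73.4, 70.1])
```

A correlation coefficient close to 1 between the two observers is what the abstract reports as evidence that the measurements are stable and consistent.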
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Robert E.; Overy, Catherine; Opalka, Daniel
Unbiased stochastic sampling of the one- and two-body reduced density matrices is achieved in full configuration interaction quantum Monte Carlo with the introduction of a second, "replica" ensemble of walkers, whose population evolves in imaginary time independently from the first and which entails only modest additional computational overheads. The matrices obtained from this approach are shown to be of full configuration-interaction quality and hence provide a realistic opportunity to achieve high-quality results for a range of properties whose operators do not necessarily commute with the Hamiltonian. A density-matrix-formulated quasi-variational energy estimator having already been proposed and investigated, the present work extends the scope of the theory to take in studies of analytic nuclear forces, molecular dipole moments, and polarisabilities, with extensive comparison to exact results where possible. These new results confirm the suitability of the sampling technique and, where sufficiently large basis sets are available, achieve close agreement with experimental values, expanding the scope of the method to new areas of investigation.
3D conditional generative adversarial networks for high-quality PET image estimation at low dose.
Wang, Yan; Yu, Biting; Wang, Lei; Zu, Chen; Lalush, David S; Lin, Weili; Wu, Xi; Zhou, Jiliu; Shen, Dinggang; Zhou, Luping
2018-07-01
Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical needs. This inevitably raises concerns about potential health hazards. On the other hand, dose reduction may increase the noise in the reconstructed PET images, which impacts the image quality to a certain extent. In this paper, in order to reduce the radiation exposure while maintaining the high quality of PET images, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) include a generator network and a discriminator network which are trained simultaneously, each with the goal of beating the other. Similar to GANs, in the proposed 3D c-GANs, we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to render the same underlying information between the low-dose and full-dose PET images, a 3D U-net-like deep architecture which can combine hierarchical features by using skip connections is designed as the generator network to synthesize the full-dose image. In order to guarantee the synthesized PET image is close to the real one, we take into account the estimation error loss in addition to the discriminator feedback to train the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is also proposed to further improve the quality of estimated images. Validation was done on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). 
Experimental results show that our proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than the state-of-the-art methods in both qualitative and quantitative measures. Copyright © 2018 Elsevier Inc. All rights reserved.
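The generator objective described above, discriminator feedback plus an estimation-error term, can be sketched as a single combined loss. The L1 form of the error and the weight `lam` are assumptions for illustration; the paper's exact loss functions are not given here.

```python
import numpy as np

# Hedged sketch: a c-GAN-style generator loss combining adversarial feedback
# with a voxel-wise estimation error against the full-dose target.
def generator_loss(d_fake, synthesized, full_dose, lam=100.0):
    """d_fake: discriminator scores in (0, 1) for synthesized images.
    synthesized / full_dose: image arrays of equal shape."""
    eps = 1e-12
    adversarial = -np.mean(np.log(d_fake + eps))           # fool the discriminator
    estimation = np.mean(np.abs(synthesized - full_dose))  # assumed L1 error term
    return adversarial + lam * estimation
```

The estimation term anchors the generator to the ground-truth image, so it cannot satisfy the discriminator with outputs that look plausible but diverge from the real full-dose scan.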
Achieving cost-neutrality with long-acting reversible contraceptive methods.
Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna
2015-01-01
This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration. A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20-29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were annual average cost per method and minimum duration of LARC method usage to achieve cost-savings compared to SARC methods. The two least expensive methods were copper IUD ($304 per woman, per year) and LNG-IUS 20 mcg/24 h ($308). Cost of SARC methods ranged between $432 (injection) and $730 (patch), per woman, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. 
This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. Copyright © 2014 Elsevier Inc. All rights reserved.
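The break-even logic behind this analysis can be sketched in its simplest form: a LARC method's largely up-front cost amortized against a SARC method's recurring annual cost. The up-front figure below is hypothetical, the annual SARC costs are the abstract's, and failure and discontinuation costs are ignored here, which is exactly the simplification the full model avoids.

```python
# Hedged sketch of LARC vs SARC break-even duration. Upfront LARC cost is
# a hypothetical placeholder; annual SARC costs come from the abstract.
def breakeven_years(larc_upfront, sarc_annual):
    """Years of use after which the LARC method becomes cheaper than SARC."""
    return larc_upfront / sarc_annual

larc_upfront = 1000.0                                  # hypothetical device + insertion
sarc_annual = {"injection": 432.0, "patch": 730.0}     # $/woman/year, from abstract
years = {m: breakeven_years(larc_upfront, c) for m, c in sarc_annual.items()}
```

The more expensive the recurring SARC alternative, the sooner the LARC up-front cost is recovered, which is why break-even falls within a few years even under discontinuation.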
Annotated chemical patent corpus: a gold standard for text mining.
Akhondi, Saber A; Klenner, Alexander G; Tyrchan, Christian; Manchala, Anil K; Boppana, Kiran; Lowe, Daniel; Zimmermann, Marc; Jagarlapudi, Sarma A R P; Sayle, Roger; Kors, Jan A; Muresan, Sorel
2014-01-01
Exploring the chemical and biological space covered by patent applications is crucial in early-stage medicinal chemistry activities. Patent analysis can provide understanding of compound prior art, novelty checking, validation of biological assays, and identification of new starting points for chemical exploration. Extracting chemical and biological entities from patents through manual extraction by expert curators can take a substantial amount of time and resources. Text mining methods can help to ease this process. To validate the performance of such methods, a manually annotated patent corpus is essential. In this study we have produced a large gold standard chemical patent corpus. We developed annotation guidelines and selected 200 full patents from the World Intellectual Property Organization, United States Patent and Trademark Office, and European Patent Office. The patents were pre-annotated automatically and made available to four independent annotator groups each consisting of two to ten annotators. The annotators marked chemicals in different subclasses, diseases, targets, and modes of action. Spelling mistakes and spurious line breaks due to optical character recognition errors were also annotated. A subset of 47 patents was annotated by at least three annotator groups, from which harmonized annotations and inter-annotator agreement scores were derived. One group annotated the full set. The patent corpus includes 400,125 annotations for the full set and 36,537 annotations for the harmonized set. All patents and annotated entities are publicly available at www.biosemantics.org.
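One common way to compute inter-annotator agreement scores of the kind derived here is a pairwise F1 over exact-match entity annotations; the sketch below assumes that convention and toy annotation tuples, since the corpus's actual harmonization procedure is not specified in the abstract.

```python
# Hedged sketch: pairwise inter-annotator agreement as an F1 score over
# exact-match annotations, each a (start, end, entity_class) tuple.
def pairwise_f1(ann_a, ann_b):
    if not ann_a and not ann_b:
        return 1.0                      # both empty: perfect agreement
    overlap = len(ann_a & ann_b)        # annotations both groups marked
    precision = overlap / len(ann_b) if ann_b else 0.0
    recall = overlap / len(ann_a) if ann_a else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy annotations from two hypothetical annotator groups
group_1 = {(0, 7, "chemical"), (12, 20, "disease"), (25, 33, "target")}
group_2 = {(0, 7, "chemical"), (12, 20, "disease"), (40, 48, "target")}
```

With three or more groups, agreement is typically reported as the average of such pairwise scores over all group pairs.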
Tan, Fa-Bing; Wang, Lu; Fu, Gang; Wu, Shu-Hong; Jin, Ping
2010-02-01
To study the effect of different optical impression methods in the Cerec 3D/Inlab MC XL system on the marginal and internal fit of all-ceramic crowns. A right mandibular first molar in a standard model was prepared for a full crown and replicated into thirty-two plaster casts. Sixteen of them were selected randomly for bonding crowns and the others were used for taking optical impressions, with the direct optical impression method used in half and the indirect method in the rest; eight Cerec Blocs all-ceramic crowns were then manufactured for each. The fit of the all-ceramic crowns was evaluated by modified United States Public Health Service (USPHS) criteria and scanning electron microscope (SEM) imaging, and the data were statistically analyzed with SAS 9.1 software. The clinically acceptable rate for all marginal measurement sites was 87.5% according to the USPHS criteria. There was no statistically significant difference in marginal fit between the direct and indirect method groups (P > 0.05). With SEM imaging, gaps at all marginal measurement sites were less than 120 μm, and no statistically significant difference was found between the direct and indirect method groups in terms of marginal or internal fit (P > 0.05). However, the direct method group showed better fit than the indirect method group on the mesial, lingual, buccal and occlusal surfaces (P < 0.05). The fit of the distal surface was worse, and a clear difference was observed between the mesial and distal surfaces in the direct method group (P < 0.01). Under the conditions of this study, the optical impression method had no significant effect on the marginal fit of Cerec Blocs crowns, but it did affect internal fit to some extent. Overall, the all-ceramic crowns appeared to have clinically acceptable marginal fit.
Nonlinear aerodynamics of two-dimensional airfoils in severe maneuver
NASA Technical Reports Server (NTRS)
Scott, Matthew T.; Mccune, James E.
1988-01-01
This paper presents a nonlinear theory of forces and moments acting on a two-dimensional airfoil in unsteady potential flow. Results are obtained for cases of both large and small amplitude motion. The analysis, which is based on an extension of Wagner's integral equation to the nonlinear regime, takes full advantage of the trailing wake's tendency to deform under local velocities. Interactive computational results are presented that show examples of wake-induced lift and moment augmentation on the order of 20 percent of quasi-static values. The expandability and flexibility of the present computational method are noted, as well as the relative speed with which solutions are obtained.
NASA Technical Reports Server (NTRS)
Bernstein, R. B.; Labudde, R. A.
1972-01-01
The problem of inversion is considered in relation to absolute total cross sections Q(v) for atom-atom collisions and their velocity dependence, including the glory undulations and the transition to high-velocity behavior. There is a limit to the amount of information available from Q(v) even when observations of good accuracy (e.g., ±0.25%) are in hand over an extended energy range (from thermal energies upward by a factor of greater than 1000 in relative kinetic energy). Methods were developed for data utilization, which take full advantage of the accuracy of the experimental Q(v) measurements.
2005-03-28
consequently users are torn between taking advantage of increasingly pervasive computing systems, and the price (in attention and skill) that they have to... advantage of the surrounding computing environments; and (c) that it is usable by non-experts. Second, from a software architect’s perspective, we...take full advantage of the computing systems accessible to them, much as they take advantage of the furniture in each physical space. In the example
NASA Technical Reports Server (NTRS)
Kerley, James J.; Eklund, Wayne; Crane, Alan
1992-01-01
Walker supports person with limited use of legs and back. Enables person to stand upright, move with minimum load, and rest at will taking weight off legs. Consists of wheeled frame with body harness connected compliantly to side structures. Harness supports wearer upright when wearer relaxes and takes weight off lower extremities. Assumes partial to full body weight at user's discretion.
Stereotypes and the Achievement Gap: Stereotype Threat Prior to Test Taking
ERIC Educational Resources Information Center
Appel, Markus; Kronberger, Nicole
2012-01-01
Stereotype threat is known as a situational predicament that prevents members of negatively stereotyped groups from performing up to their full ability. This review shows that the detrimental influence of stereotype threat goes beyond test taking: it hinders stereotyped students from building abilities in the first place. Guided by current theory on…
ERIC Educational Resources Information Center
Newton, Xiaoxia A.; Thompson, Shanna Rose; Oh, Bangsil; Ferullo, Leah
2017-01-01
This article describes the collective efforts educators and multiple community partners are taking to transform one alternative urban high school into a full-service community school. The article presents preliminary findings on the opportunities for bridging social capital that the full-service initiative has created and the impacts such…
Redefining Full-Time in College: Evidence on 15-Credit Strategies
ERIC Educational Resources Information Center
Klempin, Serena
2014-01-01
Because federal financial aid guidelines stipulate that students must be enrolled in a minimum of 12 credits per semester in order to receive the full amount of aid, many colleges and universities define full-time enrollment as 12 credits per semester. Yet, if a student takes only 12 credits each fall and spring term, it is impossible to complete…
Hyperspectral tomography based on multi-mode absorption spectroscopy (MUMAS)
NASA Astrophysics Data System (ADS)
Dai, Jinghang; O'Hagan, Seamus; Liu, Hecong; Cai, Weiwei; Ewart, Paul
2017-10-01
This paper demonstrates a hyperspectral tomographic technique that can recover the temperature and concentration fields of gas flows based on multi-mode absorption spectroscopy (MUMAS). The method relies on the recently proposed concept of nonlinear tomography, which takes full advantage of the nonlinear dependence of MUMAS signals on temperature and enables 2D spatial resolution of MUMAS, which is naturally a line-of-sight technique. The principles of MUMAS and nonlinear tomography, as well as the mathematical formulation of the inversion problem, are introduced. Proof-of-concept numerical demonstrations are presented using representative flame phantoms and assuming typical laser parameters. The results show that faithful reconstruction of the temperature distribution is achievable when a signal-to-noise ratio of 20 is assumed. The method can potentially be extended to reconstructing distributions of temperature and the concentrations of multiple flame species simultaneously.
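The nonlinear-tomography idea above can be sketched in a few lines: path-integrated signals depend nonlinearly on the local temperature, and the field is recovered by nonlinear least squares. Everything here (the Gaussian flame phantom, the exp(-E/T) signal model, the horizontal/vertical lines of sight) is an illustrative stand-in, not the authors' MUMAS formulation.

```python
import numpy as np
from scipy.optimize import least_squares

N = 8
y, x = np.mgrid[0:N, 0:N]

def temperature_field(amp, width):
    # Gaussian hot spot on a 300 K background: a simple flame phantom.
    return 300.0 + amp * np.exp(-((x - 3.5) ** 2 + (y - 3.5) ** 2) / width)

def forward(params):
    # Hypothetical per-cell signal ~ exp(-E/T), integrated along horizontal
    # and vertical lines of sight; nonlinear in T, as MUMAS signals are.
    T = temperature_field(*params)
    s = np.exp(-2000.0 / T)
    return np.concatenate([s.sum(axis=1), s.sum(axis=0)])

true_params = np.array([1200.0, 6.0])   # peak rise (K) and width (cells^2)
signals = forward(true_params)          # synthetic line-of-sight data

# Nonlinear inversion: recover the field parameters from the path integrals.
fit = least_squares(lambda p: forward(p) - signals, x0=[800.0, 3.0],
                    bounds=([100.0, 1.0], [3000.0, 20.0]))
amp_hat, width_hat = fit.x
```

Because the signal is nonlinear in temperature, the path integrals carry more information than linear projections would, which is what gives the method 2D resolving power from line-of-sight data.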
Roessler, Christian G; Kuczewski, Anthony; Stearns, Richard; Ellson, Richard; Olechno, Joseph; Orville, Allen M; Allaire, Marc; Soares, Alexei S; Héroux, Annie
2013-09-01
To take full advantage of advanced data collection techniques and high beam flux at next-generation macromolecular crystallography beamlines, rapid and reliable methods will be needed to mount and align many samples per second. One approach is to use an acoustic ejector to eject crystal-containing droplets onto a solid X-ray transparent surface, which can then be positioned and rotated for data collection. Proof-of-concept experiments were conducted at the National Synchrotron Light Source on thermolysin crystals acoustically ejected onto a polyimide `conveyor belt'. Small wedges of data were collected on each crystal, and a complete dataset was assembled from a well diffracting subset of these crystals. Future developments and implementation will focus on achieving ejection and translation of single droplets at a rate of over one hundred per second.
Senior housing in Sweden: a new concept for aging in place.
Henning, Cecilia; Ahnby, Ulla; Osterstrom, Stefan
2009-01-01
Demographic projections of elder care in Sweden necessitate new and creative approaches to accommodate this rapidly growing population. This article describes a unique aging-in-place care and housing policy initiative for the elderly. Using a case example in Eksjo, Sweden, the authors used a future workshop (FW) method to help seniors plan their future housing in the community. The FW is based on a collective democratic process involving full participation, open communication, organizational development, and leadership. The process steps of the three-stage FW method are described. Results indicated that empowerment, collaboration, autonomy, social education, and decision making can be achieved in a community-network-based policy model. This demonstrates the devolution of national policy and how, at the grass roots level, local participation and public accountability can take root. Devolution created an opportunity for creatively addressing local needs.
NASA Astrophysics Data System (ADS)
Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2016-05-01
In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
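A minimal sketch of the two algorithmic ingredients named above: dynamic assignment of independent tracks to threads, and a wide vectorizable inner attenuation loop. The track data and transport model are toy stand-ins, not the proxy applications' actual MOC kernels.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)

# Toy stand-in for MOC tracks: each track is an array of segment optical
# thicknesses, with deliberately uneven lengths to make balancing matter.
tracks = [rng.uniform(0.01, 0.1, size=int(n))
          for n in rng.integers(50, 200, size=64)]

def sweep(tau, psi0=1.0):
    # Wide vectorizable inner loop: evaluate every segment attenuation in
    # one shot, then accumulate along the track; the exit flux equals
    # psi0 * exp(-sum(tau)).
    return psi0 * np.cumprod(np.exp(-tau))[-1]

# Task-based parallelism: tracks are independent, so the executor hands
# them out to worker threads dynamically, load-balancing uneven lengths.
with ThreadPoolExecutor(max_workers=4) as pool:
    exit_fluxes = list(pool.map(sweep, tracks))
```

The same structure (a pool pulling independent tracks from a shared queue, with a SIMD-friendly inner loop) is what lets the algorithm map onto CPU, GPU, and Xeon Phi alike.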
Full two-dimensional transient solutions of electrothermal aircraft blade deicing
NASA Technical Reports Server (NTRS)
Masiulaniec, K. C.; Keith, T. G., Jr.; Dewitt, K. J.; Leffel, K. L.
1985-01-01
Two finite difference methods are presented for the analysis of transient, two-dimensional responses of an electrothermal de-icer pad of an aircraft wing or blade with attached variable ice layer thickness. Both models employ a Crank-Nicolson iterative scheme, and use an enthalpy formulation to handle the phase change in the ice layer. The first technique makes use of a 'staircase' approach, fitting the irregular ice boundary with square computational cells. The second technique uses a body-fitted coordinate transform, and maps the exact shape of the irregular boundary into a rectangular body, with uniformly square computational cells. The numerical solution takes place in the transformed plane. Initial results accounting for variable ice layer thickness are presented. Details of planned de-icing tests at NASA-Lewis, which will provide empirical verification for the above two methods, are also presented.
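The enthalpy formulation mentioned above can be illustrated by the enthalpy-to-temperature map that lets a marching scheme pass through the ice/water phase change without tracking the front explicitly. The constants are rough textbook values, and the liquid branch reuses the ice heat capacity purely for brevity; this is a sketch of the idea, not the paper's solver.

```python
import numpy as np

# Rough textbook constants (illustrative only).
c = 2100.0      # specific heat of ice, J/(kg K)
L = 334000.0    # latent heat of fusion, J/kg
Tm = 273.15     # melting point, K

def temperature(H):
    # Enthalpy datum: H = 0 for ice at the melting point. While
    # 0 <= H <= L the node is mid-phase-change and stays at Tm, so a
    # marching scheme passes through the front without locating it.
    return np.where(H < 0, Tm + H / c,           # solid
           np.where(H > L, Tm + (H - L) / c,     # liquid (ice c reused)
                    Tm))                         # mushy zone: isothermal
```

A time step then updates H from the heat fluxes and recovers T through this map, which is what makes the scheme robust to an irregular, moving ice boundary.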
Extraction of Polarization Parameters in the p̄p → Ω̄Ω Reaction
NASA Astrophysics Data System (ADS)
Perotti, E.
2018-05-01
A method to extract the polarization of Ω hyperons produced via the strong interaction is presented. Assuming they are spin-3/2 particles, the corresponding spin density matrix can be written in terms of seven non-zero polarization parameters, all retrievable from the angular distribution of the decay products. Moreover, by considering the full decay chain Ω → ΛK → pπK, the magnitudes of the asymmetry parameters β_Ω and γ_Ω can be obtained. This method, applied here to the specific Ω case, can be generalized to any weakly decaying hyperon and is perfectly suited for the PANDA experiment, where hyperon-antihyperon pairs will be copiously produced in proton-antiproton collisions. The aim is to take a step forward towards understanding the mechanism that governs strangeness production in these processes.
A class of reduced-order models in the theory of waves and stability.
Chapman, C J; Sorokin, S V
2016-02-01
This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.
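A toy version of the interpolatory construction described above: sample a transcendental dispersion relation on a wavenumber grid and build a polynomial that passes exactly through the grid points. Water waves stand in here for the antisymmetric elastic-layer case treated in the paper.

```python
import numpy as np

g, h = 9.81, 1.0  # gravity and layer depth (illustrative water-wave case)

def omega_exact(k):
    # Transcendental dispersion relation for water waves, standing in for
    # the antisymmetric elastic-layer relation treated in the paper.
    return np.sqrt(g * k * np.tanh(k * h))

# Grid in wavenumber: the polynomial approximation is built to pass
# exactly through these points, interpolating in (frequency, wavenumber)
# space rather than in physical space.
k_grid = np.linspace(0.5, 5.0, 6)
coeffs = np.polynomial.polynomial.polyfit(k_grid, omega_exact(k_grid), deg=5)
omega_poly = lambda k: np.polynomial.polynomial.polyval(k, coeffs)
```

Between grid points the polynomial is only approximate, but on the grid it reproduces the transcendental relation exactly, which is the defining property (ii) of the class of approximations.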
NASA Astrophysics Data System (ADS)
Pan, Zhen; Anderes, Ethan; Knox, Lloyd
2018-05-01
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant sources of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel-space, all-order likelihood analysis of the quadratically delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost as a single likelihood evaluation.
The Role Of Mergers In Galaxy Formation And Transformations
NASA Astrophysics Data System (ADS)
Conselice, Christopher J.; Mundy, Carl; Duncan, Kenneth
2017-06-01
Baryonic assembly of galaxies is one of the largest questions in extragalactic studies, relating to many other issues, including environment, feedback, star formation, gas accretion and merging. In fact, all of these processes are related and must be accounted for and understood to paint a full picture of galaxy assembly. Perhaps the most straightforward of these processes to measure are the merging and star formation histories. I will present results from a new reanalysis combining the three deepest and largest NIR surveys taken to date: UDS, Ultra-VISTA and VIDEO, as part of the REFINE project. Using consistently measured stellar masses and photometric redshifts for galaxies in these fields up to z = 3, I will show how the major and minor merger rate can be consistently measured across these fields. Our new method makes full use of the PDFs for photo-zs and stellar masses. We show that the merger fraction and rate are lower than in previous results, and discuss the implications of this for other methods of galaxy assembly and for feedback mechanisms. Invited Talk presented at the conference Galaxy Evolution Across Time, 12-16 June, Paris, France
Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li
2015-07-01
Due to the lack of enough spectral bands in multi-spectral sensors, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of the heterogeneity of pixels in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. This method first assumes the mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data based on Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectrum revealed different feature information for different surface types. To test the performance of this method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and Moderate Resolution Imaging Spectroradiometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated bands and observed bands and indicated that the simulated reflectance spectrum was reliable.
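The Look-Up Table retrieval step described above can be sketched as follows. The `toy_model` function is a deliberately simple stand-in for the SLC canopy model, and the parameters (LAI and soil ratio) and band values are illustrative, not the paper's.

```python
import numpy as np

def toy_model(lai, soil_ratio):
    # Deliberately simple stand-in for the SLC canopy model: mixes a
    # saturating vegetation spectrum with a soil spectrum in four bands.
    veg = np.array([0.03, 0.05, 0.45, 0.30]) * (1 - np.exp(-lai))
    soil = np.array([0.10, 0.15, 0.20, 0.25])
    return (1 - soil_ratio) * veg + soil_ratio * soil

# Build the Look-Up Table over a parameter grid.
lais = np.linspace(0.1, 6.0, 60)
ratios = np.linspace(0.0, 1.0, 21)
grid = [(l, r) for l in lais for r in ratios]
lut = np.array([toy_model(l, r) for l, r in grid])

# Retrieval: nearest LUT entry (least squares) to the observed band vector.
obs = toy_model(3.0, 0.4)                       # synthetic "observation"
best = np.argmin(((lut - obs) ** 2).sum(axis=1))
lai_hat, ratio_hat = grid[best]
```

Once the parameters are retrieved from the few broad bands, running the forward model at full spectral resolution yields the simulated hyperspectral reflectance.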
Band structures in coupled-cluster singles-and-doubles Green's function (GFCCSD)
NASA Astrophysics Data System (ADS)
Furukawa, Yoritaka; Kosugi, Taichi; Nishi, Hirofumi; Matsushita, Yu-ichiro
2018-05-01
We demonstrate that the coupled-cluster singles-and-doubles Green's function (GFCCSD) method is a powerful tool for obtaining electronic band structures and total energies that many theoretical techniques struggle to reproduce. We have calculated single-electron energy spectra via the GFCCSD method, for the first time, for various kinds of systems ranging from ionic to covalent and van der Waals: the one-dimensional LiH chain, one-dimensional C chain, and one-dimensional Be chain. We have found that the bandgap becomes narrower than in Hartree-Fock (HF) due to the correlation effect. We also show that the band structures obtained from the GFCCSD method successfully include both quasiparticle and satellite peaks. Besides, taking one-dimensional LiH as an example, we discuss the validity of restricting the active space to reduce the computational cost of the GFCCSD method. We show that results calculated without bands that do not contribute to the chemical bonds are in good agreement with full-band calculations. With the GFCCSD method, we can calculate the total energies and spectral functions for periodic systems in an explicitly correlated manner.
Li, Wei; Wang, Jun; Yan, Zheng-Yu
2015-10-10
A novel, simple, fast and efficient supercritical fluid chromatography (SFC) method was developed and compared with an RPLC method for the separation and determination of impurities in rifampicin. The separation was performed using a packed diol column and a mobile phase B (modifier) consisting of methanol with 0.1% ammonium formate (w/v) and 2% water (v/v). Overall satisfactory resolutions and peak shapes for rifampicin quinone (RQ), rifampicin (RF), rifamycin SV (RSV), rifampicin N-oxide (RNO) and 3-formylrifamycin SV (3-FR) were obtained by optimization of the chromatography system. With gradient elution of the mobile phase, all of the impurities and the active ingredient were separated within 4 min. Taking full advantage of the features of SFC (such as particular selectivity, a non-sloping baseline in gradient elution, and freedom from injection solvent effects), the method was successfully used for the determination of impurities in rifampicin, with more impurity peaks detected, better resolution achieved and much less analysis time needed compared with conventional reversed-phase liquid chromatography (RPLC) methods.
A new diagnostic method of bolt loosening detection for thermal protection systems
NASA Astrophysics Data System (ADS)
Xie, Weihua; Meng, Songhe; Han, Jiecai; Du, Shanyi; Zhang, Boming; Yu, Dong
2009-07-01
Research and development efforts are underway to provide structural health monitoring systems to ensure the integrity of thermal protection systems (TPS). An improved analytical method is proposed in this paper to assess the fastener integrity of a bolted structure. A new unsymmetrical washer was designed and fabricated, taking full advantage of piezoelectric ceramics (PZT) to serve as both actuators and sensors, and using energy as the only extracted feature to identify abnormality. This diagnostic method is not restricted by the materials of the bracket, panel and base structure of the TPS whose condition is under inspection. A series of experiments on a metallic honeycomb sandwich panel were completed to demonstrate the capability of detecting bolt loosening on the TPS structure. Studies showed that this method can be used not only to rapidly identify the location of loosened bolts, but also to estimate the torque level of loosened bolts. Since energy is the only extracted feature used to detect bolt loosening, the diagnostic process becomes simple and swift without sacrificing the accuracy of the results.
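The single energy feature used above can be sketched simply: compare the signal energy received across the joint with a tight-bolt baseline, and read the relative drop as a damage index. The waveforms and attenuation factor below are synthetic illustrations, not measured PZT data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)

# Synthetic PZT responses: a loosened bolt transmits less energy across
# the joint, so the received waveform is attenuated (factor is made up).
baseline = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t)   # tight bolt
loosened = 0.6 * baseline + 0.01 * rng.standard_normal(t.size)

energy = lambda sig: float(np.sum(sig ** 2))

# Energy is the single extracted feature: the relative drop from the
# tight-bolt baseline flags loosening, and its size tracks torque loss.
damage_index = 1.0 - energy(loosened) / energy(baseline)
```

Comparing one washer's index against its neighbors localizes the loosened bolt; the size of the drop gives a rough estimate of the torque level.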
50 CFR 216.104 - Submission of requests.
Code of Federal Regulations, 2010 CFR
2010-10-01
... requested (i.e., takes by harassment only; takes by harassment, injury and/or death) and the method of... technological) of equipment, methods, and manner of conducting such activity or other means of effecting the... similar significance; (12) Where the proposed activity would take place in or near a traditional Arctic...
Big Data and High-Performance Computing in Global Seismology
NASA Astrophysics Data System (ADS)
Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen
2014-05-01
Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer-duration (~180 m) simulations to take advantage of high-frequency body waves and major-arc surface waves, thereby improving the imbalanced ray coverage that results from the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.).
We address the bottlenecks in our global seismic workflow, which stem mainly from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data formats for seismograms and for the outputs of our 3D solvers (i.e., meshes, kernels, seismic models, etc.) based on ORNL's ADIOS libraries. We will discuss our global adjoint tomography workflow on HPC systems as well as the current status of our global inversions.
Chee, H; Rampal, K
2003-01-01
Aims: To determine the relation between sick leave and selected exposure variables among women semiconductor workers. Methods: This was a cross sectional survey of production workers from 18 semiconductor factories. Those selected had to be women, direct production operators up to the level of line leader, and Malaysian citizens. Sick leave and exposure to physical and chemical hazards were determined by self reporting. Three sick leave variables were used; number of sick leave days taken in the past year was the variable of interest in logistic regression models where the effects of age, marital status, work task, work schedule, work section, and duration of work in factory and work section were also explored. Results: Marital status was strongly linked to the taking of sick leave. Age, work schedule, and duration of work in the factory were significant confounders only in certain cases. After adjusting for these confounders, chemical and physical exposures, with the exception of poor ventilation and smelling chemicals, showed no significant relation to the taking of sick leave within the past year. Work section was a good predictor for taking sick leave, as wafer polishing workers faced higher odds of taking sick leave for each of the three cut off points of seven days, three days, and not at all, while parts assembly workers also faced significantly higher odds of taking sick leave. Conclusion: In Malaysia, the wafer fabrication factories only carry out a limited portion of the work processes, in particular, wafer polishing and the processes immediately prior to and following it. This study, in showing higher illness rates for workers in wafer polishing compared to semiconductor assembly, has implications for the governmental policy of encouraging the setting up of wafer fabrication plants with the full range of work processes. PMID:12660374
NASA Astrophysics Data System (ADS)
Pernía Leal, M.; Assali, M.; Cid, J. J.; Valdivia, V.; Franco, J. M.; Fernández, I.; Pozo, D.; Khiar, N.
2015-11-01
To take full advantage of the remarkable applications of carbon nanotubes in different fields, there is a need to develop effective methods to improve their water dispersion and biocompatibility while maintaining their physical properties. In this sense, current approaches suffer from serious drawbacks such as loss of electronic structure together with low surface coverage in the case of covalent functionalizations, or instability of the dynamic hybrids obtained by non-covalent functionalizations. In the present work, we examined the molecular basis of an original strategy that combines the advantages of both functionalizations without their main drawbacks. The hierarchical self-assembly of diacetylenic-based neoglycolipids into highly organized and compacted rings around the nanotubes, followed by photopolymerization leads to the formation of nanotubes covered with glyconanorings with a shish kebab-type topology exposing the carbohydrate ligands to the water phase in a multivalent fashion. The glyconanotubes obtained are fully functional, and able to establish specific interactions with their cognate receptors. In fact, by taking advantage of this selective binding, an easy method to sense lectins as a working model of toxin detection was developed based on a simple analysis of TEM images. Remarkably, different experimental settings to assess cell membrane integrity, cell growth kinetics and cell cycle demonstrated the cellular biocompatibility of the sugar-coated carbon nanotubes compared to pristine single-walled carbon nanotubes. Electronic supplementary information (ESI) available: Experimental procedures for the synthesis of compounds 12-10, 12-15, 17-20, 22-25, 27-30, NMR spectra, and additional TEM images. See DOI: 10.1039/c5nr05956a
NASA Technical Reports Server (NTRS)
Schade, Robert O.; Smith, Charles C., Jr.; Lovell, P. M., Jr.
1954-01-01
An experimental investigation has been conducted to determine the stability and control characteristics of a 0.13-scale free-flight model of the Convair XFY-1 airplane during take-offs and landings in steady winds. The tests indicated that take-offs in headwinds up to at least 20 knots (full scale) will be fairly easy to perform although the airplane may be blown downstream as much as 3 spans before a trim condition can be established. The distance that the airplane will be blown downstream can be reduced by restraining the upwind landing gear until the instant of take-off. The tests also indicated that spot landings in headwinds up to at least 30 knots (full scale) and in crosswinds up to at least 20 knots (full scale) can be accomplished with reasonable accuracy although, during the landing approach, there will probably be an undesirable nosing-up tendency caused by ground effect and by the change in angle of attack resulting from vertical descent. Some form of arresting gear will probably be required to prevent the airplane from rolling downwind or tipping over after contact. This rolling and tipping can be prevented by a snubbing line attached to the tip of the upwind wing or tail or by an arresting gear consisting of a wire mesh on the ground and hooks on the landing gear to engage the mesh.
NASA Technical Reports Server (NTRS)
Dittmar, James H.
1989-01-01
The noise of advanced high speed propeller models measured in the NASA 8- by 6-foot wind tunnel has been compared with model propeller noise measured in another tunnel and with full-scale propeller noise measured in flight. Good agreement was obtained for the noise of a model counterrotation propeller tested in the 8- by 6-foot wind tunnel and in the acoustically treated test section of the Boeing Transonic Wind Tunnel. This good agreement indicates the relative validity of taking cruise noise data on a plate in the 8- by 6-foot wind tunnel compared with the free-field method in the Boeing tunnel. Good agreement was also obtained for both single rotation and counter-rotation model noise comparisons with full-scale propeller noise in flight. The good scale model to full-scale comparisons indicate both the validity of the 8- by 6-foot wind tunnel data and the ability to scale to full size. Boundary layer refraction on the plate provides a limitation to the measurement of forward arc noise in the 8- by 6-foot wind tunnel at the higher harmonics of the blade passing tone. The use of a validated boundary layer refraction model to adjust the data could remove this limitation.
Take a Hike Event Shows Employees the Benefits of Walking | Poster
Occupational Health Services (OHS) got people excited to walk at the summer Take a Hike event with fun prizes, free food, and a chance to win the grand prize—a Cigna-branded duffel bag full of items such as an exercise band, sport bottle, body mass index (BMI) pedometer, lip balm, and sunscreen.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-08
... phocoena). KABATA has not requested authorization for incidental take by injury (Level A harassment), serious injury or mortality. Specified Activities KABATA is proposing to construct a new bridge spanning... sides of the Arm that will run adjacent to the water's edge to varying degrees. A full description of...
Optimal Consumption When Consumption Takes Time
ERIC Educational Resources Information Center
Miller, Norman C.
2009-01-01
A classic article by Gary Becker (1965) showed that when it takes time to consume, the first order conditions for optimal consumption require the marginal rate of substitution between any two goods to equal their relative full costs. These include the direct money price and the money value of the time needed to consume each good. This important…
ERIC Educational Resources Information Center
Brown, Marshall A.
2013-01-01
Today's work world is full of uncertainty. Every day, people hear about another organization going out of business, downsizing, or rightsizing. To prepare for these uncertain times, one must take charge of their own career. This article presents some tips for surviving in today's world of work: (1) Be self-managing; (2) Know what you…
Kaess, Michael; Fischer-Waldschmidt, Gloria; Resch, Franz; Koenig, Julian
2017-01-01
Diagnostic standards do not acknowledge developmental specifics and differences in the clinical presentation of adolescents with borderline personality disorder (BPD). BPD is associated with severe impairments in health related quality of life (HRQoL) and increased psychopathological distress. Previously no study addressed differences in HRQoL and psychopathology in adolescents with subthreshold and full-syndrome BPD as well as adolescents at risk for the development of, but without current, BPD. Drawing on data from a consecutive sample of N = 264 adolescents (12-17 years) presenting with risk-taking and self-harming behavior at a specialized outpatient clinic, we investigated differences in HRQoL (KIDSCREEN-52) and psychopathological distress (SCL-90-R), comparing adolescents with no BPD (fewer than 3 criteria fulfilled) to those with subthreshold (3-4 BPD criteria) and full-syndrome BPD (5 or more BPD criteria). Group differences were analyzed using one-way analysis of variance with Sidak-corrected contrasts, or the Chi-Square test for categorical variables. Adolescents with subthreshold and full-syndrome BPD presented at our clinic one year later and were more likely to be female. Adolescents with subthreshold and full-syndrome BPD showed greater Axis-I and Axis-II comorbidity compared to adolescents with no BPD, and reported greater risk-taking behaviour, self-injury and suicidality. Compared to those without BPD, adolescents with subthreshold and full-syndrome BPD reported significantly reduced HRQoL. Adolescents with subthreshold BPD and those with full-syndrome BPD did not differ on any HRQoL dimension, with the exception of Self-Perception. Similarly, groups with subthreshold and full-syndrome BPD showed no significant differences on any dimension of self-reported psychopathological distress, with the exception of Hostility.
Findings highlight that subthreshold BPD in adolescents is associated with impairments in HRQoL and psychopathological distress comparable to full-syndrome BPD. Findings raise awareness on the importance of early detection and question the diagnostic validity and clinical utility of existing cut-offs. Findings support a lower diagnostic cut-off for adolescent BPD, to identify those at-risk at an early stage.
Workplace mavericks: how personality and risk-taking propensity predicts maverickism.
Gardiner, Elliroma; Jackson, Chris J
2012-11-01
We examine the relationship between lateral preference, the Five-Factor Model of personality, risk-taking propensity, and maverickism. We take an original approach by narrowing our research focus to only functional aspects of maverickism. Results with 458 full-time workers identify lateral preference as a moderator of the neuroticism-maverickism relationship. Extraversion, openness to experience, and low agreeableness were also each found to predict maverickism. The propensity of individuals high in maverickism to take risks was also found to be unaffected by task feedback. Our results highlight the multifaceted nature of maverickism, identifying both personality and task conditions as determinants of this construct. ©2011 The British Psychological Society.
Using Friends as Sensors to Detect Global-Scale Contagious Outbreaks
Garcia-Herranz, Manuel; Moro, Esteban; Cebrian, Manuel; Christakis, Nicholas A.; Fowler, James H.
2014-01-01
Recent research has focused on the monitoring of global-scale online data for improved detection of epidemics, mood patterns, movements in the stock market, political revolutions, box-office revenues, consumer behaviour and many other important phenomena. However, privacy considerations and the sheer scale of data available online are quickly making global monitoring infeasible, and existing methods do not take full advantage of local network structure to identify key nodes for monitoring. Here, we develop a model of the contagious spread of information in a global-scale, publicly-articulated social network and show that a simple method can yield not just early detection, but advance warning of contagious outbreaks. In this method, we randomly choose a small fraction of nodes in the network and then randomly choose a friend of each node to include in a group for local monitoring. Using six months of data from most of the full Twittersphere, we show that this friend group is more central in the network and helps us to detect viral outbreaks of the use of novel hashtags about 7 days earlier than an equal-sized randomly chosen group. Moreover, the method works better than expected from network structure alone, because highly central actors are both more active and exhibit increased diversity in the information they transmit to others. These results suggest that local monitoring is not just more efficient, but also more effective, and it may be applied to monitor contagious processes in global-scale networks. PMID:24718030
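The sampling step described above (pick a random fraction of nodes, then monitor one random friend of each) is easy to sketch. The following toy implementation is illustrative only: the star graph, the sampled fraction and the seed are invented here to show why nominated friends skew toward high-degree hubs (the "friendship paradox" the method exploits).

```python
import random

def friend_sensor_group(adjacency, fraction, seed=0):
    """Pick a random node sample, then nominate one random friend of each.

    The nominated friends tend to be more central than the random sample
    itself, which is the basis of the monitoring method described above.
    """
    rng = random.Random(seed)
    nodes = list(adjacency)
    k = max(1, int(fraction * len(nodes)))
    sample = rng.sample(nodes, k)
    # Each sampled node with at least one friend nominates one friend.
    return [rng.choice(adjacency[n]) for n in sample if adjacency[n]]

# Toy star graph: node 0 is a hub, so it is over-represented among
# nominated friends no matter which leaves end up in the sample.
graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
friends = friend_sensor_group(graph, fraction=0.6)
```

Every leaf nominates the hub, so in this graph the friend group is dominated by node 0 whenever any leaf is sampled.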
Metrology for Industry for use in the Manufacture of Grazing Incidence Beam Line Mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metz, James P.; Parks, Robert E.
2014-12-01
The goal of this SBIR was to determine the slope sensitivity of Specular Reflection Deflectometry (SRD) and whether shearing methods had the sensitivity to separate errors in the test equipment from slope error in the unit under test (UUT), or mirror. After many variations of the test parameters, it does not appear that SRD yields results much better than 1 μradian RMS, independent of how much averaging is done. Of course, a single-number slope sensitivity over the full range of spatial scales is not a very insightful figure, in the same sense that a single-number phase or height RMS value in interferometry does not tell the full story. However, the 1 μradian RMS number is meaningful when contrasted with a sensitivity goal of better than 0.1 μradian RMS. Shearing is a time-proven method of separating the errors in a measurement from the actual shape of a UUT. It is accomplished by taking multiple measurements while moving the UUT relative to the test instrument. This process makes it possible to separate the two error sources, but only to a sensitivity of about 1 μradian RMS. Another aspect of our conclusions is that this limit probably holds largely independent of the spatial scale of the test equipment. In the proposal for this work it was suggested that a test screen the full size of the UUT could be used to determine slopes on scales from roughly 0.01 of the UUT's full scale up to its full scale, while smaller screens and shorter focal length lenses could be used to measure shorter, or smaller, patches of slope. What we failed to take into consideration was that as the scale of the test equipment got smaller, so too did the optical lever arm on which the slope was calculated. Although we did not do a test with a shorter focal length lens over a smaller sample area, it is hard to argue with the logic that the slope sensitivity will be about the same independent of the spatial scale of the measurement, assuming the test equipment is similarly scaled.
On a more positive note, SRD does appear to be a highly flexible, easy to implement, rather inexpensive test for free-form optics that require a dynamic range exceeding that of interferometry. These optics are quite often specified to have more relaxed slope errors, on the order of 1 μradian RMS or greater. It would be shortsighted not to recognize the value of this test method in the bigger picture.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-30
... subsistence uses (where relevant), and if the permissible methods of taking and requirements pertaining to the... to do so. All concrete piles would be removed via pneumatic chipping or similar method. All steel... strategic deterrence mission, the Navy Strategic Systems Programs directs research, development...
Jackowski, Konrad; Krawczyk, Bartosz; Woźniak, Michał
2014-05-01
Currently, methods of combined classification are the focus of intense research. A properly designed group of combined classifiers exploiting knowledge gathered in a pool of elementary classifiers can successfully outperform a single classifier. There are two essential issues to consider when creating combined classifiers: how to establish the most comprehensive pool, and how to design a fusion model that takes full advantage of the collected knowledge. In this work, we address these issues and propose AdaSS+, a training algorithm dedicated to compound classifier systems that effectively exploits the local specialization of the elementary classifiers. The training procedure consists of two phases. The first phase detects the classifier competencies and adjusts the respective fusion parameters. The second phase boosts classification accuracy by elevating the degree of local specialization. The quality of the proposed algorithm is evaluated on the basis of a wide range of computer experiments, which show that AdaSS+ can outperform the original method and several reference classifiers.
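The local-specialization idea behind such fusion models can be illustrated with a small sketch. This is not the AdaSS+ algorithm itself: the member classifiers, the regions, and the toy data below are invented for illustration of competence-weighted fusion only.

```python
import numpy as np

def regional_competence(members, X, y, regions):
    """Estimate each member's accuracy separately in every region of
    feature space (the 'competence detection' phase of the idea above)."""
    n_regions = regions.max() + 1
    w = np.zeros((n_regions, len(members)))
    for r in range(n_regions):
        mask = regions == r
        for m, predict in enumerate(members):
            w[r, m] = np.mean(predict(X[mask]) == y[mask])
    return w

def fuse(members, w, x, region):
    """Competence-weighted vote for one sample."""
    votes = {}
    for m, predict in enumerate(members):
        label = int(predict(np.array([x]))[0])
        votes[label] = votes.get(label, 0.0) + w[region, m]
    return max(votes, key=votes.get)

# Two weak classifiers, each reliable in a different half of the line.
def member_a(X): return (X[:, 0] > -0.5).astype(int)   # errs on [-0.5, 0)
def member_b(X): return (X[:, 0] > 0.5).astype(int)    # errs on (0, 0.5]
members = [member_a, member_b]

X = np.array([[-1.0], [-0.6], [-0.3], [0.3], [0.6], [1.0]])
y = np.array([0, 0, 0, 1, 1, 1])          # true rule: x > 0
regions = (X[:, 0] >= 0).astype(int)      # region 0: left, 1: right

w = regional_competence(members, X, y, regions)
left = fuse(members, w, [-0.3], region=0)   # member_b dominates here
right = fuse(members, w, [0.3], region=1)   # member_a dominates here
```

In each region the fused vote follows the locally competent member, correcting the other member's mistakes.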
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) with a two-level architecture. These properties enable MDGS to generate solutions identical to those of common Poisson solvers and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability and low memory consumption, enabling real-time image processing across a wide range of applications.
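The sparse system in question is the discrete Poisson equation. A minimal CPU sketch using plain Jacobi iteration (not the paper's GPU matrix-decomposition solver) shows the kind of system being solved; the grid size, boundary values, and iteration count are illustrative.

```python
import numpy as np

def jacobi_poisson(rhs, boundary, iters=2000):
    """Solve the discrete Poisson equation lap(u) = rhs on a grid with
    fixed (Dirichlet) boundary values by Jacobi iteration, assuming unit
    grid spacing. Each sweep replaces every interior value with the
    average of its four neighbours minus the source term."""
    u = boundary.copy().astype(float)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] -
                                rhs[1:-1, 1:-1])
    return u

# Laplace problem (rhs = 0): the interior relaxes toward the boundary,
# with one "hot" edge held at 1 and the rest at 0.
b = np.zeros((16, 16))
b[0, :] = 1.0
u = jacobi_poisson(np.zeros_like(b), b)
```

Gradient-domain editing solves the same kind of system, with the guidance-field divergence as `rhs` and the untouched image pixels as the boundary; Jacobi is also the textbook starting point for GPU parallelization, since every interior update is independent within a sweep.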
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1983-01-01
Laser systems deployed in satellite tracking were upgraded to accuracy levels where biases from systematic unmodelled effects constitute the basic factor that prohibits extraction of the full amount of information contained in the observations. Taking into consideration that the quality of the instrument advances at a faster pace compared to the understanding and modeling of the physical processes involved, one can foresee that in the near future when all lasers are replaced with third generation ones the limiting factor for the estimated accuracies will be the aforementioned biases. Therefore, for the reduction of the observations, methods should be deployed in such a way that the effect of the biases will be kept well below the noise level. Such a method was proposed and studied. This method consists of using the observed part of the satellite pass and converting the laser ranges into range differences in hopes that they will be less affected by biases in the orbital models, the reference system, and the observations themselves.
Quadtree of TIN: a new algorithm of dynamic LOD
NASA Astrophysics Data System (ADS)
Zhang, Junfeng; Fei, Lifan; Chen, Zhen
2009-10-01
Currently, real-time visualization of large-scale digital elevation models mainly employs either regular GRID structures based on quadtrees or triangle-simplification methods based on the irregular triangulated network (TIN). Compared with GRID, TIN is a more refined means of representing the terrain surface in the computer. However, the data structure of the TIN model is complex, and it is difficult to realize view-dependent level-of-detail (LOD) representation quickly. GRID is a simple way to realize terrain LOD, but it produces a larger triangle count. A new algorithm, which takes full advantage of the merits of both methods, is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and it preserves detail according to the viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic, visual multi-resolution performance for large-scale terrain in real time.
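The quadtree side of such an algorithm can be sketched independently of the TIN details. The following toy point quadtree is illustrative only: the capacity and sample points are invented, and the view-dependent error test that would drive LOD refinement is omitted; only the spatial subdivision over irregular points is shown.

```python
class Quadtree:
    """Tiny point quadtree: a cell splits into four children once it
    holds more than `capacity` (distinct) points. A view-dependent LOD
    pass would descend deeper only where the screen-space error demands
    it; here we show just the subdivision."""
    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size, self.capacity = x, y, size, capacity
        self.points, self.children = [], None

    def insert(self, px, py):
        if self.children is not None:
            self._child_for(px, py).insert(px, py)
            return
        self.points.append((px, py))
        if len(self.points) > self.capacity:
            self._split()

    def _split(self):
        h = self.size / 2
        self.children = [Quadtree(self.x + dx * h, self.y + dy * h, h,
                                  self.capacity)
                         for dy in (0, 1) for dx in (0, 1)]
        pts, self.points = self.points, []
        for px, py in pts:
            self._child_for(px, py).insert(px, py)

    def _child_for(self, px, py):
        col = int(px >= self.x + self.size / 2)
        row = int(py >= self.y + self.size / 2)
        return self.children[2 * row + col]

# Irregular sample points in the unit square; the clustered lower-left
# corner forces deeper subdivision there, the sparse corner stays coarse.
tree = Quadtree(0.0, 0.0, 1.0, capacity=2)
for p in [(0.1, 0.1), (0.2, 0.1), (0.15, 0.2), (0.9, 0.9)]:
    tree.insert(*p)
```

The resulting hierarchy is deeper exactly where the sampling is dense, which is the property the LOD scheme exploits.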
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis
2016-11-01
Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. In contrast to current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. The theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. A full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method
Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei
2016-01-01
With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will gradually be replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with MSs. This paper proposes a hybrid key management scheme based on Polynomial Pool-based and Basic Random key pre-distribution (PPBR) for use in WSNs with MSs. The scheme takes full advantage of these two kinds of methods to increase the difficulty of cracking the key system. The storage effectiveness and the network resilience can be significantly enhanced as well. A tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulations clearly show that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness compared to other widely used schemes. PMID:27070624
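Polynomial pool-based pre-distribution builds on symmetric bivariate polynomials: node u stores the univariate share f(u, y), and any pair of nodes (u, v) can independently compute the common key f(u, v) = f(v, u). A single-polynomial toy sketch follows; the modulus, coefficients, and node ids are illustrative, and a real pool uses many polynomials of much higher degree.

```python
# Toy symmetric bivariate polynomial over a prime field:
# f(x, y) = sum_{i,j} A[i][j] x^i y^j with A[i][j] == A[j][i],
# so f(u, v) == f(v, u) and two nodes derive the same pairwise key.
P = 7919  # small prime modulus, illustrative only

A = [[5, 11, 2],
     [11, 3, 8],
     [2, 8, 13]]  # symmetric coefficient matrix (the shared secret)

def share(node_id):
    """Coefficients of the univariate polynomial f(node_id, y) that
    would be preloaded onto the node before deployment."""
    return [sum(A[i][j] * pow(node_id, i, P) for i in range(3)) % P
            for j in range(3)]

def pairwise_key(my_share, other_id):
    """Evaluate the stored share at the other node's id."""
    return sum(c * pow(other_id, j, P) for j, c in enumerate(my_share)) % P

k_ab = pairwise_key(share(42), 99)   # node 42 keying with node 99
k_ba = pairwise_key(share(99), 42)   # node 99 keying with node 42
```

Both nodes arrive at the same key without any exchange of secret material, which is the property the hybrid scheme combines with random pre-distribution.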
Community Schools: It Takes a Village
ERIC Educational Resources Information Center
Garrett, Kristi
2012-01-01
Lately educators are hearing more about full-service community schools, which pair schools with other community resources in pursuit of the long-term goal of improving academic performance. (These full-service schools are differentiated from the community day schools that serve expelled students.) The focus on academics is what makes today's…
NASA Astrophysics Data System (ADS)
Zhou, Shuai; Huang, Danian
2015-11-01
We have developed a new method for the interpretation of gravity tensor data based on the generalized Tilt-depth method. Cooper (2011, 2012) extended the magnetic Tilt-depth method to gravity data. We take the gradient-ratio method of Cooper (2011, 2012) and modify it so that the source type does not need to be specified a priori. We develop the new method by generalizing the Tilt-depth method for depth estimation for different types of source bodies. The new technique uses only the three vertical tensor components of the full gravity tensor data, observed or calculated on planes at different heights, to estimate the depth of the buried bodies without a priori specification of their structural index. For severely noise-corrupted data, our method utilizes data upward-continued to different heights, which can effectively reduce the influence of noise. Theoretical simulations of gravity source models with and without noise illustrate the ability of the method to provide source depth information. Additionally, the simulations demonstrate that the new method is simple, computationally fast and accurate. Finally, we apply the method to the gravity data acquired over the Humble Salt Dome in the USA as an example. The results show a good correspondence to previous drilling and seismic interpretation results.
Full Gradient Solution to Adaptive Hybrid Control
NASA Technical Reports Server (NTRS)
Bean, Jacob; Schiller, Noah H.; Fuller, Chris
2017-01-01
This paper focuses on the adaptation mechanisms in adaptive hybrid controllers. Most adaptive hybrid controllers update two filters individually according to the filtered reference least mean squares (FxLMS) algorithm. Because this algorithm was derived for feedforward control, it does not take into account the presence of a feedback loop in the gradient calculation. This paper provides a derivation of the proper weight vector gradient for hybrid (or feedback) controllers that takes into account the presence of feedback. In this formulation, a single weight vector is updated rather than two individually. An internal model structure is assumed for the feedback part of the controller. The full gradient is equivalent to that used in the standard FxLMS algorithm with the addition of a recursive term that is a function of the modeling error. Some simulations are provided to highlight the advantages of using the full gradient in the weight vector update rather than the approximation.
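The baseline update being corrected here is the (filtered-reference) LMS step w ← w − μ e x′, where x′ is the reference filtered through the secondary-path model. A minimal sketch of that baseline follows, run as a toy system-identification loop with an identity secondary path so that x′ = x; the recursive full-gradient term described in the paper is deliberately omitted, and the plant and parameters are invented for illustration.

```python
import numpy as np

def fxlms_step(w, x_filtered, e, mu):
    """One filtered-reference LMS update: w <- w - mu * e * x'.

    x_filtered holds the recent reference samples after passing through
    the secondary-path model; the paper's full gradient augments this
    same update with a recursive term driven by the modeling error.
    """
    return w - mu * e * x_filtered

# Toy run: adapt a 4-tap filter toward an unknown FIR plant with white
# noise input and an identity secondary path (so x' == x).
rng = np.random.default_rng(1)
plant = np.array([0.5, -0.3, 0.2, 0.1])
w = np.zeros(4)
buf = np.zeros(4)                      # most-recent-first reference buffer
for _ in range(5000):
    buf = np.roll(buf, 1)
    buf[0] = rng.standard_normal()
    e = w @ buf - plant @ buf          # error between model and plant output
    w = fxlms_step(w, buf, e, mu=0.01)
```

With a noiseless desired signal the weights converge to the plant taps; with an actual feedback loop in place, this plain update is exactly the approximation whose missing recursive term the full-gradient derivation restores.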
Science and Technology Review December 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radousky, H B
2006-10-30
This month's issue has the following articles: (1) Livermore's Biosecurity Research Directly Benefits Public Health--Commentary by Raymond J. Juzaitis; (2) Diagnosing Flu Fast--Livermore's FluIDx device can diagnose flu and four other respiratory viruses in just two hours; (3) An Action Plan to Reopen a Contaminated Airport--New planning tools and faster sample analysis methods will hasten restoration of a major airport to full use following a bioterrorist attack; (4) Early Detection of Bone Disease--A Livermore technique detects small changes in skeletal calcium balance that may signal bone disease; and (5) Taking a Gander with Gamma Rays--Gamma rays may be the next source for looking deep inside the atom.
Communication enabled – fast acting imbalance reserve (CE-FAIR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilches-Bernal, Felipe; Concepcion, Ricky; Neely, Jason C.
This letter presents a new frequency control strategy that takes advantage of communications and fast-responding resources such as PV generation, energy storage, wind generation, and demand response, termed collectively converter interfaced generators (CIGs). The proposed approach uses active monitoring of power imbalances to rapidly redispatch CIGs. This approach differs from previously proposed frequency control schemes in that it employs feed-forward control based on a measured power imbalance rather than relying on a frequency measurement. Time-domain simulations of the full Western Electricity Coordinating Council (WECC) system are conducted to demonstrate the effectiveness of the proposed method, showing improved performance.
Semistrict higher gauge theory
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Sämann, Christian; Wolf, Martin
2015-04-01
We develop semistrict higher gauge theory from first principles. In particular, we describe the differential Deligne cohomology underlying semistrict principal 2-bundles with connective structures. Principal 2-bundles are obtained in terms of weak 2-functors from the Čech groupoid to weak Lie 2-groups. As is demonstrated, some of these Lie 2-groups can be differentiated to semistrict Lie 2-algebras by a method due to Ševera. We further derive the full description of connective structures on semistrict principal 2-bundles including the non-linear gauge transformations. As an application, we use a twistor construction to derive superconformal constraint equations in six dimensions for a non-Abelian tensor multiplet taking values in a semistrict Lie 2-algebra.
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
NASA Astrophysics Data System (ADS)
He, L.-C.; Diao, L.-J.; Sun, B.-H.; Zhu, L.-H.; Zhao, J.-W.; Wang, M.; Wang, K.
2018-02-01
A Monte Carlo method based on the GEANT4 toolkit has been developed to correct the full-energy peak (FEP) efficiencies of a high-purity germanium (HPGe) detector equipped with a low-background shielding system, and it has been evaluated numerically using summing peaks. It is found that the FEP efficiencies for 60Co, 133Ba and 152Eu can be improved by up to 18% by taking the calculated true summing coincidence factors (TSCFs) into account. Counts of the summing coincidence γ peaks in the spectrum of 152Eu can be well reproduced using the corrected efficiency curve to within an accuracy of 3%.
Bugris, Valéria; Haspel, Henrik; Kukovecz, Ákos; Kónya, Zoltán; Sipiczki, Mónika; Sipos, Pál; Pálinkó, István
2013-10-29
Heat-treated CaFe-layered double hydroxide samples were equilibrated under conditions of various relative humidities (11%, 43% and 75%). Measurements by FT-IR and dielectric relaxation spectroscopies revealed that partial to full reconstruction of the layered structure took place. Water types taking part in the reconstruction process were identified via dielectric relaxation measurements either at 298 K or on the flash-cooled (to 155 K) samples. The dynamics of water molecules at the various positions was also studied by this method, allowing the flash-cooled samples to warm up to 298 K.
High temperature materials characterization
NASA Technical Reports Server (NTRS)
Workman, Gary L.
1990-01-01
A lab facility for measuring elastic moduli up to 1700 C was constructed and delivered. It was shown that the ultrasonic method can be used to determine elastic constants of materials from room temperature to their melting points. Efficiently coupling high-frequency acoustic energy into the specimen, however, remains a difficult task; even now, new coupling materials and higher-power ultrasonic pulsers are being suggested. The work only scratched the surface in showing the full capabilities of either technique used, especially since there is such a large learning curve in developing proper methodologies for taking measurements in the high-temperature region. The laser acoustic system does not seem to have sufficient precision at this time to replace the normal buffer-rod methodology.
Ab initio Eliashberg Theory: Making Genuine Predictions of Superconducting Features
NASA Astrophysics Data System (ADS)
Sanna, Antonio; Flores-Livas, José A.; Davydov, Arkadiy; Profeta, Gianni; Dewhurst, Kay; Sharma, Sangeeta; Gross, E. K. U.
2018-04-01
We present an application of the Eliashberg theory of superconductivity to a set of novel superconducting systems with a wide range of structural and chemical properties. The set includes three intercalated group-IV honeycomb layered structures, SH3 at 200 GPa (the superconductor with the highest measured critical temperature), the similar system SeH3 at 150 GPa, and a lithium-doped monolayer of black phosphorus. The theoretical approach we adopt is a recently developed, fully ab initio Eliashberg approach that takes the Coulomb interaction into account in a fully energy-resolved fashion, avoiding any free parameters such as μ*. This method provides reasonable estimates of superconducting properties, including TC and the excitation spectra of superconductors.
Fitting of the Thomson scattering density and temperature profiles on the COMPASS tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stefanikova, E.; Division of Fusion Plasma Physics, KTH Royal Institute of Technology, SE-10691 Stockholm; Peterka, M.
2016-11-15
A new technique for fitting the full radial profiles of electron density and temperature obtained by the Thomson scattering diagnostic in H-mode discharges on the COMPASS tokamak is described. The technique combines the conventionally used modified hyperbolic tangent function for fitting the edge transport barrier (pedestal) with a modified Gaussian function for fitting the core plasma. The small number of parameters of this combined function, together with their straightforward interpretability and controllability, provides a robust method for obtaining physically reasonable profile fits. Deconvolution with the diagnostic instrument function is applied to the profile fit, taking into account the dependence on the actual magnetic configuration.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-21
... availability of the species or stock(s) for subsistence uses (where relevant). Further, the permissible methods... (ITA) under section 101(a)(5)(D) of the MMPA, we must set forth the permissible methods of taking... basis of predicted distances to relevant thresholds in post-processing of observational and acoustic...
NASA Astrophysics Data System (ADS)
Juno, J.; Hakim, A.; TenBarge, J.; Dorland, W.
2015-12-01
We present for the first time results for the turbulence dissipation challenge, with specific focus on the linear wave portion of the challenge, using a variety of continuum kinetic models: hybrid Vlasov-Maxwell, gyrokinetic, and full Vlasov-Maxwell. As one of the stated goals of the wave problem is to identify how well various models capture linear physics, we compare our results to linear Vlasov and gyrokinetic theory. Preliminary gyrokinetic results match linear theory extremely well due to the geometry of the problem, which eliminates the dominant nonlinearity. With the non-reduced models, we explore how the subdominant nonlinearities manifest and affect the evolution of the turbulence and the energy budget. We also take advantage of employing continuum methods to study the dynamics of the distribution function, with particular emphasis on the full Vlasov results, where a basic collision operator has been implemented. As the community prepares for the next stage of the turbulence dissipation challenge, where we hope to run large 3D simulations to inform the next generation of observational missions such as THOR (Turbulence Heating ObserveR), we argue for the consideration of hybrid Vlasov and full Vlasov as candidate models for these critical simulations. With the use of modern numerical algorithms, we demonstrate the competitiveness of our code with traditional particle-in-cell algorithms, with a clear plan for continued improvements and optimizations to further strengthen the code's viability as an option for the next stage of the challenge.
Refraction of dispersive shock waves
NASA Astrophysics Data System (ADS)
El, G. A.; Khodorovskii, V. V.; Leszczyszyn, A. M.
2012-09-01
We study a dispersive counterpart of the classical gas dynamics problem of the interaction of a shock wave with a counter-propagating simple rarefaction wave, often referred to as the shock wave refraction. The refraction of a one-dimensional dispersive shock wave (DSW) due to its head-on collision with the centred rarefaction wave (RW) is considered in the framework of the defocusing nonlinear Schrödinger (NLS) equation. For the integrable cubic nonlinearity case we present a full asymptotic description of the DSW refraction by constructing appropriate exact solutions of the Whitham modulation equations in Riemann invariants. For the NLS equation with saturable nonlinearity, whose modulation system does not possess Riemann invariants, we take advantage of the recently developed method for the DSW description in non-integrable dispersive systems to obtain main physical parameters of the DSW refraction. The key features of the DSW-RW interaction predicted by our modulation theory analysis are confirmed by direct numerical solutions of the full dispersive problem.
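Direct numerical solutions of the full dispersive problem, as referred to above, are typically computed with a split-step Fourier scheme. A minimal sketch for the defocusing cubic NLS on a periodic domain follows; the initial condition, domain, and step sizes are illustrative rather than the refraction setup of the paper.

```python
import numpy as np

def nls_split_step(u0, dt, steps, L=40.0):
    """Strang split-step Fourier integrator for the defocusing cubic NLS
    i u_t + u_xx / 2 - |u|^2 u = 0 on a periodic domain of length L.
    The linear half-steps are exact in Fourier space; the nonlinear step
    is an exact phase rotation since |u| is constant along its flow."""
    n = u0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    half_linear = np.exp(-0.5j * k**2 * (dt / 2))
    u = u0.astype(complex)
    for _ in range(steps):
        u = np.fft.ifft(half_linear * np.fft.fft(u))   # half linear step
        u *= np.exp(-1j * np.abs(u)**2 * dt)           # full nonlinear step
        u = np.fft.ifft(half_linear * np.fft.fft(u))   # half linear step
    return u

# Smooth hump on a zero background, evolved to t = 2.
x = np.linspace(-20, 20, 256, endpoint=False)
u0 = np.exp(-x**2)
u = nls_split_step(u0, dt=0.01, steps=200)
norm0 = np.sum(np.abs(u0)**2)
norm1 = np.sum(np.abs(u)**2)
```

Both substeps are unitary, so the discrete L2 norm (the NLS "mass") is conserved to round-off, a useful sanity check before trusting such a scheme on DSW-RW interaction runs.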
Take the "Ow!" Out of Taxes Now: How to Plan for and Increase Your Medical Deductions
ERIC Educational Resources Information Center
Medisky, Shannon M.
2009-01-01
Each year countless taxpayers overpay simply because they're not taking full advantage of medical deductions. Individuals with disabilities are especially at risk. Time and energy spent running around to doctor visits, therapy sessions, and the like can leave little left to spend on preparing taxes. Fortunately, with a little effort year round and…
Introducing a Model for Optimal Design of Sequential Objective Structured Clinical Examinations
ERIC Educational Resources Information Center
Mortaz Hejri, Sara; Yazdani, Kamran; Labaf, Ali; Norcini, John J.; Jalili, Mohammad
2016-01-01
In a sequential OSCE, which has been suggested as a way to reduce testing costs, candidates take a short screening test, and those who fail are asked to take the full OSCE. In order to introduce an effective and accurate sequential design, we developed a model for designing and evaluating screening OSCEs. Based on two datasets from a 10-station…
Stacul, Stefano; Squeglia, Nunziante
2018-02-15
A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil by a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, also taking into account the influence of tension stiffening; the influence of suction by increasing the stiffness of shallow portions of soil and modeled using the Modified Kovacs model; pile group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile groups analyses. The proposed BEM method saves computational effort compared to more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of this method was verified by comparing results from data from full scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data on a laterally loaded fixed-head pile group composed by reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ.
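The hyperbolic modulus reduction curve used for the non-linear soil response has a standard one-parameter closed form (the Hardin-Drnevich law). The sketch below illustrates that common form, assuming a reference strain `gamma_ref`; the paper's actual curve parameters are not given here.

```python
def hyperbolic_modulus_ratio(gamma, gamma_ref):
    """Secant modulus ratio G/G0 under the Hardin-Drnevich hyperbolic law.

    At zero strain the soil retains its full small-strain stiffness (ratio 1);
    at gamma == gamma_ref the stiffness has degraded to half.
    """
    return 1.0 / (1.0 + abs(gamma) / gamma_ref)
```

For example, with `gamma_ref = 1e-3`, a shear strain of `1e-3` yields a modulus ratio of 0.5.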
Multipolar Ewald methods, 1: theory, accuracy, and performance.
Giese, Timothy J; Panteva, Maria T; Chen, Haoyuan; York, Darrin M
2015-02-10
The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment.
Resistance Tests of a 1/16 Size Model of the Hughes-kaiser Flying Boat, NACA Model 183
NASA Technical Reports Server (NTRS)
Posner, Jack; Woodward, David R.; Olson, Roland E.
1944-01-01
Tank tests were made of a hull model of the Hughes-Kaiser cargo airplane for estimates of take-off performance and maximum gross load for take-off. At hump speeds, with the model free to trim, the trim and resistance were high, which resulted in a load-resistance ratio of approximately 4.0 for a gross load coefficient of 0.75. With a 400,000-lb load, the full-size craft may take off in 69 sec over a distance of 5600 ft.
NASA Technical Reports Server (NTRS)
Kumar, A.
1984-01-01
A computer program NASCRIN has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user oriented and is structured in such a way that for most supersonic flow problems, the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used with suitable changes in the boundary conditions for a variety of other problems.
Bian, Hao; Yang, Qing; Liu, Hewei; Chen, Feng; Du, Guangqing; Si, Jinhai; Hou, Xun
2013-03-01
Netlike or porous microstructures are highly desirable in metal implants and biomedical monitoring applications. However, realization of such microstructures remains technically challenging. Here, we report a facile and environmentally friendly method to prepare netlike microstructures on stainless steel by taking full advantage of liquid-mediated femtosecond laser ablation. An unordered netlike structure and a quasi-ordered array of holes can be fabricated on the surface of stainless steel via an ethanol-mediated femtosecond laser line-scan method. SEM analysis of the surface morphology indicates that the porous netlike structure is on the micrometer scale and that the diameter of the quasi-ordered holes ranges from 280 nm to 320 nm. In addition, we find that the obtained structures are tunable by altering the laser processing parameters, especially the scanning speed. Copyright © 2012 Elsevier B.V. All rights reserved.
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2016-12-01
To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, this paper proposes a novel near-lossless compression algorithm based on adaptive spatial prediction for medical sequence images intended for diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and a Lossless Hadamard Transform before quantization to improve the quality of reconstructed images. The block-based prediction breaks the pixel neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
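A near-lossless guarantee of the kind described above is typically enforced by a uniform residual quantizer with a maximum error bound delta, as in JPEG-LS. The sketch below shows that standard quantizer, not the paper's specific implementation: every reconstructed residual differs from the original by at most delta.

```python
def quantize_residual(r, delta):
    """Uniform near-lossless quantizer with step 2*delta + 1.

    Guarantees |r - dequantize(quantize_residual(r, delta), delta)| <= delta.
    """
    step = 2 * delta + 1
    # Symmetric rounding toward the nearest reconstruction level.
    if r >= 0:
        return (r + delta) // step
    return -((-r + delta) // step)


def dequantize(q, delta):
    """Map a quantizer index back to its reconstruction level."""
    return q * (2 * delta + 1)
```

Setting `delta = 0` reduces this to lossless coding; larger delta trades bounded per-pixel error for fewer symbols to entropy-code.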
NASA Astrophysics Data System (ADS)
Vattré, A.; Devincre, B.; Feyel, F.; Gatti, R.; Groh, S.; Jamond, O.; Roos, A.
2014-02-01
A unified model coupling 3D dislocation dynamics (DD) simulations with the finite element (FE) method is revisited. The so-called Discrete-Continuous Model (DCM) aims to predict plastic flow at the (sub-)micron length scale of materials with complex boundary conditions. The evolution of the dislocation microstructure and the short-range dislocation-dislocation interactions are calculated with a DD code. The long-range mechanical fields due to the dislocations are calculated by a FE code, taking into account the boundary conditions. The coupling procedure is based on eigenstrain theory, and the precise manner in which the plastic slip, i.e. the dislocation glide as calculated by the DD code, is transferred to the integration points of the FE mesh is described in full detail. Several test cases are presented, and the DCM is applied to plastic flow in a single-crystal Nickel-based superalloy.
NASA Technical Reports Server (NTRS)
Green, James R.
1986-01-01
The Ada programming language was developed under the sponsorship of the Department of Defense to address the soaring costs associated with software development and maintenance. Ada is powerful, yet to take full advantage of its power, it is sufficiently complex and different from current programming approaches that there is considerable risk associated with committing a program to be done in Ada. There are also few programs of any substantial size that have been implemented using Ada that may be studied to determine those management methods that resulted in a successful Ada project. The items presented are the author's opinions, which were formed as a result of going through such a software development experience. The difficulties faced, risks assumed, management methods applied, lessons learned, and, most importantly, the techniques that were successful are all valuable sources of management information for those managers ready to take on major Ada development projects.
NASA Astrophysics Data System (ADS)
Pertsev, N. A.; Zembilgotov, A. G.; Waser, R.
1998-08-01
The effective dielectric, piezoelectric, and elastic constants of polycrystalline ferroelectric materials are calculated from single-crystal data by an advanced method of effective medium, which takes into account the piezoelectric interactions between grains in full measure. For bulk BaTiO3 and PbTiO3 polarized ceramics, the dependences of material constants on the remanent polarization are reported. Dielectric and elastic constants are computed also for unpolarized c- and a-textured ferroelectric thin films deposited on cubic or amorphous substrates. It is found that the dielectric properties of BaTiO3 and PbTiO3 polycrystalline thin films strongly depend on the type of crystal texture. The influence of two-dimensional clamping by the substrate on the dielectric and piezoelectric responses of polarized films is described quantitatively and shown to be especially important for the piezoelectric charge coefficient of BaTiO3 films.
NASA Astrophysics Data System (ADS)
Liu, Zhanwen; Feng, Yan; Chen, Hang; Jiao, Licheng
2017-10-01
A novel and effective image fusion method is proposed for creating a highly informative and smooth fused image by merging visible and infrared images. Firstly, a two-scale non-subsampled shearlet transform (NSST) is employed to decompose the visible and infrared images into detail layers and one base layer. Then, phase congruency is adopted to extract saliency maps from the detail layers, and guided filtering is used to compute the filtering output of the base layer and saliency maps. Next, a novel weighted-average technique makes full use of scene consistency for fusion and obtains a coefficient map. Finally, the fused image is acquired by taking the inverse NSST of the fused coefficient map. Experiments show that the proposed approach achieves better performance than other methods in terms of subjective visual effect and objective assessment.
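The decompose-fuse-reconstruct pipeline can be illustrated with a much-simplified stand-in: a box blur replaces the NSST base layer, and a max-absolute rule replaces the phase-congruency saliency maps and guided filtering. This is a toy sketch of the general two-scale idea, not the authors' method.

```python
import numpy as np


def fuse_two_scale(a, b, ksize=5):
    """Toy two-scale fusion of two grayscale images of equal shape.

    Base layer: box blur (stand-in for the NSST base layer).
    Detail layer: fused by a max-absolute rule (stand-in for saliency maps).
    """
    def box(img):
        pad = ksize // 2
        p = np.pad(img.astype(float), pad, mode='edge')
        out = np.zeros(img.shape, dtype=float)
        for dy in range(ksize):
            for dx in range(ksize):
                out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (ksize * ksize)

    base_a, base_b = box(a), box(b)
    det_a, det_b = a - base_a, b - base_b
    # Keep whichever image carries the stronger local detail.
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return 0.5 * (base_a + base_b) + detail
```

For two constant images (no detail anywhere), the result is simply the average of their base layers.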
A full field, 3-D velocimeter for microgravity crystallization experiments
NASA Technical Reports Server (NTRS)
Brodkey, Robert S.; Russ, Keith M.
1991-01-01
The programming and algorithms needed for implementing a full-field, 3-D velocimeter for laminar flow systems and the appropriate hardware to fully implement this ultimate system are discussed. It appears that imaging using a synched pair of video cameras and digitizer boards with synched rails for camera motion will provide a viable solution to the laminar tracking problem. The algorithms given here are simple, which should speed processing. On a heavily loaded VAXstation 3100 the particle identification can take 15 to 30 seconds, with the tracking taking less than one second. It seems reasonable to assume that four image pairs can thus be acquired and analyzed in under one minute.
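The identify-then-track structure described above can be sketched with a greedy nearest-neighbour matcher between particle centroids in consecutive frames. The search radius `max_disp` and the greedy strategy are illustrative assumptions, not the authors' algorithm.

```python
import math


def track_particles(frame1, frame2, max_disp):
    """Greedily match particle centroids (x, y) between two frames.

    Each particle in frame1 is paired with the nearest unmatched particle
    in frame2 within a radius of max_disp. Returns (index1, index2) pairs.
    """
    matches = []
    unused = list(range(len(frame2)))
    for i, (x1, y1) in enumerate(frame1):
        best, best_d = None, max_disp
        for j in unused:
            x2, y2 = frame2[j]
            d = math.hypot(x2 - x1, y2 - y1)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            unused.remove(best)  # each target particle is used at most once
    return matches
```

Particles that move farther than `max_disp`, or that appear or vanish between frames, are simply left unmatched.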
Sahin, Sükran; Kurum, Ekrem
2009-09-01
Ecological monitoring is a complementary component of the overall environmental management and monitoring program of any Environmental Impact Assessment (EIA) report. The monitoring method should be developed for each project phase and allow for periodic reporting and assessment of compliance with the environmental conditions and requirements of the EIA. Also, this method should incorporate a variance request program since site-specific conditions can affect construction on a daily basis and require time-critical application of alternative construction scenarios or environmental management methods integrated with alternative mitigation measures. Finally, taking full advantage of the latest information and communication technologies can enhance the quality of, and public involvement in, the environmental management program. In this paper, a landscape-scale ecological monitoring method for major construction projects is described using, as a basis, 20 months of experience on the Baku-Tbilisi-Ceyhan (BTC) Crude Oil Pipeline Project, covering Turkish Sections Lot B and Lot C. This analysis presents suggestions for improving ecological monitoring for major construction activities.
A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios
Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J.; Iborra, Andrés
2011-01-01
Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method. PMID:22164083
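The exposure-time control loop can be sketched as a textbook PID update driven by the measured saturation level. The gains, the multiplicative update, and the 5% saturation target below are illustrative assumptions, not the published controller.

```python
class PIDExposure:
    """Toy PID loop nudging exposure time toward a target saturation level.

    saturation: fraction of saturated pixels in the last frame (0..1).
    The exposure time is scaled multiplicatively by the PID correction.
    """

    def __init__(self, kp=0.5, ki=0.1, kd=0.05, t0=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.t = t0            # current exposure time (arbitrary units)
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, saturation, target=0.05):
        err = target - saturation        # positive -> image too dark
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        self.t = max(1e-6, self.t * (1.0 + self.kp * err
                                     + self.ki * self.integral
                                     + self.kd * deriv))
        return self.t
```

An under-saturated frame raises the exposure time; an over-saturated one lowers it, mimicking the fast frame-by-frame adaptation reported in the abstract.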
Nonrigid iterative closest points for registration of 3D biomedical surfaces
NASA Astrophysics Data System (ADS)
Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee
2018-01-01
Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on the Iterative Closest Point (ICP) method with multiple correspondences. Our method, called Nonrigid Iterative Closest Points (NICP), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation. Finally, it does not require parametrization of input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike the standard ICP, the distance term is determined based on multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of multiple two-way correspondences. This term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method achieves good performance when the models exhibit no global pose differences or significant bending, for example, families of similar shapes such as human femur and vertebrae models.
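The Laplacian regularization term described above enforces that vertices move coherently with their 1-ring neighbours. Below is a minimal explicit-smoothing sketch of that idea; the 2D displacements and uniform neighbour weights are simplifying assumptions for illustration, not the paper's formulation.

```python
def laplacian_smooth_displacements(disp, neighbors, lam=0.5):
    """One explicit smoothing step on a per-vertex displacement field.

    disp:      list of (dx, dy) displacement vectors, one per vertex.
    neighbors: neighbors[i] is the list of 1-ring vertex indices of vertex i.
    lam:       blend factor; 0 keeps disp unchanged, 1 replaces each
               displacement by its 1-ring mean.
    """
    out = []
    for i, (dx, dy) in enumerate(disp):
        ring = neighbors[i]
        if not ring:
            out.append((dx, dy))
            continue
        mx = sum(disp[j][0] for j in ring) / len(ring)
        my = sum(disp[j][1] for j in ring) / len(ring)
        out.append(((1 - lam) * dx + lam * mx,
                    (1 - lam) * dy + lam * my))
    return out
```

An outlier displacement on one vertex is pulled toward the motion of its neighbours, which is exactly the coherence the regularization term rewards.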
Perceptual video quality assessment in H.264 video coding standard using objective modeling.
Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu
2014-01-01
Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using three subjective impairments (blockiness, blur, and jerkiness), in contrast to the bitrate-only calculation defined in the ITU G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.
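A no-reference blockiness impairment of the kind the model combines can be approximated by comparing luminance jumps at 8-pixel block boundaries with jumps inside blocks. This is a generic sketch of such a measure, not the metric defined in the paper.

```python
import numpy as np


def blockiness(img, block=8):
    """Crude no-reference blockiness score for a grayscale image.

    Computes the mean absolute horizontal neighbour difference at columns
    that cross an 8-pixel block boundary, minus the mean in-block difference.
    Near zero for smooth content; large and positive for blocky content.
    """
    img = np.asarray(img, dtype=float)
    diffs = np.abs(np.diff(img, axis=1))        # horizontal neighbour jumps
    cols = np.arange(diffs.shape[1])
    boundary = (cols % block) == block - 1      # jumps that cross a boundary
    return diffs[:, boundary].mean() - diffs[:, ~boundary].mean()
```

A smooth horizontal ramp scores about zero, while an image with a sharp step exactly at a block boundary scores the full step height.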
Xia, Hongjing; Ruan, Dan; Cohen, Mark S.
2014-01-01
Ballistocardiogram (BCG) artifact remains a major challenge that renders electroencephalographic (EEG) signals hard to interpret in simultaneous EEG and functional MRI (fMRI) data acquisition. Here, we propose an integrated learning and inference approach that takes advantage of a commercial high-density EEG cap, to estimate the BCG contribution in noisy EEG recordings from inside the MR scanner. To estimate reliably the full-scalp BCG artifacts, a near-optimal subset (20 out of 256) of channels first was identified using a modified recording setup. In subsequent recordings inside the MR scanner, BCG-only signal from this subset of channels was used to generate continuous estimates of the full-scalp BCG artifacts via inference, from which the intended EEG signal was recovered. The reconstruction of the EEG was performed with both a direct subtraction and an optimization scheme. We evaluated the performance on both synthetic and real contaminated recordings, and compared it to the benchmark Optimal Basis Set (OBS) method. In the challenging non-event-related-potential (non-ERP) EEG studies, our reconstruction can yield more than fourteen-fold improvement in reducing the normalized RMS error of EEG signals, compared to OBS. PMID:25120421
Hodgin, Katie L; Graham, Dan J
2016-01-01
Previous research has indicated that self-awareness-inducing mirrors can successfully incite behaviors that align with one's personal values, such as helping others. Other research has found a large discrepancy between the high percentage of young adults who report valuing the healthfulness of physical activity (PA) and the low percentage who actually meet PA participation standards. However, few studies have examined how mirror exposure and both perceived and actual body size influence highly valued PA participation among college students. The present study assessed stair versus elevator use on a western college campus and hypothesized that mirror exposure would increase the more personally healthy transportation method of stair use. In accordance with previous research, it was also hypothesized that males and those with a lower body mass index (BMI) would be more likely to take the stairs, and that body size distorting mirrors would impact the stair-elevator decision. One hundred sixty-seven students (51% male) enrolled in an introductory psychology course were recruited to take a survey about their "transportation choices" at an indoor campus parking garage. Participants were individually exposed to either no mirror, a standard full-length mirror, or a full-length mirror manipulated to make the reflected body size appear either slightly thinner or slightly wider than normal before being asked to go to the fourth floor of the garage for a survey. Participants' choice of floor-climbing method (stairs or elevator) was recorded, and they were administered an Internet-based survey assessing demographic information, BMI, self-awareness, perceived body size, and other variables likely to be associated with stair use. 
Results from logistic regression analyses revealed that participants who were not exposed to a mirror [odds ratios (OR) = 0.37, 95% CI: 0.14-0.96], males (OR = 0.33, 95% CI: 0.13-0.85), those with lower BMI (OR = 0.84, 95% CI: 0.71-0.99), those with higher exercise participation (OR = 1.09, 95% CI: 1.02-1.18), and those engaging in more unhealthy weight-control behaviors (OR = 1.55, 95% CI: 1.14-2.11) showed increased odds of taking the stairs. Implications and future directions are discussed.
Haghighat, Roxanna; Cluver, Lucie
2017-01-01
Background Evidence on sexual risk-taking among HIV-positive adolescents and youth in sub-Saharan Africa is urgently needed. This systematic review synthesizes the extant research on prevalence, factors associated with, and interventions to reduce sexual risk-taking among HIV-positive adolescents and youth in sub-Saharan Africa. Methods Studies were located through electronic databases, grey literature, reference harvesting, and contact with researchers. Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed. Quantitative studies that reported on HIV-positive participants (10–24 year olds), included data on at least one of eight outcomes (early sexual debut, inconsistent condom use, older partner, transactional sex, multiple sexual partners, sex while intoxicated, sexually transmitted infections, and pregnancy), and were conducted in sub-Saharan Africa were included. Two authors piloted all processes, screened studies, extracted data independently, and resolved any discrepancies. Due to variance in reported rates and factors associated with sexual risk-taking, meta-analyses were not conducted. Results 610 potentially relevant titles/abstracts resulted in the full text review of 251 records. Forty-two records (n = 35 studies) reported one or multiple sexual practices for 13,536 HIV-positive adolescents/youth from 13 sub-Saharan African countries. Seventeen cross-sectional studies reported on individual, relationship, family, structural, and HIV-related factors associated with sexual risk-taking. However, the majority of the findings were inconsistent across studies, and most studies scored <50% in the quality checklist. Living with a partner, living alone, gender-based violence, food insecurity, and employment were correlated with increased sexual risk-taking, while knowledge of own HIV-positive status and accessing HIV support groups were associated with reduced sexual risk-taking. 
Of the four intervention studies (three RCTs), three evaluated group-based interventions, and one evaluated an individual-focused combination intervention. Three of the interventions were effective at reducing sexual risk-taking, with one reporting no difference between the intervention and control groups. Conclusion Sexual risk-taking among HIV-positive adolescents and youth is high, with inconclusive evidence on potential determinants. Few known studies test secondary HIV-prevention interventions for HIV-positive youth. Effective and feasible low-cost interventions to reduce risk are urgently needed for this group. PMID:28582428
NASA Astrophysics Data System (ADS)
Schmäck, J.; Klotzsche, A.; Van Der Kruk, J.; Vereecken, H.; Bechtold, M.
2017-12-01
The characterization of peatlands is of particular interest, since areas with peat soils represent global hotspots for the exchange of greenhouse gases. Their effect on global warming depends on several parameters, like mean annual water level and land use. Models of greenhouse gas emissions and carbon accumulation in peatlands can be improved by including small-scale soil properties that, e.g., act as gas traps and periodically release gases to the atmosphere during ebullition events. Ground penetrating radar (GPR) is well suited to non- or minimally invasively characterize and improve our understanding of dynamic processes that take place in the critical zone. It uses high-frequency electromagnetic waves to image and characterize the dielectric permittivity and electrical conductivity of the critical zone, which can be related to hydrogeological properties like porosity, soil water content, salinity, and clay content. In the last decade, the full-waveform inversion of crosshole GPR data has proved to be a powerful tool to improve the image resolution compared to standard ray-based methods. This approach was successfully applied to several different aquifers and was able to provide decimeter-scale resolution images including small-scale high-contrast layers that can be related to zones of high porosity, zones of preferential flow, or clay lenses. The comparison to independently measured data, e.g. logging data, proved the reliability of the method. Here, for the first time, crosshole GPR full-waveform inversion is used to image three peatland plots with different land use that are part of the "Ahlen-Falkenberger Moor peat bog complex" in northwestern Germany. The full-waveform inversion of the acquired data returned higher resolution images than standard ray-based GPR methods and is able to improve our understanding of subsurface structures. 
The comparison of the different plots is expected to provide new insights into gas content and gas trapping structures across different land uses. Additionally, season-related changes of peatland soil properties are investigated. The crosshole GPR full-waveform inversion was successfully applied to several datasets and the results show the utility and credibility of GPR FWI to analyze peatland properties.
Being parents with epilepsy: thoughts on its consequences and difficulties affecting their children.
Gauffin, Helena; Flensner, Gullvi; Landtblom, Anne-Marie
2015-01-01
Parents with epilepsy can be concerned about the consequences of epilepsy affecting their children. The aim of this paper is to describe aspects of what it means to be a parent with epilepsy, focusing on the parents' perspectives and their thoughts on having children. Fourteen adults aged 18-35 years with epilepsy and subjective memory decline took part in focus-group interviews. The interviews were conducted according to a semi-structured guideline. Material containing aspects of parenthood was extracted from the original interviews and a secondary analysis was done according to a content-analysis guideline. Interviews with two parents for the Swedish book Leva med epilepsi [To live with epilepsy] by AM Landtblom (Stockholm: Bilda ide; 2009) were analyzed according to the same method. Four themes emerged: (1) a persistent feeling of insecurity, since a seizure can occur at any time and the child could be hurt; (2) a feeling of inadequacy, of not being able to take full responsibility for one's child; (3) acknowledgment that one's children are forced to take more responsibility than other children do; and (4) a feeling of guilt, of not being able to fulfill one's expectations of being the parent one would like to be. The parents with epilepsy are deeply concerned about how epilepsy affects the lives of their children. These parents are always aware that a seizure may occur and reflect on how this can affect their child. They try to foresee possible dangerous situations and prevent them. These parents were sad that they could not always take full responsibility for their child and could not live up to their own expectations of parenthood. Supportive programs may be of importance since fear for the safety of the child increases the psychosocial burden of epilepsy. There were also a few parents who did not acknowledge the safety issue of their child; the authors believe that it is important to identify these parents and provide extra information and support to them.
Challenges in Species Tree Estimation Under the Multispecies Coalescent Model
Xu, Bo; Yang, Ziheng
2016-01-01
The multispecies coalescent (MSC) model has emerged as a powerful framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. A number of methods have been developed in the past few years to estimate the species tree under the MSC. The full likelihood methods (including maximum likelihood and Bayesian inference) average over the unknown gene trees and accommodate their uncertainties properly but involve intensive computation. The approximate or summary coalescent methods are computationally fast and applicable to genomic datasets with thousands of loci, but do not make efficient use of the information in the multilocus data. Most of them take the two-step approach of reconstructing the gene trees for multiple loci by phylogenetic methods and then treating the estimated gene trees as observed data, without accounting for their uncertainties appropriately. In this article we review the statistical nature of the species tree estimation problem under the MSC, and explore the conceptual issues and challenges of species tree estimation by focusing mainly on simple cases of three or four closely related species. We use mathematical analysis and computer simulation to demonstrate that large differences in statistical performance may exist between the two classes of methods. We illustrate several counterintuitive behaviors that may occur with the summary methods; these arise from inefficient use of the information in the data and vanish when the data are analyzed using full-likelihood methods. They include (i) unidentifiability of parameters in the model, (ii) inconsistency in the so-called anomaly zone, (iii) singularity on the likelihood surface, and (iv) deterioration of performance upon addition of more data.
We discuss the challenges and strategies of species tree inference for distantly related species when the molecular clock is violated, and highlight the need for improving the computational efficiency and model realism of the likelihood methods as well as the statistical efficiency of the summary methods. PMID:27927902
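The three-species case discussed above has a closed-form solution under the MSC: for species tree ((A,B),C) with an internal branch of length T in coalescent units, the gene tree matching the species tree has probability 1 - (2/3)e^(-T), and each discordant topology has probability (1/3)e^(-T). A minimal sketch of this standard result:

```python
import math

def gene_tree_probs(T):
    """Probabilities of the three rooted gene-tree topologies for three
    species with species tree ((A,B),C) and internal branch length T,
    measured in coalescent units (standard MSC result)."""
    p_discord = math.exp(-T) / 3.0      # each of the two discordant trees
    p_concord = 1.0 - 2.0 * p_discord   # tree matching the species tree
    return p_concord, p_discord, p_discord

# Short internal branches produce high gene-tree discordance:
for T in (0.1, 0.5, 2.0):
    pc, pd, _ = gene_tree_probs(T)
    print(f"T={T}: concordant={pc:.3f}, each discordant={pd:.3f}")
```

As T shrinks toward zero, the three topologies approach equal probability (1/3 each), which is the high-discordance regime where the statistical behavior of summary methods is most easily probed.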
Full analogue electronic realisation of the Hodgkin-Huxley neuronal dynamics in weak-inversion CMOS.
Lazaridis, E; Drakakis, E M; Barahona, M
2007-01-01
This paper presents a non-linear analog synthesis path towards the modeling and full implementation of the Hodgkin-Huxley neuronal dynamics in silicon. The proposed circuits have been realized in weak-inversion CMOS technology and take advantage of both log-domain and translinear transistor-level techniques.
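The dynamics realized in silicon are the classic Hodgkin-Huxley equations. As a point of reference only, a forward-Euler integration of the textbook squid-axon parameter set (not the CMOS implementation's parameters) can be sketched as:

```python
import math

def hh_sim(I_ext=10.0, dt=0.01, t_max=50.0):
    """Hodgkin-Huxley point neuron, classic squid-axon parameters.
    I_ext in uA/cm^2, dt and t_max in ms; returns the voltage trace (mV)."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387

    # Voltage-dependent gate rate functions (1/ms)
    a_m = lambda V: 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    b_m = lambda V: 4.0 * math.exp(-(V + 65) / 18)
    a_h = lambda V: 0.07 * math.exp(-(V + 65) / 20)
    b_h = lambda V: 1.0 / (1 + math.exp(-(V + 35) / 10))
    a_n = lambda V: 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    b_n = lambda V: 0.125 * math.exp(-(V + 65) / 80)

    V = -65.0
    m = a_m(V) / (a_m(V) + b_m(V))   # start gates at their steady state
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = []
    for _ in range(int(t_max / dt)):
        INa = gNa * m**3 * h * (V - ENa)   # sodium current
        IK = gK * n**4 * (V - EK)          # potassium current
        IL = gL * (V - EL)                 # leak current
        V += dt * (I_ext - INa - IK - IL) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return trace
```

With a sustained 10 uA/cm^2 input the model fires repetitively; the silicon realisation implements these same gating dynamics with weak-inversion transistor currents instead of explicit time stepping.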
A Flexible Mobile Education System Approach
ERIC Educational Resources Information Center
Baloglu, Arzu
2007-01-01
Distance learning appeals to small business owners, employees, municipalities, state establishments, and non-governmental organizations. Distance-learning programs are ideal for people who have a full-time job or other commitments and cannot take time off to study full time. This might be a professional who needs to update his or her knowledge or skills, or a…
NASA Astrophysics Data System (ADS)
Kelley, Ryan P.
With an increasing quantity of spent nuclear fuel being stored at power plants across the United States, the demand exists for a new method of cask monitoring. Certifying these casks for transportation and long-term storage is a unique dilemma: their sealed nature lends added security, but at the cost of requiring non-invasive measurement techniques to verify their contents. This research will design and develop a new method of passively scanning spent fuel casks using 4He scintillation detectors to make this process more accurate. 4He detectors are a relatively new technological development whose full capabilities have not yet been exploited. These detectors take advantage of the high 4He cross section for elastic scattering at fast neutron energies, particularly the resonance around 1 MeV. If one of these elastic scattering interactions occurs within the detector, the 4He nucleus takes energy from the incident neutron, then de-excites by scintillation. Photomultiplier Tubes (PMTs) at either end of the detector tube convert this emitted light into an electrical signal. The goal of this research is to use the neutron spectroscopy features of 4He scintillation detectors to maintain accountability of spent fuel in storage. This project will support spent fuel safeguards and the detection of fissile material, in order to minimize the risk of nuclear proliferation and terrorism.
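The elastic-scattering energy transfer these detectors exploit follows from two-body kinematics: in a single scatter, a nucleus of mass number A can receive at most a fraction 4A/(1+A)^2 of the neutron's kinetic energy. A minimal illustration:

```python
def max_recoil_fraction(A):
    """Maximum fraction of a neutron's kinetic energy transferred to a
    nucleus of mass number A in one elastic scatter (head-on collision,
    non-relativistic kinematics)."""
    return 4.0 * A / (1.0 + A) ** 2

# A 4He nucleus can receive up to 64% of the incident neutron's energy,
# which is part of what makes it usable for fast-neutron spectroscopy:
print(max_recoil_fraction(4))   # 0.64
```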
Clinical study of cultured epithelial autografts in liquid suspension in severe burn patients.
Yim, Haejun; Yang, Hyeong Tae; Cho, Yong Suk; Seo, Cheong Hoon; Lee, Boung Chul; Ko, Jang Hyu; Kwak, In Suk; Kim, Dohern; Hur, Jun; Kim, Jong Hyun; Chun, Wook
2011-09-01
We address the clinical application of suspension-type cultured epithelial autografts (CEAs), Keraheal™ (MCTT, Seoul, Korea), along with their effects, application method, merits, and demerits. From February 2007 to June 2010, 29 patients with extensive burns participated in the clinical test of the suspension-type CEA. A widely meshed autograft (1:4-6 ratio) was applied to the wound bed and the suspension-type CEA was sprayed with a Tissomat cell sprayer, followed by a spray of Tissucol, a fibrin sealant. The patients' (men/women = 26/3) median (interquartile range) age was 42 (30-49) years, the burned TBSA was 55 (44-60)%, and the full-thickness burn area was 40 (30-46.5)%. The area of Keraheal™ applied was 800 (400-1200) cm(2). The take rate was 96 (90.5-99)% and 100 (98.5-100)% at 2 and 4 weeks after treatment with Keraheal™, respectively. The Vancouver burn scar scale was 5 (4-6.5), 4 (3-6), and 3 (2-4) at 8, 12, and 24 weeks after the Keraheal™ application. A widely meshed autograft must be applied in massive burns, but its take rate is greatly reduced. CEAs enhance the take rate of a widely meshed autograft in massive burns and allow grafting of a widely meshed autograft together with acellular dermal matrix in some cases. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
Level set method for image segmentation based on moment competition
NASA Astrophysics Data System (ADS)
Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai
2015-05-01
We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.
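As a rough illustration of the three-point labeling idea only (the paper's actual energy functional and moment construction are not reproduced here), the force arms can be formed from the intensity differences between each pixel and the three manually labeled points:

```python
# Illustrative sketch: per-pixel "force arms" as intensity differences to
# three labeled reference points, which then weight a region-based force.
def force_arms(image, points):
    """image: 2D list of intensities; points: three (row, col) labels.
    Returns, for each pixel, a tuple of absolute intensity differences
    to the three labeled points."""
    refs = [image[r][c] for r, c in points]
    return [[tuple(abs(v - ref) for ref in refs) for v in row]
            for row in image]
```

Pixels that differ strongly from a labeled point get a long arm with respect to it, so the corresponding region force contributes a larger moment there.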
Coupling hydrodynamic and wave propagation modeling for waveform modeling of SPE.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Steedman, D. W.; Rougier, E.; Delorey, A.; Bradley, C. R.
2015-12-01
The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. This paper presents an effort to improve, through numerical modeling, knowledge of the processes that affect seismic wave propagation from the hydrodynamic/plastic source region to the elastic/anelastic far field. The challenge is to couple the prompt processes that take place in the near-source region to those taking place later in time due to wave propagation in complex 3D geologic environments. In this paper, we report on results of first-principles simulations coupling hydrodynamic simulation codes (Abaqus and CASH) with a 3D full waveform propagation code, SPECFEM3D. Abaqus and CASH model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming, and jointed/weathered granite. LANL has recently been employing a Coupled Euler-Lagrange (CEL) modeling capability, which has allowed the testing of a new phenomenological model for stored shear energy in jointed material. This unique modeling capability has enabled high-fidelity modeling of the explosive, the weak grout-filled borehole, and the surrounding jointed rock. SPECFEM3D is based on the Spectral Element Method, a direct numerical method for full waveform modeling with well-established accuracy (e.g., Komatitsch, 1998, 2002), owing to its use of the weak formulation of the wave equation and of high-order polynomial functions. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. Displacement time series at these points are computed from the output of CASH or Abaqus (by interpolation if needed) and fed into the time-marching scheme of SPECFEM3D. We will present validation tests and waveforms modeled for several SPE tests conducted so far, with a special focus on the effect of local topography.
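The coupling step described above amounts to resampling hydrocode displacement time series onto the SEM time grid. A minimal sketch of such an interpolation, assuming a monotonically increasing source time grid (the actual SPECFEM3D coupling machinery is more involved):

```python
import bisect

def resample(t_src, u_src, t_new):
    """Linearly interpolate displacements u_src given at times t_src
    (sorted, possibly non-uniform hydrocode output) onto the times t_new
    of the wave-propagation code's time-marching scheme.
    Values outside the source interval are held at the end values."""
    out = []
    for t in t_new:
        i = bisect.bisect_right(t_src, t)
        if i <= 0:
            out.append(u_src[0])
        elif i >= len(t_src):
            out.append(u_src[-1])
        else:
            t0, t1 = t_src[i - 1], t_src[i]
            w = (t - t0) / (t1 - t0)
            out.append((1 - w) * u_src[i - 1] + w * u_src[i])
    return out
```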
NASA Astrophysics Data System (ADS)
Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Giroux, Bernard
2017-03-01
Full Waveform Inversion (FWI) aims at recovering the elastic parameters of the Earth by matching recordings of the ground motion with the direct solution of the wave equation. Modeling the wave propagation for realistic scenarios is computationally intensive, which limits the applicability of FWI. The current hardware evolution brings increasing parallel computing power that can speed up the computations in FWI. However, to take advantage of the diversity of parallel architectures presently available, new programming approaches are required. In this work, we explore the use of OpenCL to develop a portable code that can take advantage of the many parallel processor architectures now available. We present a program called SeisCL for 2D and 3D viscoelastic FWI in the time domain. The code computes the forward and adjoint wavefields using finite differences and outputs the gradient of the misfit function given by the adjoint state method. To demonstrate the code's portability, the performance of SeisCL is tested on three different devices: Intel CPUs, NVidia GPUs, and Intel Xeon Phi. Results show that the use of GPUs with OpenCL can speed up the computations by nearly two orders of magnitude over a single-threaded application on the CPU. Although OpenCL allows code portability, we show that some device-specific optimization is still required to get the best performance out of a specific architecture. Using OpenCL in conjunction with MPI allows the domain decomposition of large models on several devices located on different nodes of a cluster. For large enough models, the speedup of the domain decomposition varies quasi-linearly with the number of devices. Finally, we investigate two different approaches to compute the gradient by the adjoint state method and show the significant advantages of using OpenCL for FWI.
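For orientation only, the kind of explicit finite-difference time stepping that SeisCL parallelizes can be illustrated with a 1D second-order scalar-wave stencil (SeisCL itself is 2D/3D viscoelastic and runs these loops as OpenCL kernels; all parameters below are illustrative):

```python
import math

def fd_wave_1d(nx=201, nt=400, dx=5.0, dt=0.0005, c=2000.0, src_ix=100):
    """1D constant-velocity acoustic wave equation, 2nd order in time and
    space, with a Ricker source injected at grid index src_ix.
    Returns the wavefield at the final time step."""
    u_prev = [0.0] * nx
    u = [0.0] * nx
    r2 = (c * dt / dx) ** 2          # squared Courant number (must be <= 1)
    f0 = 25.0                        # Ricker wavelet peak frequency (Hz)
    for it in range(nt):
        t = it * dt - 1.0 / f0
        a = (math.pi * f0 * t) ** 2
        src = (1 - 2 * a) * math.exp(-a)   # Ricker source wavelet
        u_next = [0.0] * nx
        for i in range(1, nx - 1):          # interior update stencil
            u_next[i] = (2 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
        u_next[src_ix] += dt * dt * src     # inject source term
        u_prev, u = u, u_next
    return u
```

In SeisCL the interior-update loop is the part mapped onto GPU work-items, and the adjoint wavefield is computed with the same stencil run on the residual sources.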
Chameleonic Learner: Learning and Self-Assessment in Context
ERIC Educational Resources Information Center
Bourke, Roseanna
2010-01-01
Author Dr Roseanna Bourke takes the reader on a fascinating exploration of learning: the theory, practice and young people's take on it. What do you say to a young person who tells you her brain is an eighth full? Or to the one who says he only knows he has learned something when he receives a stamp or a sticker? This book is about how learners…
Opening Our Doors: Taking Public Library Service to Preschool and Day-Care Facilities.
ERIC Educational Resources Information Center
Harris, Sally
The Opening Our Doors Project of the Pioneer Library System of Norman, Oklahoma takes public library service to preschool and day care facilities by means of learning kits housed in tote bags. The sturdy, zippered tote bags are full of books, games, toys, learning folders, and so forth. There is a tote bag for each of 75 different topics. Topics…
For Some at U. of Florida, Spring and Summer Are the New Academic Year
ERIC Educational Resources Information Center
Hoover, Eric
2013-01-01
Some students at the University of Florida can take classes only during the spring and summer semesters for as long as they are enrolled. Each year they will get a four-month break--the fall semester--when they can take online courses, study abroad, or do internships. Some may opt to work. Despite their schedules, the students are full-fledged…
Space Archaeology: Attribute, Object, Task and Method
NASA Astrophysics Data System (ADS)
Wang, Xinyuan; Guo, Huadong; Luo, Lei; Liu, Chuansheng
2017-04-01
Archaeology takes the material remains of human activity as its research object and uses those fragmentary remains to reconstruct the humanistic and natural environments of different historical periods. Space archaeology is a new branch of archaeology. Its study object is the humanistic-natural complex, including the remains of human activities and living environments on the Earth's surface. Its research method, the application of space information technologies to this complex, is an innovative process of archaeological information acquisition, interpretation, and reconstruction, aiming at the 3-D dynamic reconstruction of cultural heritages by constructing the digital cultural-heritage sphere. Space archaeology is highly interdisciplinary, linking the natural sciences, the social sciences, and the humanities. Its task is to reveal the history, characteristics, and patterns of human activities in the past, as well as to understand the evolutionary processes guiding the relationship between humans and their environment. This paper summarizes six important aspects of space archaeology and five crucial recommendations for the establishment and development of this new discipline. The six important aspects are: (1) technologies and methods for non-destructive detection of archaeological sites; (2) space technologies for the protection and monitoring of cultural heritages; (3) digital environmental reconstruction of archaeological sites; (4) spatial data storage and data mining of cultural heritages; (5) virtual archaeology, digital reproduction, and public information and presentation systems; and (6) the construction of a scientific platform for the digital cultural-heritage sphere.
The five key recommendations for establishing the discipline of space archaeology are: (1) encouraging the full integration of the strengths of archaeology and museology with space technology, to promote the application of space technologies to cultural heritages; (2) establishing a new disciplinary framework to guide current research on space technologies for cultural heritages; (3) carrying out research on the key problems of theory-technology-application integration for large cultural heritage sites, to obtain an essential and comprehensive scientific understanding of them; (4) focused planning and implementation of major scientific programs on Earth observation for cultural heritage, including those relevant to the development of theory and methods, technology combination and applicability, impact assessments, and virtual reconstruction; and (5) taking full advantage of cultural heritage and Earth observation sciences to strengthen space archaeology through improvements and refinements in both disciplinary practice and theoretical development. Several case studies along the ancient Silk Road are given to demonstrate the potential benefits of space archaeology.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-04
...), and if the permissible methods of taking and requirements pertaining to the mitigation, monitoring and... vibratory hammer extraction methods and structures will be removed via cable lifting. In addition... be removed via vibratory hammer extraction methods. Operations will begin on the pilings and...
50 CFR 218.232 - Permissible methods of taking.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Low Frequency Active (SURTASS LFA) Sonar § 218.232 Permissible methods of taking. (a) Under Letters of.... This annual per-stock cap of 12 percent applies regardless of the number of SURTASS LFA sonar vessels...
50 CFR 218.232 - Permissible methods of taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Low Frequency Active (SURTASS LFA) Sonar § 218.232 Permissible methods of taking. (a) Under Letters of.... This annual per-stock cap of 12 percent applies regardless of the number of SURTASS LFA sonar vessels...
High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.
Zhu, Xiangbin; Qiu, Huiling
2016-01-01
Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments, and cyber security. However, the classification accuracy of most existing methods is insufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine, and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data; it extracts more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.
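The HMM stages of such a classifier ultimately score sensor sequences by model likelihood: one model is trained per activity and a sequence is assigned to the highest-scoring model. As a generic illustration only (a plain discrete-HMM forward algorithm, not the paper's three-stage continuous variant):

```python
def forward_likelihood(pi, A, B, obs):
    """Standard HMM forward algorithm.
    pi: initial state probabilities; A[i][j]: transition probabilities;
    B[i][o]: emission probabilities; obs: observed symbol sequence.
    Returns P(obs | model)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)
```

In practice the recursion is done in log space to avoid underflow on long sensor streams; continuous HMMs replace the discrete emission table B with, e.g., Gaussian densities over the reduced feature vectors.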
Precise Determination of the Orientation of the Solar Image
NASA Astrophysics Data System (ADS)
Győri, L.
2010-12-01
Accurate heliographic coordinates of objects on the Sun have to be known in several fields of solar physics. One of the factors that affect the accuracy of the measurements of the heliographic coordinates is the accuracy of the orientation of a solar image. In this paper the well-known drift method for determining the orientation of the solar image is applied to data taken with a solar telescope equipped with a CCD camera. The factors that influence the accuracy of the method are systematically discussed, and the necessary corrections are determined. These factors are as follows: the trajectory of the center of the solar disk on the CCD with the telescope drive turned off, the astronomical refraction, the change of the declination of the Sun, and the optical distortion of the telescope. The method can be used on any solar telescope that is equipped with a CCD camera and is capable of taking solar full-disk images. As an example to illustrate the method and its application, the orientation of solar images taken with the Gyula heliograph is determined. As a byproduct, a new method to determine the optical distortion of a solar telescope is proposed.
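The core of the drift method reduces to fitting a line to the drifting disk-center positions recorded with the drive turned off; the angle of that line on the CCD gives the image orientation. A minimal least-squares sketch, omitting the refraction, declination-change, and distortion corrections discussed above:

```python
import math

def drift_angle(xs, ys):
    """Orientation angle (degrees) of the drift line fitted by least
    squares to disk-center pixel positions (xs, ys) on the CCD."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return math.degrees(math.atan2(sxy, sxx))  # angle vs. the CCD row axis
```

On real data the corrections listed in the abstract shift this raw angle, which is exactly why the paper discusses them systematically.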
Taguchi optimization of bismuth-telluride based thermoelectric cooler
NASA Astrophysics Data System (ADS)
Anant Kishore, Ravi; Kumar, Prashant; Sanghadasa, Mohan; Priya, Shashank
2017-07-01
In the last few decades, considerable effort has been made to enhance the figure-of-merit (ZT) of thermoelectric (TE) materials. However, the performance of commercial TE devices still remains low because the module figure-of-merit depends not only on the material ZT but also on the operating conditions and configuration of the TE modules. This study takes into account a comprehensive set of parameters to conduct a numerical performance analysis of the thermoelectric cooler (TEC) using a Taguchi optimization method. The Taguchi method is a statistical tool that predicts the optimal performance with far fewer experimental runs than conventional experimental techniques. The Taguchi results are also compared with the optimized parameters obtained by a full factorial optimization method, which reveals that the Taguchi method provides an optimum or near-optimum TEC configuration using only 25 experiments against the 3125 experiments needed by the conventional optimization method. This study also shows that environmental factors such as ambient temperature and cooling coefficient do not significantly affect the optimum geometry and optimum operating temperature of TECs. The optimum TEC configuration for simultaneous optimization of cooling capacity and coefficient of performance is also provided.
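Taguchi analysis ranks factor levels by signal-to-noise (S/N) ratio; the 25-versus-3125 comparison above corresponds to an L25 orthogonal array versus a full factorial of five factors at five levels (5^5 = 3125). The two standard S/N forms that would apply here, larger-the-better for responses to maximize (e.g. cooling capacity, COP) and smaller-the-better for responses to minimize, are:

```python
import math

def sn_larger_better(ys):
    """Taguchi larger-the-better S/N ratio (dB) over replicate responses ys."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

def sn_smaller_better(ys):
    """Taguchi smaller-the-better S/N ratio (dB) over replicate responses ys."""
    return -10.0 * math.log10(sum(y**2 for y in ys) / len(ys))
```

For each factor, the level with the highest mean S/N over the orthogonal-array runs is taken as (near-)optimal, which is how 25 runs can stand in for the full factorial.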
25 CFR 38.12 - Leave system for education personnel.
Code of Federal Regulations, 2011 CFR
2011-04-01
... receive up to 136 hours of school vacation time for use when school is not in session. School vacations... to work during the school vacation time or if the program will not permit school term employees to take such vacation time. (b) Leave for full-time, year-long employees. Employees who are on a full-time...
The Effect of Attending Full-Day Kindergarten on English Learner Students
ERIC Educational Resources Information Center
Cannon, Jill S.; Jacknowitz, Alison; Painter, Gary
2011-01-01
A significant and growing English learner (EL) population attends public schools in the United States. Evidence suggests they are at a disadvantage when entering school and their achievement lags behind non-EL students. Some educators have promoted full-day kindergarten programs as especially helpful for EL students. We take advantage of the large…
Alternative (non-animal) methods for cosmetics testing: current status and future prospects-2010.
Adler, Sarah; Basketter, David; Creton, Stuart; Pelkonen, Olavi; van Benthem, Jan; Zuang, Valérie; Andersen, Klaus Ejner; Angers-Loustau, Alexandre; Aptula, Aynur; Bal-Price, Anna; Benfenati, Emilio; Bernauer, Ulrike; Bessems, Jos; Bois, Frederic Y; Boobis, Alan; Brandon, Esther; Bremer, Susanne; Broschard, Thomas; Casati, Silvia; Coecke, Sandra; Corvi, Raffaella; Cronin, Mark; Daston, George; Dekant, Wolfgang; Felter, Susan; Grignard, Elise; Gundert-Remy, Ursula; Heinonen, Tuula; Kimber, Ian; Kleinjans, Jos; Komulainen, Hannu; Kreiling, Reinhard; Kreysa, Joachim; Leite, Sofia Batista; Loizou, George; Maxwell, Gavin; Mazzatorta, Paolo; Munn, Sharon; Pfuhler, Stefan; Phrakonkham, Pascal; Piersma, Aldert; Poth, Albrecht; Prieto, Pilar; Repetto, Guillermo; Rogiers, Vera; Schoeters, Greet; Schwarz, Michael; Serafimova, Rositsa; Tähti, Hanna; Testai, Emanuela; van Delft, Joost; van Loveren, Henk; Vinken, Mathieu; Worth, Andrew; Zaldivar, José-Manuel
2011-05-01
The 7th Amendment to the EU Cosmetics Directive prohibits the marketing in Europe of animal-tested cosmetics after 2013. In that context, the European Commission invited stakeholder bodies (industry, non-governmental organisations, EU Member States, and the Commission's Scientific Committee on Consumer Safety) to identify scientific experts in five toxicological areas, i.e. toxicokinetics, repeated dose toxicity, carcinogenicity, skin sensitisation, and reproductive toxicity, for which the Directive foresees that the 2013 deadline could be further extended should alternative and validated methods not be available in time. The selected experts were asked to analyse the status and prospects of alternative methods and to provide a scientifically sound estimate of the time necessary to achieve full replacement of animal testing. In summary, the experts confirmed that it will take at least another 7-9 years to replace the current in vivo animal tests used for the safety assessment of cosmetic ingredients for skin sensitisation. However, the experts were also of the opinion that alternative methods may be able to give hazard information, i.e. to differentiate between sensitisers and non-sensitisers, ahead of 2017. This would, however, not provide the complete picture of what constitutes a safe exposure, because the relative potency of a sensitiser would not be known. For toxicokinetics, the timeframe was 5-7 years to develop the models still lacking to predict lung absorption and renal/biliary excretion, and even longer to integrate the methods to fully replace the animal toxicokinetic models. For the systemic toxicological endpoints of repeated dose toxicity, carcinogenicity and reproductive toxicity, the time horizon for full replacement could not be estimated.
PREFACE: International Symposium on Geohazards and Geomechanics (ISGG2015)
NASA Astrophysics Data System (ADS)
Utili, S.
2015-09-01
These Conference Proceedings contain the full papers, in electronic format, of the International Symposium on 'Geohazards and Geomechanics', held at the University of Warwick, UK, on September 10-11, 2015. The Symposium brings together the complementary expertise of world-leading groups carrying out research on the engineering assessment, prevention, and mitigation of geohazards. A total of 58 papers, including 8 keynote lectures, cover phenomena such as landslide initiation and propagation, debris flow, rockfalls, soil liquefaction, ground improvement, hazard zonation, risk mapping, floods, and gas and leachates. The techniques reported in the papers to investigate geohazards involve numerical modeling (finite element method, discrete element method, material point method, meshless methods, and particle methods), experimentation (laboratory experiments, centrifuge tests, and field monitoring), and simplified analytical techniques. All the contributions in this volume have been peer reviewed according to rigorous international standards; however, the authors take full responsibility for the content of their papers. Agreements are in place for a special issue dedicated to the Symposium in three international journals: Engineering Geology, Computational Particle Mechanics, and International Journal of Geohazards and Environment. Authors of selected papers will be invited to submit an extended version of their work to these journals, which will independently assess the papers. The Symposium is supported by the Technical Committee 'Geo-mechanics from Micro to Macro' (TC105) of the International Society for Soil Mechanics and Geotechnical Engineering (ISSMGE), 'Slope Stability in Engineering Practice' (TC208), 'Forensic Geotechnical Engineering' (TC302), the British Geotechnical Association, and the EU FP7 IRSES project 'Geohazards and Geomechanics'. The organizers would also like to thank all authors and their supporting institutions for their contributions.
For any further enquiries or information on the conference proceedings please contact the organizer, Dr Stefano Utili, University of Warwick, s.utili@warwick.ac.uk.
Baulac, Michel; Rosenow, Felix; Toledo, Manuel; Terada, Kiyohito; Li, Ting; De Backer, Marc; Werhahn, Konrad J; Brock, Melissa
2017-01-01
Further options for monotherapy are needed to treat newly diagnosed epilepsy in adults. We assessed the efficacy, safety, and tolerability of lacosamide as a first-line monotherapy option for these patients. In this phase 3, randomised, double-blind, non-inferiority trial, patients from 185 epilepsy or general neurology centres in Europe, North America, and the Asia Pacific region, aged 16 years or older and with newly diagnosed epilepsy were randomly assigned in a 1:1 ratio, via a computer-generated code, to receive lacosamide monotherapy or controlled-release carbamazepine (carbamazepine-CR) twice daily. Patients, investigators, and trial personnel were masked to treatment allocation. From starting doses of 100 mg/day lacosamide or 200 mg/day carbamazepine-CR, uptitration to the first target level of 200 mg/day and 400 mg/day, respectively, took place over 2 weeks. After a 1-week stabilisation period, patients entered a 6-month assessment period. If a seizure occurred, the dose was titrated to the next target level (400 or 600 mg/day for lacosamide and 800 or 1200 mg/day for carbamazepine-CR) over 2 weeks with a 1-week stabilisation period, and the 6-month assessment period began again. Patients who completed 6 months of treatment and remained seizure-free entered a 6-month maintenance period on the same dose. The primary efficacy outcome was the proportion of patients remaining free from seizures for 6 consecutive months after stabilisation at the last assessed dose. The predefined non-inferiority criteria were -12% absolute and -20% relative difference between treatment groups. This trial is registered with ClinicalTrials.gov, number NCT01243177. The trial was done between April 27, 2011, and Aug 7, 2015. 888 patients were randomly assigned treatment. 
444 patients taking lacosamide and 442 taking carbamazepine-CR were included in the full analysis set (took at least one dose of study treatment), and 408 and 397, respectively, were included in the per-protocol set. In the full analysis set, 327 (74%) patients in the lacosamide group and 308 (70%) in the carbamazepine-CR group completed 6 months of treatment without seizures. The proportion of patients in the full analysis set predicted by the Kaplan-Meier method to be seizure-free at 6 months was 90% taking lacosamide and 91% taking carbamazepine-CR (absolute treatment difference: -1·3%, 95% CI -5·5 to 2·8; relative treatment difference: -6·0%). Kaplan-Meier estimates were similar in the per-protocol set (92% and 93%; -1·3%, -5·3 to 2·7; -5·7%). Treatment-emergent adverse events were reported in 328 (74%) patients receiving lacosamide and 332 (75%) receiving carbamazepine-CR. 32 (7%) patients taking lacosamide and 43 (10%) taking carbamazepine-CR had serious treatment-emergent adverse events, and 47 (11%) and 69 (16%), respectively, had treatment-emergent adverse events that led to withdrawal. Treatment with lacosamide met the predefined non-inferiority criteria when compared with carbamazepine-CR. Therefore, it might be useful as first-line monotherapy for adults with newly diagnosed epilepsy. UCB Pharma. Copyright © 2016 Elsevier Ltd. All rights reserved.
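The seizure-freedom proportions above were obtained with the Kaplan-Meier method. A minimal product-limit sketch on hypothetical data (not the trial's), with censored patients removed from the risk set after their follow-up time:

```python
def km_estimate(times, events, t):
    """Kaplan-Meier product-limit estimate of the event-free probability
    at time t. times: follow-up time per patient; events: 1 = event
    (e.g. seizure), 0 = censored."""
    s = 1.0
    for u in sorted(set(tm for tm, e in zip(times, events) if e == 1)):
        if u > t:
            break
        at_risk = sum(1 for tm in times if tm >= u)                      # n_i
        d = sum(1 for tm, e in zip(times, events) if tm == u and e == 1) # d_i
        s *= 1.0 - d / at_risk
    return s
```

Each event time contributes a factor (1 - d_i/n_i); censored observations only shrink the risk set, which is why the estimator handles patients who withdrew before 6 months.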
Exploring Space and Place with Walking Interviews
ERIC Educational Resources Information Center
Jones, Phil; Bunce, Griff; Evans, James; Gibbs, Hannah; Hein, Jane Ricketts
2008-01-01
This article explores the use of walking interviews as a research method. In spite of a wave of interest in methods which take interviewing out of the "safe," stationary environment, there has been limited work critically examining the techniques for undertaking such work. Curiously for a method which takes an explicitly spatial approach, few…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-12
... subsistence uses (where relevant), and if the permissible methods of taking and requirements pertaining to the... mouth of Chapman Bay. Pilings would be removed by vibratory hammer extraction methods and structures... day would be removed via vibratory hammer extraction methods. Typically the hammer vibrates for less...
ERIC Educational Resources Information Center
Hilton, John, III; Sweat, Anthony R.; Plummer, Kenneth
2015-01-01
The purpose of this study is to examine the relationship between student in-class note-taking and pre-class reading with perceived in-class spiritual and religious outcomes. This study surveyed 620 students enrolled in six different sections of an introductory religion course at a private religious university. Full-time religious faculty members…
NASA Astrophysics Data System (ADS)
Buchanan, Dennis J.; John, Reji; Brockman, Robert A.; Rosenberger, Andrew H.
2010-01-01
Shot peening is a commonly used surface treatment process that imparts compressive residual stresses into the surface of metal components. Compressive residual stresses retard initiation and growth of fatigue cracks. During component loading history, shot-peened residual stresses may change due to thermal exposure, creep, and cyclic loading. In these instances, taking full credit for compressive residual stresses would result in a nonconservative life prediction. This article describes a methodical approach for characterizing and modeling residual stress relaxation under elevated temperature loading, near and above the monotonic yield strength of IN100. The model incorporates the dominant creep deformation mechanism, coupling between the creep and plasticity models, and effects of prior plastic strain to simulate surface treatment deformation.
Intelligent decision support algorithm for distribution system restoration.
Singh, Reetu; Mehfuz, Shabana; Kumar, Parmod
2016-01-01
The distribution system is the means of revenue for an electric utility. It needs to be restored as soon as possible if any feeder, or the complete system, trips out due to a fault or any other cause. Further, uncertainty in the loads results in variations in the distribution network's parameters. Thus, an intelligent algorithm incorporating hybrid fuzzy-grey relation, which can take the uncertainties into account and compare the sequences, is discussed to analyse and restore the distribution system. Simulation studies are carried out to show the utility of the method by ranking the restoration plans for a typical distribution system. This algorithm also meets smart grid requirements in terms of an automated restoration plan for partial/full blackout of the network.
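The grey relational ranking step described in this record can be sketched in a few lines: each candidate restoration plan is scored by its grey relational grade against an ideal reference plan, and plans are ranked by grade. The criteria and their values below are hypothetical, purely for illustration; they are not taken from the paper, and the paper's fuzzy pre-processing step is omitted.

```python
import numpy as np

def grey_relational_grade(reference, alternatives, rho=0.5):
    """Grey relational grade of each alternative against a reference sequence.
    rho is the conventional distinguishing coefficient (0.5)."""
    diff = np.abs(alternatives - reference)            # deviation sequences
    dmin, dmax = diff.min(), diff.max()
    coeff = (dmin + rho * dmax) / (diff + rho * dmax)  # grey relational coefficients
    return coeff.mean(axis=1)                          # equal criterion weights

# Hypothetical normalised criteria (load restored, switching effort, losses)
# for three candidate restoration plans; 1.0 is ideal on every criterion.
ideal = np.array([1.0, 1.0, 1.0])
plans = np.array([[0.9, 0.7, 0.8],
                  [0.6, 0.9, 0.7],
                  [0.8, 0.8, 0.9]])
grades = grey_relational_grade(ideal, plans)
ranking = np.argsort(grades)[::-1]     # indices of plans, best first
```

The plan whose criteria sequence deviates least from the ideal receives the highest grade and is restored first.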
NASA Astrophysics Data System (ADS)
Eremina, G. M.; Smolin, A. Yu.; Psakhie, S. G.
2018-04-01
Mechanical properties of thin surface layers and coatings are commonly studied using instrumented indentation and scratch testing, where the mechanical response of the coating-substrate system essentially depends on the substrate material. It is quite difficult to distinguish this dependence and take it into account in full-scale experiments because of the multivariate and nonlinear character of the influence. In this study the process of instrumented indentation of a hardening coating formed on different substrates is investigated numerically by the method of movable cellular automata. As a result of the modeling, we identified the features of the substrate material's influence on the derived mechanical characteristics of the coating-substrate systems and on the processes of their deformation and fracture.
Understanding the Origin of Species with Genome-Scale Data: the Role of Gene Flow
Sousa, Vitor; Hey, Jody
2017-01-01
As it becomes easier to sequence multiple genomes from closely related species, evolutionary biologists working on speciation are struggling to get the most out of very large population-genomic data sets. Such data hold the potential to resolve evolutionary biology's long-standing questions about the role of gene exchange in species formation. In principle, the new population-genomic data can be used to disentangle the conflicting roles of natural selection and gene flow during the divergence process. However, there are great challenges in taking full advantage of such data, especially with regard to including recombination in genetic models of the divergence process. Current data, models, methods and the potential pitfalls in using them will be considered here. PMID:23657479
Real time standoff gas detection and environmental monitoring with LWIR hyperspectral imager
NASA Astrophysics Data System (ADS)
Prel, Florent; Moreau, Louis; Lavoie, Hugo; Bouffard, François; Thériault, Jean-Marc; Vallieres, Christian; Roy, Claude; Dubé, Denis
2012-10-01
MR-i is a dual-band Hyperspectral Imaging Spectro-radiometer. This field instrument generates spectral datacubes in the MWIR and LWIR. MR-i is modular and can be configured in different ways. One of its configurations is optimized for standoff measurement of gases in differential mode. In this mode, the instrument is equipped with a dual-input telescope to perform optical background subtraction. The resulting signal is the difference between the spectral radiances entering the two input ports. With that method, the signal from the background is automatically removed from the signal of the target of interest. The spectral range of this configuration extends into the VLWIR (cut-off near 14 μm) to take full advantage of the LW atmospheric window.
Techniques for using diazo materials in remote sensor data analysis
NASA Technical Reports Server (NTRS)
Whitebay, L. E.; Mount, S.
1978-01-01
The use of data derived from LANDSAT is facilitated when special products or computer-enhanced images can be analyzed. However, the facilities required to produce and analyze such products prevent many users from taking full advantage of the LANDSAT data. A simple, low-cost method is presented by which users can make their own specially enhanced composite images from the four-band black-and-white LANDSAT images by using the diazo process. The diazo process is described and a detailed procedure for making various color composites, such as color infrared, false natural color, and false color, is provided. The advantages and limitations of the diazo process are discussed. A brief discussion of the interpretation of diazo composites for land use mapping, with some typical examples, is included.
NASA Astrophysics Data System (ADS)
Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing
2016-11-01
The inspection of surface defects is an important part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging, sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract geometric size and position information of the defects with image processing such as feature recognition. However, optical distortion existing in the SDES badly affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting and bilinear interpolation techniques, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically with high accuracy. Subsequently, in order to evaluate surface defects digitally against the American military standard MIL-PRF-13830B using the defect information obtained from the SDES, a standard-based digital evaluation algorithm is proposed, which mainly includes a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which make it well suited to high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.
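The core resampling step of a polynomial distortion correction like the one this record describes can be sketched as follows: fitted polynomials map each corrected pixel back to its distorted source location, and the corrected value is read off by bilinear interpolation. The polynomials here are identity placeholders; the paper's actual polynomial coefficients, fitted from a standard lattice pattern, are not available, so this is a structural sketch only.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at fractional coordinates (x, y) by bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom

def undistort(img, poly_x, poly_y):
    """For each corrected pixel (u, v), look up the distorted source location
    (poly_x(u, v), poly_y(u, v)) given by fitted polynomials and resample."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for v in range(h):
        for u in range(w):
            xs, ys = poly_x(u, v), poly_y(u, v)
            if 0 <= xs < w and 0 <= ys < h:
                out[v, u] = bilinear_sample(img, xs, ys)
    return out

# Sanity check: identity "distortion" polynomials reproduce the image exactly
img = np.arange(16.0).reshape(4, 4)
restored = undistort(img, lambda u, v: u, lambda u, v: v)
```

In practice the per-pixel loop would be vectorized, but the scalar form makes the mapping explicit.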
Green, H A; Burd, E E; Nishioka, N S; Compton, C C
1993-08-01
Ablative lasers have been used for cutaneous surgery for greater than two decades since they can remove skin and skin lesions bloodlessly and efficiently. Because full-thickness skin wounds created after thermal laser ablation may require skin grafting in order to heal, we have examined the effect of the residual laser-induced thermal damage in the wound bed on subsequent skin graft take and healing. In a pig model, four different pulsed and continuous-wave lasers with varying wavelengths and radiant energy exposures were used to create uniform fascial graft bed thermal damage of approximately 25, 160, 470, and 1100 microns. Meshed split-thickness skin graft take and healing on the thermally damaged fascial graft beds were examined on a gross and microscopic level on days 3 and 7, and then weekly up to 42 days. Laser-induced thermal damage on the graft bed measuring greater than 160 +/- 60 microns in depth significantly decreased skin graft take. Other deleterious effects included delayed graft revascularization, increased inflammatory cell infiltrate at the graft-wound bed interface, and accelerated formation of hypertrophied fibrous tissue within the graft bed and underlying muscle. Ablative lasers developed for cutaneous surgery should create less than 160 +/- 60 microns of residual thermal damage to permit optimal skin graft take and healing. Pulsed carbon dioxide and 193-nm excimer lasers may be valuable instruments for the removal of full-thickness skin, skin lesions, and necrotic tissue, since they create wound beds with minimal thermal damage permitting graft take comparable to that achieved with standard surgical techniques.
Sahmel, Jennifer; Barlow, Christy A; Gaffney, Shannon; Avens, Heather J; Madl, Amy K; Henshaw, John; Unice, Ken; Galbraith, David; DeRose, Gretchen; Lee, Richard J; Van Orden, Drew; Sanchez, Matthew; Zock, Matthew; Paustenbach, Dennis J
2016-01-01
The potential for para-occupational, domestic, or take-home exposures from asbestos-contaminated work clothing has been acknowledged for decades, but historically has not been quantitatively well characterized. A simulation study was performed to measure airborne chrysotile concentrations associated with laundering of contaminated clothing worn during a full shift work day. Work clothing fitted onto mannequins was exposed for 6.5 h to an airborne concentration of 11.4 f/cc (PCME) of chrysotile asbestos, and was subsequently handled and shaken. Mean 5-min and 15-min concentrations during active clothes handling and shake-out were 3.2 f/cc and 2.9 f/cc, respectively (PCME). Mean airborne PCME concentrations decreased by 55% 15 min after clothes handling ceased, and by 85% after 30 min. PCM concentrations during clothes handling were 11-47% greater than PCME concentrations. Consistent with previously published data, daily mean 8-h TWA airborne concentrations for clothes-handling activity were approximately 1.0% of workplace concentrations. Similarly, weekly 40-h TWAs for clothes handling were approximately 0.20% of workplace concentrations. Estimated take-home cumulative exposure estimates for weekly clothes handling over 25-year working durations were below 1 f/cc-year for handling work clothes contaminated in an occupational environment with full shift airborne chrysotile concentrations of up to 9 f/cc (8-h TWA).
The Coast Artillery Journal. Volume 80, Number 6, November-December 1937
1937-12-01
the Navy needs. Plants manufacturing these items have been assigned exclusively to the... We have in this country ample resources for our full mili-... out the general provisions of our plan for the mobilization of manpower, and of industry, and of... Our plan of distribution takes full cognizance of both... of the Army, but also those of the... arise when it will have to be put into full effect. GUNS IN SPAIN FRANCO "THREE-HUNDRED yards beyond the next
Evaluating meta-ethnography: systematic analysis and synthesis of qualitative research.
Campbell, R; Pound, P; Morgan, M; Daker-White, G; Britten, N; Pill, R; Yardley, L; Pope, C; Donovan, J
2011-12-01
Methods for reviewing and synthesising findings from quantitative research studies in health care are well established. Although there is recognition of the need for qualitative research to be brought into the evidence base, there is no consensus about how this should be done and the methods for synthesising qualitative research are at a relatively early stage of development. To evaluate meta-ethnography as a method for synthesising qualitative research studies in health and health care. Two full syntheses of qualitative research studies were conducted between April 2002 and September 2004 using meta-ethnography: (1) studies of medicine-taking and (2) studies exploring patients' experiences of living with rheumatoid arthritis. Potentially relevant studies identified in multiple literature searches conducted in July and August 2002 (electronically and by hand) were appraised using a modified version of the Critical Appraisal Skills Programme questions for understanding qualitative research. Candidate papers were excluded on grounds of lack of relevance to the aims of the synthesis or because the work failed to employ qualitative methods of data collection and analysis. Thirty-eight studies were entered into the medicine-taking synthesis, one of which did not contribute to the final synthesis. The synthesis revealed a general caution about taking medicine, and that the practice of lay testing of medicines was widespread. People were found to take their medicine passively or actively or to reject it outright. Some, in particular clinical areas, were coerced into taking it. Those who actively accepted their medicine often modified the regimen prescribed by a doctor, without the doctor's knowledge. The synthesis concluded that people often do not take their medicines as prescribed because of concern about the medicines themselves. 'Resistance' emerged from the synthesis as a concept that best encapsulated the lay response to prescribed medicines. 
It was suggested that a policy focus should be on the problems associated with the medicines themselves and on evaluating the effectiveness of alternative treatments that some people use in preference to prescribed medicines. The synthesis of studies of lay experiences of living with rheumatoid arthritis began with 29 papers. Four could not be synthesised, leaving 25 papers (describing 22 studies) contributing to the final synthesis. Most of the papers were concerned with the everyday experience of living with rheumatoid arthritis. This synthesis did not produce significant new insights, probably because the early papers in the area were substantial and theoretically rich, and later papers were mostly confirmatory. In both topic areas, only a minority of the studies included in the syntheses were found to have referenced each other, suggesting that unnecessary replication had occurred. We only evaluated meta-ethnography as a method for synthesising qualitative research, but there are other methods being employed. Further research is required to investigate how different methods of qualitative synthesis influence the outcome of the synthesis. Meta-ethnography is an effective method for synthesising qualitative research. The process of reciprocally translating the findings from each individual study into those from all the other studies in the synthesis, if applied rigorously, ensures that qualitative data can be combined. Following this essential process, the synthesis can then be expressed as a 'line of argument' that can be presented as text and in summary tables and diagrams or models. Meta-ethnography can produce significant new insights, but not all meta-ethnographic syntheses do so. Instead, some will identify fields in which saturation has been reached and in which no theoretical development has taken place for some time. Both outcomes are helpful in either moving research forward or avoiding wasted resources. 
Meta-ethnography is a highly interpretative method requiring considerable immersion in the individual studies to achieve a synthesis. It places substantial demands upon the synthesiser and requires a high degree of qualitative research skill. Meta-ethnography has great potential as a method of synthesis in qualitative health technology assessment but it is still evolving and cannot, at present, be regarded as a standardised approach capable of application in a routinised way. Funding for this study was provided by the Health Technology Assessment programme of the National Institute for Health Research.
The four-principle formulation of common morality is at the core of bioethics mediation method.
Ahmadi Nasab Emran, Shahram
2015-08-01
Bioethics mediation is increasingly used as a method in clinical ethics cases. My goal in this paper is to examine the implicit theoretical assumptions of the bioethics mediation method developed by Dubler and Liebman. According to them, the distinguishing feature of bioethics mediation is that the method is useful in most cases of clinical ethics in which conflict is the main issue, which implies that there is either no real ethical issue or, if there is one, it is not the key to finding a resolution. I question the tacit assumption of the non-normativity of the mediation method in bioethics by examining the various senses in which bioethics mediation might be non-normative or neutral. The major normative assumption of the mediation method is the existence of a common morality. In addition, the four-principle formulation of the theory articulated by Beauchamp and Childress implicitly provides the normative content for the method. Full acknowledgement of the theoretical and normative assumptions of bioethics mediation helps clinical ethicists better understand the nature of their job. In addition, the need for a robust philosophical background, even in what appears to be a purely practical method of mediation, cannot be overemphasized. Acknowledging the normative nature of the bioethics mediation method necessitates a more critical attitude by bioethics mediators towards the norms they usually take for granted as valid.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... (c) Each chief facility operator and shift supervisor must take one of three actions: (1) Obtain a... in your State. (2) Schedule a full certification exam with the American Society of Mechanical Engineers (QRO-1-1994) (incorporated by reference in § 60.17(h)(1)). (3) Schedule a full certification exam...
Experiencing Term-Time Employment as a Non-Traditional Aged University Student: A Welsh Study
ERIC Educational Resources Information Center
Mercer, Jenny; Clay, James; Etheridge, Leanne
2016-01-01
Engaging in term-time employment appears to be becoming a common feature of contemporary UK student life. This study examined the ways in which a cohort of full-time non-traditional aged students negotiated paid employment whilst pursuing a full-time higher education course in Wales. Taking a qualitative approach to explore this further,…
Development of a Model for Human Operator Learning in Continuous Estimation and Control Tasks.
1983-12-01
and (3) a "precognitive mode" in which the pilot is able to take full advantage of any predictability inherent in the external inputs and can...allow application of a partial feedforward strategy; and (3) a "precognitive" mode in which full advantage is taken of any predictability of the
Primary Care Sports Medicine: A Full-Timer's Perspective.
ERIC Educational Resources Information Center
Moats, William E.
1988-01-01
This article describes the history and structure of a sports medicine facility, the patient care services it offers, and the types of injuries treated at the center. Opportunities and potentials for physicians who wish to enter the field of sports medicine on a full-time basis are described, as are steps to take to prepare to do so. (Author/JL)
Gourmet Lab: The Scientific Principles Behind Your Favorite Foods
ERIC Educational Resources Information Center
Young, Sarah
2011-01-01
Hands-on, inquiry-based, and relevant to every student's life, "Gourmet Lab" serves up a full menu of activities for science teachers of grades 6-12. This collection of 15 hands-on experiments--each of which includes a full set of both student and teacher pages--challenges students to take on the role of scientist and chef, as they boil,…
ERIC Educational Resources Information Center
Hamer, Jennifer; Marchioro, Kathleen
2002-01-01
Explores circumstances in which working-class and low-income custodial African American fathers (N=24) gained custody of their children and transitioned to full-time parenting. Findings suggest that these men are often reluctant to take on single, full-time parenting role. Adaptation to role seems to be enhanced by use of extended kin support…
Bayesian calibration for forensic age estimation.
Ferrante, Luigi; Skrami, Edlira; Gesuita, Rosaria; Cameriere, Roberto
2015-05-10
Forensic medicine is increasingly called upon to assess the age of individuals. Forensic age estimation is mostly required in relation to illegal immigration and the identification of bodies or skeletal remains. A variety of age estimation methods are based on dental samples and use regression models, where the age of an individual is predicted by morphological tooth changes that take place over time. From the medico-legal point of view, regression models with age as the dependent random variable entail that age tends to be overestimated in the young and underestimated in the old. To overcome this bias, we describe a new full Bayesian calibration method (asymmetric Laplace Bayesian calibration) for forensic age estimation that uses the asymmetric Laplace distribution as the probability model. The method was compared with three existing approaches (two Bayesian and a classical method) using simulated data. Although its accuracy was comparable with that of the other methods, the asymmetric Laplace Bayesian calibration appears to be significantly more reliable and robust in case of misspecification of the probability model. The proposed method was also applied to a real dataset of pulp chamber measurements of the right lower premolar, taken from x-ray scans of individuals of known age. Copyright © 2015 John Wiley & Sons, Ltd.
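The calibration idea in this record can be sketched on a grid: treat the asymmetric Laplace distribution as the likelihood of the observed tooth measurement given age, combine it with a prior over age, and read off a posterior age estimate. Everything numerical below (the linear decline of the pulp ratio with age, the observed value 0.30, the scale 0.03, the flat prior) is a hypothetical toy model, not the paper's fitted model.

```python
import numpy as np

def ald_pdf(x, mu, sigma, p):
    """Asymmetric Laplace density: location mu, scale sigma, asymmetry p in (0, 1).
    p = 0.5 gives the symmetric Laplace distribution."""
    u = (x - mu) / sigma
    return p * (1 - p) / sigma * np.exp(-u * (p - (u < 0)))

# Hypothetical calibration: assume the pulp-to-tooth ratio declines linearly
# with age, and that an individual's measured ratio is 0.30.
ages = np.linspace(10.0, 80.0, 701)          # age grid (years)
mu = 0.5 - 0.004 * ages                      # toy model prediction at each age
likelihood = ald_pdf(0.30, mu, 0.03, 0.5)    # ALD likelihood of the observation
posterior = likelihood / likelihood.sum()    # flat prior over the age grid
age_mean = float((ages * posterior).sum())   # calibrated point estimate
```

Because age is calibrated through the likelihood rather than regressed on directly, the estimate does not inherit the shrink-to-the-mean bias the abstract describes.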
Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data
Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Sung, Yunsick
2017-01-01
Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets, where the positive samples make up only the minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat algorithm, and apply them to empower the effects of the synthetic minority over-sampling technique (SMOTE) for pre-processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former methods are not scalable to larger data scales. The latter methods, which we call Adaptive Swarm Balancing Algorithms, lead to significant efficiency and effectiveness improvements on large datasets, where the former method fails. We also find them more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods lead to more credible classifier performance and shorter run times compared with the brute-force method. PMID:28753613
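The SMOTE step that the swarm algorithms tune can be sketched minimally: each synthetic minority sample is an interpolation between a real minority sample and one of its k nearest minority-class neighbours. This is the plain SMOTE idea only, not the paper's swarm-optimized or segmented variants, and the toy data below are arbitrary.

```python
import numpy as np

def smote(minority, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: create n_new synthetic minority samples by
    interpolating between a random minority sample and one of its k
    nearest minority-class neighbours."""
    rng = rng or np.random.default_rng(0)
    X = np.asarray(minority, dtype=float)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))           # pick a minority sample
        j = rng.choice(neighbours[i])      # and one of its neighbours
        gap = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(X[i] + gap * (X[j] - X[i]))
    return np.array(synthetic)

rng = np.random.default_rng(42)
minority = rng.normal(size=(12, 2))        # toy minority class
synth = smote(minority, n_new=30, rng=rng)
```

The two parameters the paper's meta-heuristics would tune correspond here to `k` and the oversampling amount `n_new`.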
Privacy-preserving record linkage on large real world datasets.
Randall, Sean M; Ferrante, Anna M; Boyd, James H; Bauer, Jacqueline K; Semmens, James B
2014-08-01
Record linkage typically involves the use of dedicated linkage units who are supplied with personally identifying information to determine individuals from within and across datasets. The personally identifying information supplied to linkage units is separated from clinical information prior to release by data custodians. While this substantially reduces the risk of disclosure of sensitive information, some residual risks still exist and remain a concern for some custodians. In this paper we trial a method of record linkage which reduces privacy risk still further on large real world administrative data. The method uses encrypted personal identifying information (bloom filters) in a probability-based linkage framework. The privacy preserving linkage method was tested on ten years of New South Wales (NSW) and Western Australian (WA) hospital admissions data, comprising in total over 26 million records. No difference in linkage quality was found when the results were compared to traditional probabilistic methods using full unencrypted personal identifiers. This presents as a possible means of reducing privacy risks related to record linkage in population level research studies. It is hoped that through adaptations of this method or similar privacy preserving methods, risks related to information disclosure can be reduced so that the benefits of linked research taking place can be fully realised. Copyright © 2013 Elsevier Inc. All rights reserved.
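The Bloom-filter encoding this record relies on can be sketched briefly: each identifier's character bigrams are hashed into a fixed-size bit set, and encoded records are compared with a Dice similarity, so linkage proceeds without exchanging the cleartext names. The parameters below (256 bits, 10 hash positions, SHA-1/MD5 double hashing) are illustrative defaults in the spirit of Schnell-style encodings, not the values used in the study.

```python
import hashlib

def bloom_encode(name, m=256, k=10):
    """Encode a name's character bigrams into an m-bit Bloom filter,
    using k double-hashed bit positions per bigram."""
    padded = f"_{name.lower()}_"                 # pad so word edges form bigrams
    bits = set()
    for bigram in {padded[i:i + 2] for i in range(len(padded) - 1)}:
        h1 = int(hashlib.sha1(bigram.encode()).hexdigest(), 16)
        h2 = int(hashlib.md5(bigram.encode()).hexdigest(), 16)
        bits.update((h1 + j * h2) % m for j in range(k))
    return bits

def dice(a, b):
    """Dice similarity of two encoded identifiers."""
    return 2 * len(a & b) / (len(a) + len(b))

# Similar spellings remain similar after encoding; unrelated names do not.
sim_close = dice(bloom_encode("smith"), bloom_encode("smyth"))
sim_far = dice(bloom_encode("smith"), bloom_encode("jones"))
```

Because near-matches in the cleartext stay near-matches in the encoded space, the similarity scores can feed a standard probabilistic linkage framework, which is how the study reports matching traditional linkage quality.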
Material selection and assembly method of battery pack for compact electric vehicle
NASA Astrophysics Data System (ADS)
Lewchalermwong, N.; Masomtob, M.; Lailuck, V.; Charoenphonphanich, C.
2018-01-01
Battery packs have become the key component in electric vehicles (EVs). Their main costs are the battery cells and the assembly processes. The cell price is set by battery manufacturers, while the assembly cost depends on the battery pack design. Battery pack designers need the overall cost to be as low as possible while still achieving high performance and safety. Material selection and assembly method, as well as component design, are very important in determining the cost-effectiveness of battery modules and battery packs. Therefore, this work presents a decision matrix that can aid the decision-making process for component materials and assembly methods in battery module and battery pack design. The aim of this study is to take advantage of incorporating architecture analysis into decision matrix methods by capturing best practices for conducting design architecture analysis, taking full account of the key design components critical to efficient and effective development of the designs. The methodology also considers the impacts of choice alternatives along multiple dimensions. Various alternatives for materials and assembly techniques of the battery pack are evaluated, and some sample costs are presented. Because the battery pack contains many components, only seven, including the positive busbar and Z busbar, are presented in this paper to illustrate the decision matrix method.
A method for measuring the inertia properties of rigid bodies
NASA Astrophysics Data System (ADS)
Gobbi, M.; Mastinu, G.; Previati, G.
2011-01-01
A method for the measurement of the inertia properties of rigid bodies is presented. Given a rigid body and its mass, the method allows the centre of gravity location and the inertia tensor to be measured (identified) during a single test. The proposed technique is based on the analysis of the free motion of a multi-cable pendulum to which the body under consideration is connected. The motion of the pendulum and the forces acting on the system are recorded, and the inertia properties are identified by means of a mathematical procedure based on least-squares estimation. After the body is positioned on the test rig, the full identification procedure takes less than 10 min. The natural frequencies of the pendulum and the accelerations involved are quite low, making this method suitable for many practical applications. In this paper, the proposed method is described and two test rigs are presented: the first developed for bodies up to 3500 kg and the second for bodies up to 400 kg. A validation of the measurement method is performed with satisfactory results. The test rig holds a third-party quality certificate according to the ISO 9001 standard and could be scaled up to measure the inertia properties of huge bodies, such as trucks, airplanes or even ships.
Extended depth measurement for a Stokes sample imaging polarimeter
NASA Astrophysics Data System (ADS)
Dixon, Alexander W.; Taberner, Andrew J.; Nash, Martyn P.; Nielsen, Poul M. F.
2018-02-01
A non-destructive imaging technique is required for quantifying the anisotropic and heterogeneous structural arrangement of collagen in soft tissue membranes, such as bovine pericardium, which are used in the construction of bioprosthetic heart valves. Previously, our group developed a Stokes imaging polarimeter that measures the linear birefringence of samples in a transmission arrangement. With this device, linear retardance and optic axis orientation can be estimated over a sample using simple vector algebra on Stokes vectors in the Poincaré sphere. However, this method is limited to a single-path retardation of a half-wave, limiting the thickness of samples that can be imaged. The polarimeter has been extended to allow illumination with narrow-bandwidth light of controllable wavelength through achromatic lenses and polarization optics. We can now take advantage of the wavelength dependence of relative retardation to remove ambiguities that arise when samples have a single-path retardation of between a half-wave and a full-wave. This effectively doubles the imaging depth of the method. The method has been validated using films of cellulose of varied thickness, and applied to samples of bovine pericardium.
Understanding GRETINA using angular correlation method
NASA Astrophysics Data System (ADS)
Austin, Madeline
2015-10-01
The ability to trace the path of gamma rays through germanium is not only necessary for taking full advantage of GRETINA but also a promising possibility for homeland security defense against nuclear threats. This research tested the current tracking algorithm using the angular correlation method by comparing results from raw and tracked data with the theoretical model for Co-60. It was found that the current tracking method is unsuccessful in reproducing the angular correlation. Variations to the tracking algorithm were made in the FM value, tracking angle, number of angles of separation observed, and window of coincidence in an attempt to improve the correlation results. From these variations it was observed that a larger FM improved results, that reducing the number of observational angles worsened the correlation, and that, overall, larger tracking angles improved with larger windows of coincidence and vice versa. Future research would be to refine the angle of measurement for raw data and to explore the possibility of an energy dependence by testing other elements. This work is supported by the United States Department of Energy, Office of Science, under Contract Number DE-AC02-06CH11357
Kriging in the Shadows: Geostatistical Interpolation for Remote Sensing
NASA Technical Reports Server (NTRS)
Rossi, Richard E.; Dungan, Jennifer L.; Beck, Louisa R.
1994-01-01
It is often useful to estimate obscured or missing remotely sensed data. Traditional interpolation methods, such as nearest-neighbor or bilinear resampling, do not take full advantage of the spatial information in the image. An alternative method, a geostatistical technique known as indicator kriging, is described and demonstrated using a Landsat Thematic Mapper image in southern Chiapas, Mexico. The image was first classified into pasture and nonpasture land cover. For each pixel that was obscured by cloud or cloud shadow, the probability that it was pasture was assigned by the algorithm. An exponential omnidirectional variogram model was used to characterize the spatial continuity of the image for use in the kriging algorithm. Assuming a cutoff probability level of 50%, the error was shown to be 17% with no obvious spatial bias but with some tendency to categorize nonpasture as pasture (overestimation). While this is a promising result, the method's practical application in other missing data problems for remotely sensed images will depend on the amount and spatial pattern of the unobscured pixels and missing pixels and the success of the spatial continuity model used.
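The indicator-kriging step can be sketched as follows. This is a minimal sketch, assuming an exponential covariance with illustrative sill and range parameters (the paper fitted an exponential omnidirectional variogram to the image itself):

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_len=10.0):
    """Exponential covariance model: C(h) = sill * exp(-h / corr_len)."""
    return sill * np.exp(-h / corr_len)

def indicator_krige(xy, ind, target, sill=1.0, corr_len=10.0):
    """Ordinary kriging of a 0/1 indicator at one target location;
    the returned value is interpreted as a class probability
    (e.g., probability that an obscured pixel is pasture)."""
    n = len(xy)
    dists = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))            # kriging system with Lagrange row
    A[:n, :n] = exp_cov(dists, sill, corr_len)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_cov(np.linalg.norm(xy - target, axis=1), sill, corr_len)
    w = np.linalg.solve(A, b)[:n]          # kriging weights
    return float(np.clip(w @ ind, 0.0, 1.0))
```

Because kriging is an exact interpolator, estimating at a known pixel reproduces its indicator value; a 50% cutoff on the estimate then yields the hard classification discussed above.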
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques take into account the mechanics of a common binarization method and are designed to be amenable to parallel implementations. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
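The core embedding idea — nudging the luminance of mask-selected pixels toward the local QR module color — can be sketched as below. This is a naive sketch, not the paper's nonlinear-programming optimization; `strength` and the hard clipping bound are illustrative stand-ins for the optimized luminance levels:

```python
import numpy as np

def embed_qr_luminance(lum, qr_modules, mask, strength=60.0):
    """Nudge masked pixels' luminance toward the QR module color.

    lum: HxW float luminance in [0, 255]; qr_modules: HxW bool (True = dark
    module); mask: HxW bool halftone-style selection of pixels to modify;
    strength: maximum luminance shift (illustrative bound, not the paper's
    optimized levels)."""
    out = lum.copy()
    target = np.where(qr_modules, 0.0, 255.0)        # dark/light module targets
    shift = np.clip(target - lum, -strength, strength)
    out[mask] = np.clip(lum[mask] + shift[mask], 0.0, 255.0)
    return out
```

The bounded shift mirrors the trade-off the paper optimizes: large enough for the binarizer in a QR reader to recover the module, small enough to limit perceptual distortion.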
NASA Astrophysics Data System (ADS)
Song, Ge; Tang, Xi; Zhu, Feng
2018-05-01
Traditional university maps, taking the campus as their principal body, mainly provide spatial localization and navigation. They do not take full advantage of cartographic capabilities such as multi-scale representation and thematic geographical information visualization, and their inherent propaganda functions have not been fully developed. We therefore took East China Normal University (ECNU), located in Shanghai, as an example, and integrated various information related to university propaganda needs (such as spatial patterns, history and culture, landscape ecology, disciplinary construction, cooperation, social services, and development plans). We adopted frontier knowledge of `information design' as well as various information graphics and visualization solutions. As a result, we designed and compiled a prototype atlas, `ECNU Impression', to provide a series of views of ECNU, practicing a new model of `narrative campus map'. This innovative propaganda product supplements typical presentations with official authority, data maturity, scientific rigor, dimensional diversity, and temporal completeness. The university atlas can become a usable medium for shaping the university's overall image.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keever, J.R.
1994-12-31
Fundamental change in K-12 science education in the United States, essential for full citizenship in an increasingly technological world, will take a decade or more to accomplish, and only the sustained, cooperative efforts of people in their own communities -- scientists, teachers, and concerned citizens -- will likely ensure success. These were among the themes at Sigma Xi's national K-12 science education forum.
Facilities | Computational Science | NREL
technology innovation by providing scientists and engineers the ability to tackle energy challenges, and by enabling them to take full advantage of advanced computing hardware and software resources
Code of Federal Regulations, 2013 CFR
2013-07-01
... Public Contracts and Property Management Federal Property Management Regulations System (Continued... of the disposition method used: (a) You must maintain property in a safe, secure, and cost-effective...
50 CFR 216.232 - Permissible methods of taking.
Code of Federal Regulations, 2010 CFR
2010-10-01
... not intentionally, take Steller sea lions by Level B harassment, take adult Pacific harbor seals by Level B harassment, and take harbor seal pups by Level B or Level A harassment or mortality, in the course of conducting missile launch activities within the area described in § 216.230(a), provided all...
Note-Taking Made Easy. The Study Smart Series.
ERIC Educational Resources Information Center
Kesselman-Turkel, Judi; Peterson, Franklynn
This book describes two successful methods of organizing notes (outlining and patterning), providing shortcuts to make note taking easy. Eight chapters include: (1) "There's No Substitute for Taking Your Own Good Notes" (e.g., note taking helps in paying attention and remembering); (2) "How to Tell What's Worth Noting" (criteria for deciding what…
Speckle noise reduction for optical coherence tomography based on adaptive 2D dictionary
NASA Astrophysics Data System (ADS)
Lv, Hongli; Fu, Shujun; Zhang, Caiming; Zhai, Lin
2018-05-01
As a high-resolution biomedical imaging modality, optical coherence tomography (OCT) is widely used in medical sciences. However, OCT images often suffer from speckle noise, which can mask some important image information, and thus reduce the accuracy of clinical diagnosis. Taking full advantage of nonlocal self-similarity and adaptive 2D-dictionary-based sparse representation, in this work, a speckle noise reduction algorithm is proposed for despeckling OCT images. To reduce speckle noise while preserving local image features, similar nonlocal patches are first extracted from the noisy image and put into groups using a gamma-distribution-based block matching method. An adaptive 2D dictionary is then learned for each patch group. Unlike traditional vector-based sparse coding, we express each image patch by the linear combination of a few matrices. This image-to-matrix method can exploit the local correlation between pixels. Since each image patch might belong to several groups, the despeckled OCT image is finally obtained by aggregating all filtered image patches. The experimental results demonstrate the superior performance of the proposed method over other state-of-the-art despeckling methods, in terms of objective metrics and visual inspection.
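The patch-grouping step can be sketched as follows, with plain Euclidean distance standing in for the paper's gamma-distribution-based matching; patch size, search window, and group size are illustrative:

```python
import numpy as np

def group_similar_patches(image, ref_top_left, patch=8, search=20, k=16):
    """Collect the k patches most similar to a reference patch within a
    local search window (Euclidean distance stands in for the paper's
    gamma-distribution-based matching)."""
    y0, x0 = ref_top_left
    ref = image[y0:y0 + patch, x0:x0 + patch]
    H, W = image.shape
    cands = []
    for y in range(max(0, y0 - search), min(H - patch, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(W - patch, x0 + search) + 1):
            p = image[y:y + patch, x:x + patch]
            cands.append((float(np.sum((p - ref) ** 2)), y, x))
    cands.sort(key=lambda t: t[0])                  # most similar first
    return np.stack([image[y:y + patch, x:x + patch]
                     for _, y, x in cands[:k]])
```

Each such group would then feed the adaptive 2D dictionary learning stage, and filtered patches are averaged back into the image.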
Naveja, J. Jesús; Medina-Franco, José L.
2017-01-01
We present a novel approach called ChemMaps for visualizing chemical space based on the similarity matrix of compound datasets generated with molecular fingerprints’ similarity. The method uses a ‘satellites’ approach, where satellites are, in principle, molecules whose similarity to the rest of the molecules in the database provides sufficient information for generating a visualization of the chemical space. Such an approach could help make chemical space visualizations more efficient. We hereby describe a proof-of-principle application of the method to various databases that have different diversity measures. Unsurprisingly, we found the method works better with databases that have low 2D diversity. 3D diversity played a secondary role, although it seems to be more relevant as 2D diversity increases. For less diverse datasets, taking as few as 25% satellites seems to be sufficient for a fair depiction of the chemical space. We propose to iteratively increase the satellites number by a factor of 5% relative to the whole database, and stop when the new and the prior chemical space correlate highly. This Research Note represents a first exploratory step, prior to the full application of this method for several datasets. PMID:28794856
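The iterative satellite-growing procedure might be sketched as below. This is a rough sketch under assumptions: here the `chemical space' compared between iterations is reduced to each molecule's mean similarity to the current satellites, and the stopping threshold is illustrative, not the authors' criterion:

```python
import numpy as np

def choose_satellites(sim_matrix, start_frac=0.25, step_frac=0.05,
                      corr_stop=0.99, seed=0):
    """Grow a random satellite set until the satellite-based similarity
    profile stabilizes (correlation between successive profiles exceeds
    corr_stop). sim_matrix is the full NxN fingerprint-similarity matrix."""
    rng = np.random.default_rng(seed)
    n = sim_matrix.shape[0]
    order = rng.permutation(n)
    k = max(2, int(start_frac * n))
    prev = sim_matrix[:, order[:k]].mean(axis=1)    # profile vs. satellites
    while k < n:
        k = min(n, k + max(1, int(step_frac * n)))
        cur = sim_matrix[:, order[:k]].mean(axis=1)
        if np.corrcoef(prev, cur)[0, 1] > corr_stop:
            break                                   # space has stabilized
        prev = cur
    return order[:k]
```

For a low-diversity dataset this tends to stop near the starting fraction, matching the observation that ~25% satellites can suffice.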
Clustering of Multi-Temporal Fully Polarimetric L-Band SAR Data for Agricultural Land Cover Mapping
NASA Astrophysics Data System (ADS)
Tamiminia, H.; Homayouni, S.; Safari, A.
2015-12-01
Recently, the unique capabilities of Polarimetric Synthetic Aperture Radar (PolSAR) sensors have made them an important and efficient tool for natural resources and environmental applications, such as land cover and crop classification. The aim of this paper is to classify multi-temporal full-polarimetric SAR data over an agricultural region using a kernel-based fuzzy C-means clustering method. This method starts by transforming the input data into a higher-dimensional space using kernel functions and then clusters them in the feature space. The feature space, due to its inherent properties, can take into account the nonlinear and complex nature of polarimetric data. Several SAR polarimetric features were extracted using target decomposition algorithms; features from the Cloude-Pottier, Freeman-Durden and Yamaguchi algorithms were used as inputs for the clustering. This method was applied to multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Canada, during June and July 2012. The results demonstrate the efficiency of this approach with respect to classical methods. In addition, using multi-temporal data in the clustering process helped to investigate the phenological cycle of plants and significantly improved the performance of agricultural land cover mapping.
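The clustering stage can be sketched as a kernel fuzzy C-means with a Gaussian kernel. Feature extraction from the target decompositions is assumed already done; `sigma`, `m`, and the initialization are illustrative choices:

```python
import numpy as np

def kernel_fcm(X, c=2, m=2.0, sigma=1.0, iters=50):
    """Kernel fuzzy C-means with a Gaussian kernel.
    X: (n, d) feature vectors. Returns memberships U (n, c) and centers V."""
    n = X.shape[0]
    V = X[np.linspace(0, n - 1, c).astype(int)].astype(float)  # spread-out init
    for _ in range(iters):
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
                   / (2 * sigma ** 2))
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)    # squared feature-space dist
        U = (1.0 / d2) ** (1.0 / (m - 1.0))        # fuzzy membership update
        U /= U.sum(axis=1, keepdims=True)
        W = (U ** m) * K                           # kernel-weighted update
        V = (W.T @ X) / np.maximum(W.sum(axis=0)[:, None], 1e-12)
    return U, V
```

For a Gaussian kernel, the feature-space distance reduces to 2(1 − K(x, v)), which is what `d2` computes; that substitution is what lets the nonlinear structure of polarimetric features be handled without an explicit mapping.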
Reduction in training time of a deep learning model in detection of lesions in CT
NASA Astrophysics Data System (ADS)
Makkinejad, Nazanin; Tajbakhsh, Nima; Zarshenas, Amin; Khokhar, Ashfaq; Suzuki, Kenji
2018-02-01
Deep learning (DL) has emerged as a powerful tool for object detection and classification in medical images. Building a well-performing DL model, however, requires a huge number of images for training, and it takes days to train a DL model even on a cutting-edge high-performance computing platform. This study is aimed at developing a method for selecting a "small" number of representative samples from a large collection of training samples to train a DL model that could be used to detect polyps in CT colonography (CTC), without compromising the classification performance. Our proposed method for representative sample selection (RSS) consists of a K-means clustering algorithm. For the performance evaluation, we applied the proposed method to select samples for the training of a massive-training artificial neural network based DL model, to be used for the classification of polyps and non-polyps in CTC. Our results show that the proposed method reduces the training time by a factor of 15, while maintaining classification performance equivalent to the model trained using the full training set. We compared performance using the area under the receiver-operating-characteristic curve (AUC).
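One plausible reading of the RSS idea — cluster the training pool with K-means and keep the sample nearest each centroid — can be sketched as follows; the details (initialization, one representative per cluster) are assumptions, not the authors' exact procedure:

```python
import numpy as np

def select_representatives(X, k, iters=100):
    """Pick representative samples: run K-means on feature vectors X (n, d)
    and keep, from each cluster, the sample nearest its centroid."""
    n = X.shape[0]
    C = X[np.linspace(0, n - 1, k).astype(int)].astype(float)  # init centers
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        lab = d.argmin(axis=1)
        newC = np.array([X[lab == j].mean(axis=0) if np.any(lab == j) else C[j]
                         for j in range(k)])
        if np.allclose(newC, C):
            break                                  # converged
        C = newC
    d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    lab = d.argmin(axis=1)
    # nearest-to-centroid sample index per non-empty cluster
    reps = [int(np.argmin(np.where(lab == j, d[:, j], np.inf)))
            for j in range(k) if np.any(lab == j)]
    return sorted(set(reps))
```

Training on only these cluster representatives is what yields the reported reduction in training time while preserving coverage of the sample distribution.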
ERIC Educational Resources Information Center
Hales, Patrick Dean
2016-01-01
Mixed methods research becomes more utilized in education research every year. As this pluralist paradigm begins to take hold, it becomes more and more necessary to take a critical eye to studies making use of different mixed methods approaches. An area of education research that has so far struggled to find a foothold with mixed methodology is…
2013-05-20
NASA's Cassini spacecraft takes full advantage of the sunlight to capture these amazing views of the north polar hexagon and the myriad storms, large and small, that comprise the weather systems in Saturn's polar region.
50 CFR 600.1002 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... moratorium on new entrants, restrictions on vessel upgrades, and other effort control measures, taking into account the full potential fishing capacity of the fleet; (2) Establish a specified or target total...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krotee, Pascal; Rodriguez, Jose A.; Sawaya, Michael R.
2017-01-03
hIAPP fibrils are associated with Type-II Diabetes, but the link of hIAPP structure to islet cell death remains elusive. Here we observe that hIAPP fibrils are cytotoxic to cultured pancreatic β-cells, leading us to determine the structure and cytotoxicity of protein segments composing the amyloid spine of hIAPP. Using the cryoEM method MicroED, we discover that one segment, 19–29 S20G, forms pairs of β-sheets mated by a dry interface that share structural features with, and are similarly cytotoxic to, full-length hIAPP fibrils. In contrast, a second segment, 15–25 WT, forms non-toxic labile β-sheets. These segments possess different structures and cytotoxic effects; however, both can seed full-length hIAPP and cause hIAPP to take on the cytotoxic and structural features of that segment. These results suggest that protein segment structures represent polymorphs of their parent protein and that segment 19–29 S20G may serve as a model for the toxic spine of hIAPP.
Sales, C; Cervera, M I; Gil, R; Portolés, T; Pitarch, E; Beltran, J
2017-02-01
The novel atmospheric pressure chemical ionization (APCI) source has been used in combination with gas chromatography (GC) coupled to hybrid quadrupole time-of-flight (QTOF) mass spectrometry (MS) for determination of volatile components of olive oil, enhancing its potential for classification of olive oil samples according to their quality using a metabolomics-based approach. Full-spectrum acquisition has allowed the detection of volatile organic compounds (VOCs) in olive oil samples of Extra Virgin, Virgin and Lampante qualities. A dynamic headspace extraction with cartridge solvent elution was applied. The metabolomics strategy consisted of three steps: a full mass spectral alignment of GC-MS data using MzMine 2.0, a multivariate analysis using Ez-Info, and the creation of a statistical model with combinations of responses for molecular fragments. The model was finally validated using blind samples, obtaining an accuracy in oil classification of 70%, taking the officially established "PANEL TEST" method as reference.
Nonadiabatic effects in electronic and nuclear dynamics
Bircher, Martin P.; Liberatore, Elisa; Browning, Nicholas J.; Brickel, Sebastian; Hofmann, Cornelia; Patoz, Aurélien; Unke, Oliver T.; Zimmermann, Tomáš; Chergui, Majed; Hamm, Peter; Keller, Ursula; Meuwly, Markus; Woerner, Hans-Jakob; Vaníček, Jiří; Rothlisberger, Ursula
2018-01-01
Due to their very nature, ultrafast phenomena are often accompanied by the occurrence of nonadiabatic effects. From a theoretical perspective, the treatment of nonadiabatic processes makes it necessary to go beyond the (quasi) static picture provided by the time-independent Schrödinger equation within the Born-Oppenheimer approximation and to find ways to tackle instead the full time-dependent electronic and nuclear quantum problem. In this review, we give an overview of different nonadiabatic processes that manifest themselves in electronic and nuclear dynamics ranging from the nonadiabatic phenomena taking place during tunnel ionization of atoms in strong laser fields to the radiationless relaxation through conical intersections and the nonadiabatic coupling of vibrational modes and discuss the computational approaches that have been developed to describe such phenomena. These methods range from the full solution of the combined nuclear-electronic quantum problem to a hierarchy of semiclassical approaches and even purely classical frameworks. The power of these simulation tools is illustrated by representative applications and the direct confrontation with experimental measurements performed in the National Centre of Competence for Molecular Ultrafast Science and Technology. PMID:29376108
Numerical Simulations of Self-Focused Pulses Using the Nonlinear Maxwell Equations
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.; Silberberg, Yaron; Kwak, Dochan (Technical Monitor)
1994-01-01
This paper will present results in computational nonlinear optics. An algorithm will be described that solves the full vector nonlinear Maxwell's equations exactly, without the approximations that are currently made. Present methods solve a reduced scalar wave equation, namely the nonlinear Schrödinger equation, and neglect the optical carrier. Results will also be shown of calculations of 2-D electromagnetic nonlinear waves computed by directly integrating the nonlinear vector Maxwell's equations in time. The results will include simulations of 'light bullet'-like pulses, in which diffraction and dispersion are counteracted by nonlinear effects. The time integration efficiently implements linear and nonlinear convolutions for the electric polarization, and can take into account such quantum effects as Kerr and Raman interactions. The present approach is robust and should permit modeling 2-D and 3-D optical soliton propagation, scattering, and switching directly from the full-vector Maxwell's equations. Abstract of a proposed paper for presentation at the meeting NONLINEAR OPTICS: Materials, Fundamentals, and Applications, Hyatt Regency Waikaloa, Waikaloa, Hawaii, July 24-29, 1994, cosponsored by the IEEE/Lasers and Electro-Optics Society and the Optical Society of America.
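The direct time integration described here can be illustrated with a toy 1-D FDTD update that includes an instantaneous Kerr term, D = ε0(E + χ3E³). The normalized units, soft Gaussian source, and single Newton inversion step are all illustrative simplifications of the paper's full-vector scheme:

```python
import numpy as np

def fdtd_kerr_1d(steps=200, nz=400, chi3=0.0, src_pos=50):
    """1D FDTD for Maxwell's equations with an instantaneous Kerr term,
    integrating the fields (including the optical carrier) directly in
    time rather than a nonlinear Schrödinger envelope."""
    Ez = np.zeros(nz)
    Hy = np.zeros(nz)
    Dz = np.zeros(nz)
    for t in range(steps):
        Hy[:-1] += 0.5 * (Ez[1:] - Ez[:-1])           # Courant number 0.5
        Dz[1:] += 0.5 * (Hy[1:] - Hy[:-1])
        Dz[src_pos] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft source
        # invert D = E + chi3*E^3 with one Newton step from the linear guess
        E = Dz.copy()
        E -= (E + chi3 * E ** 3 - Dz) / (1.0 + 3.0 * chi3 * E ** 2)
        Ez = E
    return Ez
```

Even this scalar toy shows the structural point of the abstract: the nonlinearity enters through the constitutive relation at every time step, not through an envelope approximation.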
Aircraft High-Lift Aerodynamic Analysis Using a Surface-Vorticity Solver
NASA Technical Reports Server (NTRS)
Olson, Erik D.; Albertson, Cindy W.
2016-01-01
This study extends an existing semi-empirical approach to high-lift analysis by examining its effectiveness for use with a three-dimensional aerodynamic analysis method. The aircraft high-lift geometry is modeled in Vehicle Sketch Pad (OpenVSP) using a newly-developed set of techniques for building a three-dimensional model of the high-lift geometry, and for controlling flap deflections using scripted parameter linking. Analysis of the low-speed aerodynamics is performed in FlightStream, a novel surface-vorticity solver that is expected to be substantially more robust and stable compared to pressure-based potential-flow solvers and less sensitive to surface perturbations. The calculated lift curve and drag polar are modified by an empirical lift-effectiveness factor that takes into account the effects of viscosity that are not captured in the potential-flow solution. Analysis results are validated against wind-tunnel data for The Energy-Efficient Transport AR12 low-speed wind-tunnel model, a 12-foot, full-span aircraft configuration with a supercritical wing, full-span slats, and part-span double-slotted flaps.
gpuPOM: a GPU-based Princeton Ocean Model
NASA Astrophysics Data System (ADS)
Xu, S.; Huang, X.; Zhang, Y.; Fu, H.; Oey, L.-Y.; Xu, F.; Yang, G.
2014-11-01
Rapid advances in the performance of the graphics processing unit (GPU) have made the GPU a compelling solution for a series of scientific applications. However, most existing GPU acceleration work for climate models ports only certain hot spots of the code, and can achieve only limited speedup for the entire model. In this work, we take the mpiPOM (a parallel version of the Princeton Ocean Model) as our starting point and design and implement a GPU-based Princeton Ocean Model. By carefully considering the architectural features of state-of-the-art GPU devices, we rewrite the full mpiPOM model from the original Fortran version into a new Compute Unified Device Architecture C (CUDA-C) version. We apply several acceleration methods to further improve the performance of gpuPOM, including optimizing memory access on a single GPU, overlapping communication and boundary operations among multiple GPUs, and overlapping input/output (I/O) between the hybrid Central Processing Unit (CPU) and the GPU. Our experimental results indicate that the performance of the gpuPOM on a workstation containing 4 GPUs is comparable to that of a powerful cluster with 408 CPU cores, while reducing energy consumption by a factor of 6.8.
NASA Technical Reports Server (NTRS)
Kobrick, Ryan L.; Klaus, David M.; Street, Kenneth W., Jr.
2010-01-01
A limitation has been identified in the existing test standards used for making controlled, two-body abrasion scratch measurements based solely on the width of the resultant score on the surface of the material. A new, more robust method is proposed for analyzing a surface scratch that takes into account the full three-dimensional profile of the displaced material. To accomplish this, a set of four volume displacement metrics are systematically defined by normalizing the overall surface profile to statistically denote the area of relevance, termed the Zone of Interaction (ZOI). From this baseline, depth of the trough and height of the ploughed material are factored into the overall deformation assessment. Proof-of-concept data were collected and analyzed to demonstrate the performance of this proposed methodology. This technique takes advantage of advanced imaging capabilities that now allow resolution of the scratched surface to be quantified in greater detail than was previously achievable. A quantified understanding of fundamental particle-material interaction is critical to anticipating how well components can withstand prolonged use in highly abrasive environments, specifically for our intended applications on the surface of the Moon and other planets or asteroids, as well as in similarly demanding, harsh terrestrial settings.
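A minimal version of such volume-displacement metrics — splitting a baseline-corrected height map into trough and pile-up volumes — might look like this; the metric names and simple rectangular integration are illustrative, not the four ZOI-normalized metrics defined in the paper:

```python
import numpy as np

def scratch_volume_metrics(profile, dx=1.0, dy=1.0):
    """Split a height map (baseline-corrected so the undisturbed surface
    is ~0) into trough and pile-up volumes, the kind of 3D displacement
    quantities the text describes (names illustrative)."""
    below = np.where(profile < 0, -profile, 0.0)   # material removed
    above = np.where(profile > 0, profile, 0.0)    # material ploughed up
    trough_volume = below.sum() * dx * dy
    pileup_volume = above.sum() * dx * dy
    return {"trough_volume": trough_volume,
            "pileup_volume": pileup_volume,
            "net_displaced": pileup_volume - trough_volume,
            "max_depth": float(profile.min()),
            "max_height": float(profile.max())}
```

Unlike a width-only score measurement, these quantities capture both the removed and the ploughed material of the full 3D profile.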
Pernía Leal, M; Assali, M; Cid, J J; Valdivia, V; Franco, J M; Fernández, I; Pozo, D; Khiar, N
2015-12-07
To take full advantage of the remarkable applications of carbon nanotubes in different fields, there is a need to develop effective methods to improve their water dispersion and biocompatibility while maintaining their physical properties. In this sense, current approaches suffer from serious drawbacks such as loss of electronic structure together with low surface coverage in the case of covalent functionalizations, or instability of the dynamic hybrids obtained by non-covalent functionalizations. In the present work, we examined the molecular basis of an original strategy that combines the advantages of both functionalizations without their main drawbacks. The hierarchical self-assembly of diacetylenic-based neoglycolipids into highly organized and compacted rings around the nanotubes, followed by photopolymerization leads to the formation of nanotubes covered with glyconanorings with a shish kebab-type topology exposing the carbohydrate ligands to the water phase in a multivalent fashion. The glyconanotubes obtained are fully functional, and able to establish specific interactions with their cognate receptors. In fact, by taking advantage of this selective binding, an easy method to sense lectins as a working model of toxin detection was developed based on a simple analysis of TEM images. Remarkably, different experimental settings to assess cell membrane integrity, cell growth kinetics and cell cycle demonstrated the cellular biocompatibility of the sugar-coated carbon nanotubes compared to pristine single-walled carbon nanotubes.
TakeCARE, a Video to Promote Bystander Behavior on College Campuses: Replication and Extension.
Jouriles, Ernest N; Sargent, Kelli S; Salis, Katie Lee; Caiozzo, Christina; Rosenfield, David; Cascardi, Michele; Grych, John H; O'Leary, K Daniel; McDonald, Renee
2017-08-01
Previous research has demonstrated that college students who view TakeCARE, a video bystander program designed to encourage students to take action to prevent sexual and relationship violence (i.e., bystander behavior), display more bystander behavior relative to students who view a control video. The current study aimed to replicate and extend these findings by testing two different methods of administering TakeCARE and examining moderators of TakeCARE's effects on bystander behavior. Students at four universities (n = 557) were randomly assigned to one of three conditions: (a) view TakeCARE in a monitored computer lab, (b) view TakeCARE at their own convenience after receiving an email link to the video, or (c) view a video about study skills (control group). Participants completed measures of bystander behavior at baseline and at a 1-month follow-up. Participants in both TakeCARE conditions reported more bystander behavior at follow-up assessments, compared with participants in the control condition. The beneficial effect of TakeCARE did not differ significantly across administration methods. However, the effects of TakeCARE on bystander behavior were moderated by students' perceptions of campus responsiveness to sexual violence, with more potent effects when students perceived their institution as responsive to reports of sexual violence.
Best Practices Guide for PPGs and the States
The guide is designed to help the U.S. Environmental Protection Agency (EPA) and state officials understand and take full advantage of the features and benefits of Performance Partnership Grants (PPGs).
Design and simulation of GaN based Schottky betavoltaic nuclear micro-battery.
San, Haisheng; Yao, Shulin; Wang, Xiang; Cheng, Zaijun; Chen, Xuyuan
2013-10-01
The current paper presents a theoretical analysis of a Ni-63 nuclear micro-battery based on a wide-band-gap semiconductor GaN thin film covered with thin Ni/Au films to form a Schottky barrier for carrier separation. The total energy deposition in GaN was calculated using Monte Carlo methods, taking into account the full beta spectral energy, which provided an optimal design for the Schottky barrier width. The calculated results show that an 8 μm thick Schottky barrier can collect about 95% of the incident beta particle energy. Considering the actual limitations of current GaN growth techniques, an Fe-doped compensation technique by the MOCVD method can be used to realize n-type GaN with a carrier concentration of 1×10^15 cm^-3, with which a GaN-based Schottky betavoltaic micro-battery can achieve an energy conversion efficiency of 2.25% based on theoretical calculations of semiconductor device physics.
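The spirit of the full-spectrum Monte Carlo calculation can be illustrated with a toy model: sample beta energies from a crude allowed-shape Ni-63 spectrum and attenuate each exponentially with an energy-dependent range. The spectrum shape, range law, and constants are illustrative assumptions, not the paper's transport calculation:

```python
import numpy as np

def deposited_fraction(depth_um, mean_range_um=2.0, n=200_000, seed=0):
    """Toy Monte Carlo estimate of the fraction of emitted beta energy
    deposited within a given depth of material."""
    rng = np.random.default_rng(seed)
    Emax = 66.9                                    # keV, Ni-63 endpoint
    # crude allowed-shape spectrum weight: p(E) ~ sqrt(E) * (Emax - E)^2
    E = rng.uniform(0.0, Emax, n)
    w = np.sqrt(E) * (Emax - E) ** 2
    w /= w.sum()
    # assume absorption length grows with energy (illustrative power law,
    # anchored near the ~17.4 keV mean beta energy)
    lam = mean_range_um * (E / 17.4) ** 1.5
    absorbed = 1.0 - np.exp(-depth_um / np.maximum(lam, 1e-9))
    return float((w * E * absorbed).sum() / (w * E).sum())
```

Sweeping `depth_um` in such a model is the kind of calculation that motivates choosing a barrier just thick enough to capture most of the spectrum, as the ~8 μm result above illustrates.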
PyDREAM: high-dimensional parameter inference for biological models in python.
Shockley, Erin M; Vrugt, Jasper A; Lopez, Carlos F; Valencia, Alfonso
2018-02-15
Biological models contain many parameters whose values are difficult to measure directly via experimentation and therefore require calibration against experimental data. Markov chain Monte Carlo (MCMC) methods are suitable to estimate multivariate posterior model parameter distributions, but these methods may exhibit slow or premature convergence in high-dimensional search spaces. Here, we present PyDREAM, a Python implementation of the (Multiple-Try) Differential Evolution Adaptive Metropolis [DREAM(ZS)] algorithm developed by Vrugt and ter Braak (2008) and Laloy and Vrugt (2012). PyDREAM achieves excellent performance for complex, parameter-rich models and takes full advantage of distributed computing resources, facilitating parameter inference and uncertainty estimation of CPU-intensive biological models. PyDREAM is freely available under the GNU GPLv3 license from the Lopez lab GitHub repository at http://github.com/LoLab-VU/PyDREAM. c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online.
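The kind of MCMC calibration PyDREAM performs can be illustrated with a plain random-walk Metropolis sampler — a far simpler relative of the DREAM(ZS) algorithm; the decay-rate model and noise level in the usage example are invented for illustration:

```python
import numpy as np

def metropolis(log_post, x0, steps=5000, scale=0.05, seed=0):
    """Random-walk Metropolis sampling of a log-posterior; returns the
    chain of visited parameter vectors."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(steps):
        prop = x + scale * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# usage: recover the decay rate k of y = exp(-k t) from noisy observations
t = np.linspace(0.0, 5.0, 20)
y = np.exp(-0.7 * t) + 0.05 * np.random.default_rng(1).standard_normal(20)
log_post = lambda th: -np.sum((y - np.exp(-th[0] * t)) ** 2) / (2 * 0.05 ** 2)
chain = metropolis(log_post, np.array([1.0]))
```

The chain's post-burn-in samples approximate the posterior over the parameter; DREAM(ZS) improves on this basic scheme with multiple chains and differential-evolution proposals suited to high-dimensional spaces.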
Enabling scientific workflows in virtual reality
Kreylos, O.; Bawden, G.; Bernardin, T.; Billen, M.I.; Cowgill, E.S.; Gold, R.D.; Hamann, B.; Jadamec, M.; Kellogg, L.H.; Staadt, O.G.; Sumner, D.Y.
2006-01-01
To advance research and improve the scientific return on data collection and interpretation efforts in the geosciences, we have developed methods of interactive visualization, with a special focus on immersive virtual reality (VR) environments. Earth sciences employ a strongly visual approach to the measurement and analysis of geologic data due to the spatial and temporal scales over which such data ranges, As observations and simulations increase in size and complexity, the Earth sciences are challenged to manage and interpret increasing amounts of data. Reaping the full intellectual benefits of immersive VR requires us to tailor exploratory approaches to scientific problems. These applications build on the visualization method's strengths, using both 3D perception and interaction with data and models, to take advantage of the skills and training of the geological scientists exploring their data in the VR environment. This interactive approach has enabled us to develop a suite of tools that are adaptable to a range of problems in the geosciences and beyond. Copyright ?? 2008 by the Association for Computing Machinery, Inc.
A modified belief entropy in Dempster-Shafer framework.
Zhou, Deyun; Tang, Yongchuan; Jiang, Wen
2017-01-01
How to quantify the uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, the existing studies mainly focus on the mass function itself, and the available information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods are not fully efficient. In this paper, a modified belief entropy is proposed by considering the scale of the FOD and the relative scale of a focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. What's more, with less information loss, the new measure can overcome the shortage of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method.
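For context, the Deng entropy the new measure builds on can be computed as below; the modified entropy adds a further FOD-dependent factor whose exact form is given in the paper and is not reproduced here:

```python
import math

def deng_entropy(masses):
    """Deng entropy of a mass function:
    E_d(m) = -sum_A m(A) * log2(m(A) / (2^|A| - 1)).
    masses: dict mapping frozenset focal elements to their mass."""
    return -sum(m * math.log2(m / (2 ** len(A) - 1))
                for A, m in masses.items() if m > 0)
```

When all focal elements are singletons, the 2^|A| − 1 term equals 1 and the measure reduces to Shannon entropy, which is the probability-consistency property the abstract mentions.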
Check, J H
2012-01-01
To present reasons for luteal phase deficiency when undergoing controlled ovarian hyperstimulation (COH) for the purpose of inducing multiple oocytes for in vitro fertilization (IVF), and to suggest strategies to overcome the defect. Treatment options presented include luteal phase support with human chorionic gonadotropin (hCG) injection, progesterone, estradiol, gonadotropin releasing hormone agonists, cytokines (e.g., granulocyte colony stimulating factor), and lymphocyte immunotherapy. hCG and progesterone produce the best results; the two are comparable, with at best a slight edge for hCG, but hCG is associated with too high a risk of ovarian hyperstimulation syndrome. Vaginal progesterone is the most efficacious with the fewest side-effects. Better methods are needed to adequately assess full correction of the luteal phase defect. In some cases the luteal phase defect associated with COH is not correctable, and FSH stimulation should be reduced, or all embryos frozen and transfer deferred to an artificial estrogen-progesterone or natural cycle.
NASA Astrophysics Data System (ADS)
Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong
2017-12-01
Active contour models (ACMs) have been among the most widely used methods in magnetic resonance (MR) brain image segmentation because of their ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation is extracted from only one slice of the MR brain image, which cannot take full advantage of the information in adjacent slices and cannot support local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem; it is based on a multivariate local Gaussian distribution and combines information from adjacent slices in the MR brain image data. The segmentation is finally achieved by maximizing the likelihood estimate. Experiments demonstrate the advantages of the proposed ACM over single-slice ACMs in local segmentation of MR brain image series.
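A minimal sketch of the likelihood criterion (not the full contour evolution): each voxel's feature vector stacks intensities from adjacent slices, and candidate regions compete via multivariate Gaussian log-likelihoods. The means, covariance, and feature construction below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def region_loglik(x, mu, cov):
    # multivariate Gaussian log-density of a feature vector x for one region
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet
                   + d @ np.linalg.inv(cov) @ d)

# assumed toy setup: a voxel's feature stacks intensities from slices k-1, k, k+1
fg_mu = np.array([0.8, 0.9, 0.85])   # foreground (tissue) mean, illustrative
bg_mu = np.array([0.2, 0.15, 0.2])   # background mean, illustrative
cov = 0.01 * np.eye(3)               # shared covariance, illustrative
x = np.array([0.75, 0.85, 0.8])      # observed feature vector for one voxel
label = 'fg' if region_loglik(x, fg_mu, cov) > region_loglik(x, bg_mu, cov) else 'bg'
```

In the paper the same kind of likelihood is maximized over an evolving contour; here a single voxel is classified to show why the cross-slice feature vector helps.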
Short-Range Vital Signs Sensing Based on EEMD and CWT Using IR-UWB Radar.
Hu, Xikun; Jin, Tian
2016-11-30
The radar sensor described here realizes healthcare monitoring capable of detecting subject chest-wall movement caused by cardiopulmonary activities and of wirelessly estimating the respiration and heartbeat rates of the subject without attaching any devices to the body. Conventional single-tone Doppler radar captures only Doppler signatures, because a noncontact sensor of that kind lacks bandwidth information. In contrast, we take full advantage of impulse radio ultra-wideband (IR-UWB) radar to achieve low power consumption and convenient portability, with a flexible detection range and desirable accuracy. A noise reduction method based on improved ensemble empirical mode decomposition (EEMD) and a vital-sign separation method based on the continuous wavelet transform (CWT) are proposed jointly to improve the signal-to-noise ratio (SNR) in order to acquire accurate respiration and heartbeat rates. Experimental results illustrate that respiration and heartbeat signals can be extracted accurately under different conditions. This noncontact healthcare sensor system demonstrates the commercial feasibility and considerable accessibility of using compact IR-UWB radar for emerging biomedical applications.
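As a simplified stand-in for the EEMD + CWT pipeline, the band-limited spectral-peak search below illustrates why respiration and heartbeat can be separated at all: they occupy distinct frequency bands. The band limits, sampling rate, and signal model are assumptions for illustration, not the paper's processing chain:

```python
import numpy as np

def band_peak_hz(sig, fs, lo, hi):
    # dominant frequency within [lo, hi] Hz of the magnitude spectrum
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    spec = np.abs(np.fft.rfft(sig))
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spec[band])]

fs = 20.0                                  # assumed slow-time sampling rate
t = np.arange(0, 60, 1.0 / fs)             # 60 s observation window
# synthetic chest-wall signal: 0.25 Hz respiration plus weaker 1.2 Hz heartbeat
sig = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
resp_hz = band_peak_hz(sig, fs, 0.1, 0.5)   # x60 gives breaths per minute
heart_hz = band_peak_hz(sig, fs, 0.8, 2.0)  # x60 gives beats per minute
```

The CWT in the paper plays a similar band-separating role but with time-localized scales, which tolerates non-stationary rates better than a single global spectrum.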
Biological Response to the Dynamic Spectral-Polarized Underwater Light Field
2012-09-30
deployment of a comprehensive optical suite including underwater video-polarimetry (full Stokes vector video-imaging camera custom-built Cummings; and...During field operations, we couple polarimetry measurements of live, free-swimming animals in their environments with a full suite of optical...Seibel, Ahmed). We also restrain live, awake animals to take polarimetry measurements (in the field and laboratory) under a complete set of
Gaze-Following and Awareness of Visual Perspective in Chimpanzees
2009-07-01
visual search, provides an important developmental step towards the development of theory-of-mind (Baron-Cohen, 1995; Butterworth, 1991...conditions most children will eventually develop a full theory of mind and have full visual perspective taking (Corkum & Moore, 1995, 1998; Moll...J., Bothell, D., Byrne, M., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111(4), 1036-1060
NASA Astrophysics Data System (ADS)
Brossier, Romain; Zhou, Wei; Operto, Stéphane; Virieux, Jean
2015-04-01
Full Waveform Inversion (FWI) is an appealing method for quantitative high-resolution subsurface imaging (Virieux et al., 2009). For crustal-scale exploration from surface seismic data, FWI generally succeeds in recovering a broad band of wavenumbers in the shallow part of the targeted medium, taking advantage of the broad range of scattering angles provided by both reflected and diving waves. In contrast, deeper targets are often illuminated only by short-spread reflections, which favor the reconstruction of the short wavelengths at the expense of the longer ones, leading to a possible notch in the intermediate part of the wavenumber spectrum. To update the velocity macromodel from reflection data, image-domain strategies (e.g., Symes & Carazzone, 1991) aim to maximize a semblance criterion in the migrated domain. Alternatively, recent data-domain strategies (e.g., Xu et al., 2012; Ma & Hale, 2013; Brossier et al., 2014), called Reflection FWI (RFWI) and inspired by Chavent et al. (1994), rely on a scale separation between the velocity macromodel and prior knowledge of the reflectivity to emphasize the transmission regime in the sensitivity kernel of the inversion. However, all these strategies focus on reflected waves only, discarding the low-wavenumber information carried by diving waves. With the current development of very long-offset and wide-azimuth acquisitions, a significant part of the recorded energy is provided by diving waves and subcritical reflections, and high-resolution tomographic methods should take advantage of all types of waves. In this presentation, we first review the issues of classical FWI when applied to reflected waves and how RFWI is able to retrieve the long wavelengths of the model. We then propose a unified formulation of FWI (Zhou et al., 2014) to update the low wavenumbers of the velocity model by the joint inversion of diving and reflected arrivals, while the impedance model is updated from reflected waves only.
An alternating inversion of the high-wavenumber impedance model and the low-wavenumber velocity model is performed to iteratively improve the subsurface models. References: Brossier, R., Operto, S. & Virieux, J., 2014. Velocity model building from seismic reflection data by full waveform inversion, Geophysical Prospecting, doi:10.1111/1365-2478.12190. Chavent, G., Clément, F. & Gomez, S., 1994. Automatic determination of velocities via migration-based traveltime waveform inversion: A synthetic data example, SEG Technical Program Expanded Abstracts 1994, pp. 1179-1182. Ma, Y. & Hale, D., 2013. Wave-equation reflection traveltime inversion with dynamic warping and full waveform inversion, Geophysics, 78(6), R223-R233. Symes, W.W. & Carazzone, J.J., 1991. Velocity inversion by differential semblance optimization, Geophysics, 56, 654-663. Virieux, J. & Operto, S., 2009. An overview of full waveform inversion in exploration geophysics, Geophysics, 74(6), WCC1-WCC26. Xu, S., Wang, D., Chen, F., Lambaré, G. & Zhang, Y., 2012. Inversion on reflected seismic wave, SEG Technical Program Expanded Abstracts 2012, pp. 1-7. Zhou, W., Brossier, R., Operto, S. & Virieux, J., 2014. Acoustic multiparameter full-waveform inversion through a hierarchical scheme, SEG Technical Program Expanded Abstracts 2014, pp. 1249-1253.
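For context, classical FWI minimizes a least-squares misfit between observed and modeled data; the formulation below is the standard one (cf. Virieux & Operto, 2009), stated here for reference rather than as the unified scheme of this abstract:

```latex
% classical FWI least-squares misfit, summed over sources s
J(m) = \frac{1}{2} \sum_{s} \int_{0}^{T}
       \big\| R\, u_s(m,t) - d_s(t) \big\|^{2} \, dt
% u_s: forward wavefield for model m; R: restriction to receiver positions;
% d_s: observed data. The gradient follows from the adjoint state: the
% zero-lag correlation of the forward wavefield with the back-propagated
% residuals. RFWI modifies this sensitivity kernel using a prior reflectivity
% so that transmission (tomographic) wavepaths dominate the velocity update.
```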
Verification of Ceramic Structures
NASA Astrophysics Data System (ADS)
Behar-Lafenetre, Stephanie; Cornillon, Laurence; Rancurel, Michael; De Graaf, Dennis; Hartmann, Peter; Coe, Graham; Laine, Benoit
2012-07-01
In the framework of the “Mechanical Design and Verification Methodologies for Ceramic Structures” contract [1] awarded by ESA, Thales Alenia Space has investigated literature and practices in affiliated industries to propose a methodological guideline for verification of ceramic spacecraft and instrument structures. It has been written in order to be applicable to most types of ceramic or glass-ceramic materials - typically Cesic®, HBCesic®, Silicon Nitride, Silicon Carbide and ZERODUR®. The proposed guideline describes the activities to be performed at material level in order to cover all the specific aspects of ceramics (Weibull distribution, brittle behaviour, sub-critical crack growth). Elementary tests and their post-processing methods are described, and recommendations for optimization of the test plan are given in order to have a consistent database. The application of this method is shown on an example in a dedicated article [7]. Then the verification activities to be performed at system level are described. This includes classical verification activities based on the relevant standard (ECSS Verification [4]), plus specific analytical, testing and inspection features. The analysis methodology takes into account the specific behaviour of ceramic materials, especially the statistical distribution of failures (Weibull) and the method to transfer it from elementary data to a full-scale structure. The demonstration of the efficiency of this method is described in a dedicated article [8]. The verification is completed by classical full-scale testing activities. Indications about proof testing, cases of use, and implementation are given, and specific inspection and protection measures are described. These additional activities are necessary to ensure the required reliability. The aim of the guideline is to describe how to reach the same reliability level as for structures made of more classical materials (metals, composites).
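The Weibull transfer from elementary (coupon) data to a full-scale structure can be sketched with the two-parameter weakest-link model: failure probability scales with the stressed volume, so larger parts must be designed to lower stresses for the same reliability. The modulus, characteristic strength, and volume ratio below are illustrative assumptions, not values from the guideline:

```python
import math

def weibull_pf(sigma, m, sigma0, v_ratio=1.0):
    # weakest-link failure probability with volume scaling V/V0 = v_ratio:
    # Pf = 1 - exp(-(V/V0) * (sigma/sigma0)^m)
    return 1.0 - math.exp(-v_ratio * (sigma / sigma0) ** m)

def allowable_stress(pf_target, m, sigma0, v_ratio):
    # stress whose failure probability equals pf_target for volume V = v_ratio*V0
    return sigma0 * (-math.log(1.0 - pf_target) / v_ratio) ** (1.0 / m)

# illustrative coupon parameters: Weibull modulus 10, characteristic strength 300 MPa;
# full-scale part with 50x the coupon's stressed volume
sigma_allow = allowable_stress(1e-4, 10, 300.0, 50.0)
```

The steep dependence on the modulus m is why the guideline insists on a consistent elementary-test database before any transfer to structure level.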
ERIC Educational Resources Information Center
Allan, Alexandra; Tinkler, Penny
2015-01-01
A small number of attempts have been made to take stock of the field of gender and education, though very few have taken methodology as their explicit focus. We seek to stimulate such discussion in this article by taking stock of the use of visual methods in gender and education research (particularly participatory and image-based methods). We…
Objectivity in psychosocial measurement: what, why, how.
Fisher, W P
2000-01-01
This article raises and tries to answer questions concerning what objectivity in psychosocial measurement is, why it is important, and how it can be achieved. Following in the tradition of the Socratic art of maieutics, objectivity is characterized by the separation of meaning from the geometric, metaphoric, or numeric figure carrying it, allowing an ideal and abstract entity to take on a life of its own. Examples of objective entities start from anything teachable and learnable, but for the purposes of measurement, the meter, gram, volt, and liter are paradigmatic because of their generalizability across observers, instruments, laboratories, samples, applications, etc. Objectivity is important because it is only through it that distinct conceptual entities are meaningfully distinguished. Seen from another angle, objectivity is important because it defines the conditions of the possibility of shared meaning and community. Full objectivity in psychosocial measurement can be achieved only by attending to both its methodological and its social aspects. The methodological aspect has recently achieved some notice in psychosocial measurement, especially in the form of Rasch's probabilistic conjoint models. Objectivity's social aspect has only recently been noticed by historians of science, and has not yet been systematically incorporated in any psychosocial science. An approach to achieving full objectivity in psychosocial measurement is adapted from the ASTM Standard Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method (ASTM Committee E-11 on Statistical Methods, 1992).
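The Rasch models mentioned above carry the "specific objectivity" property that motivates this view of measurement: the comparison of two persons does not depend on which items are used, just as a length comparison does not depend on which ruler is used. A minimal sketch with the dichotomous Rasch model (the abilities and difficulties are illustrative):

```python
import math

def rasch_p(theta, b):
    # dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)),
    # depending only on the difference between ability theta and difficulty b
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_odds(p):
    return math.log(p / (1.0 - p))

# specific objectivity: comparing two persons yields the same log-odds
# difference (theta1 - theta2) regardless of the item difficulty b
diffs = [log_odds(rasch_p(1.2, b)) - log_odds(rasch_p(0.4, b))
         for b in (-1.0, 0.0, 2.5)]
```

Every entry of `diffs` equals 1.2 - 0.4 = 0.8, which is the instrument-free separability the article treats as the methodological face of objectivity.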
NASA Astrophysics Data System (ADS)
Dubot, Pierre; Boisseau, Nicolas; Cenedese, Pierre
2018-05-01
Large biomolecule interaction with oxide surface has attracted a lot of attention because it drives behavior of implanted devices in the living body. To investigate the role of TiO2 surface structure on a large polypeptide (insulin) adsorption, we use a homemade mixed Molecular Dynamics-Full large scale Quantum Mechanics code. A specific re-parameterized (Ti) and globally convergent NDDO method fitted on high level ab initio method (coupled cluster CCSD(T) and DFT) allows us to safely describe the electronic structure of the whole insulin-TiO2 surface system (up to 4000 atoms). Looking specifically at carboxylate residues, we demonstrate in this work that specific interfacial bonds are obtained from the insulin/TiO2 system that are not observed in the case of smaller peptides (tripeptides, insulin segment chains with different configurations). We also demonstrate that a large part of the adsorption energy is compensated by insulin conformational energy changes and surface defects enhanced this trend. Large slab dimensions allow us to take into account surface defects that are actually beyond ab initio capabilities owing to size effect. These results highlight the influence of the surface structure on the conformation and therefore of the possible inactivity of an adsorbed polypeptides.
Hierarchical screening for multiple mental disorders.
Batterham, Philip J; Calear, Alison L; Sunderland, Matthew; Carragher, Natacha; Christensen, Helen; Mackinnon, Andrew J
2013-10-01
There is a need for brief, accurate screening when assessing multiple mental disorders. Two-stage hierarchical screening, consisting of brief pre-screening followed by a battery of disorder-specific scales for those who meet diagnostic criteria, may increase the efficiency of screening without sacrificing precision. This study tested whether more efficient screening could be achieved using two-stage hierarchical screening than by administering multiple separate tests. Two Australian adult samples (N=1990) with high rates of psychopathology were recruited using Facebook advertising to examine four methods of hierarchical screening for four mental disorders: major depressive disorder, generalised anxiety disorder, panic disorder and social phobia. Using K6 scores to determine whether full screening was required did not increase screening efficiency. However, pre-screening based on two decision tree approaches or item gating led to considerable reductions in the mean number of items presented per disorder screened, with estimated item reductions of up to 54%. The sensitivity of these hierarchical methods approached 100% relative to the full screening battery. Further testing of the hierarchical screening approach based on clinical criteria and in other samples is warranted. The results demonstrate that a two-phase hierarchical approach to screening multiple mental disorders leads to considerable efficiency gains without reducing accuracy. Screening programs should take advantage of pre-screeners based on gating items or decision trees to reduce the burden on respondents. © 2013 Elsevier B.V. All rights reserved.
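The item-gating idea can be sketched as follows; the gate length, cutoff, scale length, and cohort below are illustrative assumptions, not the study's instruments or data:

```python
def mean_items_per_person(responses, gate_items=2, full_items=9, cutoff=1):
    # two-stage gating: everyone answers the short gate; only those scoring at
    # or above the cutoff receive the remaining items of the full scale
    total = 0
    for r in responses:                    # r: list of 0/1 gate answers
        total += gate_items
        if sum(r) >= cutoff:
            total += full_items - gate_items
    return total / len(responses)

# illustrative cohort: 3 of 10 respondents screen positive on a 2-item gate
cohort = [[1, 1]] * 3 + [[0, 0]] * 7
mean_items = mean_items_per_person(cohort)   # vs. 9 if all got the full scale
```

The efficiency gain scales with the negative-screen rate, which is why gating pays off most when most respondents do not meet criteria for a given disorder.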
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
2013-01-01
The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling. PMID:24634618
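The boost potential underlying aMD (in the Hamelberg-McCammon form, which I believe is the one implemented here, though that is an assumption) raises basins lying below a threshold energy E while leaving regions above E untouched, flattening the landscape so barrier crossings occur more often. The threshold and alpha values below are illustrative:

```python
def amd_boost(v, e, alpha):
    # aMD boost dV = (E - V)^2 / (alpha + E - V), applied only where V < E:
    # deep minima are raised strongly, points near E are barely changed
    if v >= e:
        return 0.0
    return (e - v) ** 2 / (alpha + (e - v))

# toy 1D landscape (arbitrary units): minimum at V=-10, barrier top at V=-1
boost_min = amd_boost(-10.0, 0.0, 4.0)
boost_bar = amd_boost(-1.0, 0.0, 4.0)
# effective barrier on the boosted surface V + dV is much smaller than the
# original 9 units, which is the source of the enhanced sampling
barrier = (-1.0 + boost_bar) - (-10.0 + boost_min)
```

In production aMD, statistics on the true surface are recovered by reweighting each frame with exp(dV / kT).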
Yang, Tao; Wu, Xuewen; Peng, Xiaofei; Zhang, Yanni; Xie, Shaobing; Sun, Hong
2016-11-01
Tympanoplasty using cartilage grafts has a better graft take rate than tympanoplasty using temporalis fascia grafts. There are no significant differences between cartilage grafts and temporalis fascia grafts for hearing outcomes overall. In contrast to the sliced cartilage sub-group, full-thickness cartilage grafts generate better hearing outcomes than temporalis fascia grafts. Tympanic membrane perforation can cause relapsing middle ear infection and lead to hearing damage. Various techniques have been applied to reconstruct the tympanic membrane. Recently, cartilage grafts and temporalis fascia grafts have been widely used for tympanic membrane closure. A systematic review and meta-analysis was carried out based on published retrospective trials that investigated the efficacy of cartilage grafts and temporalis fascia grafts in type 1 tympanoplasty. Both graft take rates and mean AIR-BONE-GAP gains were analyzed. The Cochrane Library, PubMed, and Embase were systematically searched. After a scientific investigation, we extracted the relevant data following our selection criteria. The odds ratio (OR) of graft take rates and the mean difference (MD) of AIR-BONE-GAP gains were calculated within 95% confidence intervals. Eight eligible articles with 915 patients were reviewed. The pooled OR for graft take rate was 3.11 (95% CI = 1.94-5.00; p = 0.43) and the difference between the two groups was significant, meaning that the cartilage grafts group achieved a better graft take rate than the temporalis fascia grafts group. The pooled MD for mean AIR-BONE-GAP gain was 1.92 (95% CI = -0.12-3.95; p < 0.00001) and the difference was not significant. However, in the full-thickness cartilage grafts sub-group, the pooled MD for mean AIR-BONE-GAP gains was 2.56 (95% CI = 1.02-4.10; p = 0.14) and the difference was significant, meaning that the full-thickness cartilage grafts sub-group achieved a better hearing outcome than the temporalis fascia grafts group. On the contrary, the pooled MD of the sliced cartilage grafts sub-group was 0.12 (95% CI = -0.44-0.69; p = 0.61), with no significant difference between the sliced cartilage grafts and the temporalis fascia group.
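Inverse-variance (fixed-effect) pooling of odds ratios, the standard machinery behind pooled ORs like the 3.11 reported above, can be sketched as follows; the 2x2 counts are hypothetical, not data from the reviewed trials:

```python
import math

def pooled_or_fixed(studies):
    # inverse-variance (Woolf) fixed-effect pooling of 2x2 tables (a, b, c, d):
    # a/b = events/non-events in one arm, c/d in the other
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))
        var = 1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d   # variance of log OR
        num += log_or / var                           # weight = 1 / var
        den += 1.0 / var
    mean = num / den
    se = math.sqrt(1.0 / den)
    return math.exp(mean), (math.exp(mean - 1.96 * se),
                            math.exp(mean + 1.96 * se))

# hypothetical take/fail counts per graft type for two studies
est, (lo, hi) = pooled_or_fixed([(40, 10, 30, 20), (35, 15, 25, 25)])
```

Significance is read off the 95% CI: the pooled effect is significant when the interval excludes OR = 1, as it does both here and for the review's take-rate result.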
50 CFR 218.31 - Permissible methods of taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
...); (x) Melon-headed whales (Peponocephala electra)—100 (an average of 20 annually); (xi) False killer... annually); (xiv) Pygmy killer whale (Ferresa attenuatta)—50 (an average of 10 annually); (xv) Rough-toothed... method of take and the indicated number of times: (1) Level B Harassment: (i) Sperm whale (Physeter...
50 CFR 218.31 - Permissible methods of taking.
Code of Federal Regulations, 2011 CFR
2011-10-01
...); (x) Melon-headed whales (Peponocephala electra)—100 (an average of 20 annually); (xi) False killer... annually); (xiv) Pygmy killer whale (Ferresa attenuatta)—50 (an average of 10 annually); (xv) Rough-toothed... method of take and the indicated number of times: (1) Level B Harassment: (i) Sperm whale (Physeter...
50 CFR 217.142 - Permissible methods of taking.
Code of Federal Regulations, 2014 CFR
2014-10-01
... method and amount of take: (1) Level B Harassment: (i) Cetaceans: (A) Bowhead whale (Balaena mysticetus)—75 (an average of 15 annually) (B) Gray whale (Eschrichtius robustus)—10 (an average of 2 annually) (C) Beluga whale (Delphinapterus leucas)—100 (an average of 20 annually) (ii) Pinnipeds: (A) Ringed...
Skin graft fixation in severe burns: use of topical negative pressure.
Kamolz, L P; Lumenta, D B; Parvizi, D; Wiedner, M; Justich, I; Keck, M; Pfurtscheller, K; Schintler, M
2014-09-30
Over the last 50 years, the evolution of burn care has led to a significant decrease in mortality. The biggest impact on survival has been the change in the approach to burn surgery. Early excision and grafting has become a standard of care for the majority of patients with deep burns; the survival of a given patient suffering from major burns is invariably linked to the take rate and survival of skin grafts. The application of topical negative pressure (TNP) therapy devices has demonstrated improved graft take in comparison to conventional dressing methods alone. The aim of this study was to analyze the impact of TNP therapy on skin graft fixation in large burns. In all patients, we applied TNP dressings covering a %TBSA of >25. The following parameters were recorded and documented using BurnCase 3D: age, gender, %TBSA, burn depth, hospital length-of-stay, Baux score, survival, as well as duration and incidence of TNP dressings. After a burn depth adapted wound debridement, coverage was simultaneously performed using split-thickness skin grafts, which were fixed with staples and covered with fatty gauzes and TNP foam. The TNP foam was again fixed with staples to prevent displacement and finally covered with the supplied transparent adhesive film. A continuous subatmospheric pressure between 75-120 mm Hg was applied (VAC®, KCI, Vienna, Austria). The first dressing change was performed on day 4. Thirty-six out of 37 patients, suffering from full thickness burns, were discharged with complete wound closure; only one patient succumbed to their injuries. The overall skin graft take rate was over 95%. In conclusion, we consider that split thickness skin graft fixation by TNP is an efficient method in major burns, notably in areas with irregular wound surfaces or subject to movement (e.g. joint proximity), and is worth considering for the treatment of aged patients.
Longer term consequences of the Short Take-Off and Landing (STOL) aircraft system
NASA Technical Reports Server (NTRS)
Laporte, T. R.
1972-01-01
An assessment of the STOL aircraft and the various means of employing it are discussed in the light of a research study to evaluate the efficacy of such analyses. It was determined that current approaches to assessment are generally inadequate for investigating the full social consequences of implementing a new technology. It is stated that a meaningful methodology of technology assessment must reflect mechanisms underlying the relationship of technology to social change. Interrelated methods which are discussed are: (1) gaming and simulation as heuristic approaches in analysis and inquiry, (2) long range planning and questions of the future, (3) planning theory as a background for critical analysis of policy planning, and (4) social theory, with particular emphasis on social change and systems theories.
Robotic inspection of fiber reinforced composites using phased array UT
NASA Astrophysics Data System (ADS)
Stetson, Jeffrey T.; De Odorico, Walter
2014-02-01
Ultrasound is the current NDE method of choice to inspect large fiber reinforced airframe structures. Over the last 15 years Cartesian based scanning machines using conventional ultrasound techniques have been employed by all airframe OEMs and their top tier suppliers to perform these inspections. Technical advances in both computing power and commercially available, multi-axis robots now facilitate a new generation of scanning machines. These machines use multiple end effector tools taking full advantage of phased array ultrasound technologies yielding substantial improvements in inspection quality and productivity. This paper outlines the general architecture for these new robotic scanning systems as well as details the variety of ultrasonic techniques available for use with them including advances such as wide area phased array scanning and sound field adaptation for non-flat, non-parallel surfaces.
Synthesis and characterization of hydrogen-bond acidic functionalized graphene
NASA Astrophysics Data System (ADS)
Yang, Liu; Han, Qiang; Pan, Yong; Cao, Shuya; Ding, Mingyu
2014-05-01
Hexafluoroisopropanol phenyl group functionalized materials have great potential for application as gas-sensitive materials for nerve agent detection, due to the formation of strong hydrogen-bonding interactions between the group and the analytes. In this paper, taking full advantage of the ultra-large specific surface area and the abundance of carbon-carbon double bonds, hexafluoroisopropanol phenyl functionalized graphene was synthesized through an in situ diazonium reaction between -C=C- and p-hexafluoroisopropanol aniline. The identity of the as-synthesized material was confirmed by transmission electron microscopy, Raman spectroscopy, ultraviolet-visible spectroscopy, X-ray photoelectron spectroscopy and thermogravimetric analysis. The synthesis method is simple and retains the excellent physical properties of the original graphene. In addition, the novel material can be regarded as a potential candidate for gas-sensitive materials for organophosphorus nerve agent detection.
Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.
Cvitaš, Marko T
2018-03-13
The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, the tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface required by exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [J. Chem. Theory Comput. 2016, 12, 787]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls with further reductions in the computational effort, exploiting the smoothness of the potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with the computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.
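The key efficiency idea, reusing curvature information built from gradient differences so that many cheap steps can be taken between expensive potential/gradient calls, can be sketched with a BFGS-style update on a toy quadratic bowl. This is a generic sketch of the quadratic-model idea, not the authors' string algorithm:

```python
import numpy as np

def bfgs_minimize(grad, x0, iters=100, step=0.2):
    # keep an approximate inverse Hessian H, move along -H g, and refresh H
    # from gradient differences (BFGS update); curvature learned from earlier
    # gradient calls is reused on subsequent steps, as in quadratic-model
    # path optimizers
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))
    g = grad(x)
    for _ in range(iters):
        x_new = x + step * (-H @ g)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                      # safeguard: curvature condition
            rho = 1.0 / sy
            I = np.eye(len(x))
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-10:
            break
    return x

# anisotropic quadratic bowl f = 2(x-1)^2 + 0.25(y+2)^2, minimum at (1, -2)
g = lambda p: np.array([4.0 * (p[0] - 1.0), 0.5 * (p[1] + 2.0)])
x_min = bfgs_minimize(g, [5.0, 5.0])
```

In the paper's setting the "point" is an entire discretized path and the objective is the Euclidean action, but the payoff is the same: fewer calls to the expensive potential for a given accuracy.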
Xiong, Jianyin; Yao, Yuan; Zhang, Yinping
2011-04-15
The initial emittable concentration (C(m,0)), the diffusion coefficient (D(m)), and the material/air partition coefficient (K) are the three characteristic parameters influencing emissions of formaldehyde and volatile organic compounds (VOCs) from building materials or furniture. It is necessary to determine these parameters to understand emission characteristics and how to control them. In this paper we develop a new method, the C-history method for a closed chamber, to measure these three parameters. Compared to the available methods of determining the three parameters described in the literature, our approach has the following salient features: (1) the three parameters can be simultaneously obtained; (2) it is time-saving, generally taking less than 3 days for the cases studied (the available methods tend to need 7-28 days); (3) the maximum relative standard deviations of the measured C(m,0), D(m) and K are 8.5%, 7.7%, and 9.8%, respectively, which are acceptable for engineering applications. The new method was validated by using the characteristic parameters determined in the closed-chamber experiment to predict the observed emissions in a ventilated full-scale chamber experiment, showing that the approach is reliable. Our new C-history method should prove useful for rapidly determining the parameters required to predict formaldehyde and VOC emissions from building materials as well as for furniture labeling.
Teleseismic tomography for imaging Earth's upper mantle
NASA Astrophysics Data System (ADS)
Aktas, Kadircan
Teleseismic tomography is an important imaging tool in earthquake seismology, used to characterize lithospheric structure beneath a region of interest. In this study I investigate three different tomographic techniques applied to real and synthetic teleseismic data, with the aim of imaging the velocity structure of the upper mantle. First, by applying well established traveltime tomographic techniques to teleseismic data from southern Ontario, I obtained high-resolution images of the upper mantle beneath the lower Great Lakes. Two salient features of the 3D models are: (1) a patchy, NNW-trending low-velocity region, and (2) a linear, NE-striking high-velocity anomaly. I interpret the high-velocity anomaly as a possible relict slab associated with ca. 1.25 Ga subduction, whereas the low-velocity anomaly is interpreted as a zone of alteration and metasomatism associated with the ascent of magmas that produced the Late Cretaceous Monteregian plutons. The next part of the thesis is concerned with adaptation of existing full-waveform tomographic techniques for application to teleseismic body-wave observations. The method used here is intended to be complementary to traveltime tomography, and to take advantage of efficient frequency-domain methodologies that have been developed for inverting large controlled-source datasets. Existing full-waveform acoustic modelling and inversion codes have been modified to handle plane waves impinging from the base of the lithospheric model at a known incidence angle. A processing protocol has been developed to prepare teleseismic observations for the inversion algorithm. To assess the validity of the acoustic approximation, the processing procedure and modelling-inversion algorithm were tested using synthetic seismograms computed using an elastic Kirchhoff integral method. 
These tests were performed to evaluate the ability of the frequency-domain full-waveform inversion algorithm to recover topographic variations of the Moho under a variety of realistic scenarios. Results show that frequency-domain full-waveform tomography is generally successful in recovering both sharp and discontinuous features. Thirdly, I developed a new method for creating an initial background velocity model for the inversion algorithm, which is sufficiently close to the true model so that convergence is likely to be achieved. I adapted a method named Deformable Layer Tomography (DLT), which adjusts interfaces between layers rather than velocities within cells. I applied this method to a simple model comprising a single uniform crustal layer and a constant-velocity mantle, separated by an irregular Moho interface. A series of tests was performed to evaluate the sensitivity of the DLT algorithm; the results show that my algorithm produces useful results within a realistic range of incident-wave obliquity, incidence angle and signal-to-noise level. Keywords. Teleseismic tomography, full waveform tomography, deformable layer tomography, lower Great Lakes, crust and upper mantle.
ERIC Educational Resources Information Center
Environ Planning Design, 1970
1970-01-01
Floor plans and photographs illustrate a description of the Samuel C. Williams Library at Stevens Institute of Technology, Hoboken, N.J. The unusual interior design allows students to take full advantage of the library's resources. (JW)
On Postgleadowian Thermochronology (Invited)
NASA Astrophysics Data System (ADS)
Harrison, M.
2013-12-01
Given that Andrew Gleadow was one of the earliest pioneers of thermochronology, his retirement is a testament to the maturity of our field. When Andy submitted his Ph.D. thesis in 1974, it would still be a year before Dodson (1973) received its first citation and seven until the word thermochronology appeared in print. The steady growth of the thermochronological literature through the 1980s was in good measure due to Andy having put the fission track method on a sound footing so it's entirely fitting that he should cap his career by realizing his early vision of fully-automated dating. However, by some measures, the field of thermochronology has stagnated over the past two decades. Did we reach steady state in ca. 1990 and is Andy's retirement a harbinger of an inevitable decline or are advances, such as automated fission track dating, spurring a renaissance? The answer to both questions may be yes. That part of our field that has largely overlooked the need for kinetic calibrations but instead relied on 'nominal closure temperature' conventions is becoming increasingly irrelevant, as witnessed by the pages of our leading journals. On the brighter side, thermochronometers for which customized Arrhenius relationships come as a by-product of the dating process (e.g., U+Th/He systems, 40Ar/39Ar MDD analysis) are increasingly being used to constrain multivariate thermomechanical models that can lead to unprecedented insights into otherwise unknowable parameters, such as paleotopography, fault slip-rate and ramp geometry, and crustal heat generation. However, inverse modelers have not yet developed the capacity to take advantage of the full spectrum of thermochronological data available from methods that reveal continuous thermal history information, largely due to computational limitations. 
To realize the full promise of thermochronology, the future Andy Gleadows of our field will need to include those who pursue the full integration of methods for which internal tests of closure assumptions are possible, and link those high-resolution thermal histories with a generation of increasingly capable inverse models.
The Impact of Novice Counselors' Note-Taking Behavior on Recall and Judgment
ERIC Educational Resources Information Center
Lo, Chu-Ling; Wadsworth, John
2014-01-01
Purpose: To examine the effect of note-taking on novice counselors' recall and judgment of interview information in four situations: no notes, taking notes, taking notes and reviewing these notes, and reviewing notes taken by others. Method: The sample included 13 counselors-in-training recruited from a master's level training program in…
ERIC Educational Resources Information Center
Belisle, Jordan; Dixon, Mark R.; Stanley, Caleb R.; Munoz, Bridget; Daar, Jacob H.
2016-01-01
We taught basic perspective-taking tasks to 3 children with autism and evaluated their ability to derive mutually entailed single-reversal deictic relations of those newly established perspective-taking skills. Furthermore, we examined the possibility of transfers of perspective-taking function to novel untrained stimuli. The methods were taken…
NASA Astrophysics Data System (ADS)
Iwaki, A.; Fujiwara, H.
2012-12-01
Broadband ground motion computations of scenario earthquakes are often based on hybrid methods that combine a deterministic approach in the lower frequency band with a stochastic approach in the higher frequency band. Typical computation methods for low-frequency and high-frequency (LF and HF, respectively) ground motions are, respectively, numerical simulations, such as finite-difference and finite-element methods based on a three-dimensional velocity structure model, and the stochastic Green's function method. In such hybrid methods, LF and HF wave fields are generated through two different methods that are completely independent of each other, and are combined at the matching frequency. However, LF and HF wave fields are essentially not independent as long as they are from the same event. In this study, we focus on the relation among acceleration envelopes at different frequency bands, and attempt to synthesize HF ground motion using the information extracted from LF ground motion, aiming to propose a new method for broadband strong motion prediction. Our study area is the Kanto area, Japan. We use the K-NET and KiK-net surface acceleration data and compute the RMS envelope in five frequency bands: 0.5-1.0 Hz, 1.0-2.0 Hz, 2.0-4.0 Hz, 4.0-8.0 Hz, and 8.0-16.0 Hz. Taking the ratio of the envelopes of adjacent bands, we find that the envelope ratios have stable shapes at each site. The empirical envelope-ratio characteristics are combined with the low-frequency envelope of the target earthquake to synthesize HF ground motion. We have applied the method to M5-class earthquakes and a M7 target earthquake that occurred in the vicinity of the Kanto area, and successfully reproduced the observed HF ground motion of the target earthquake.
The method can be applied to broadband ground motion simulation for a scenario earthquake by combining numerically computed low-frequency (~1 Hz) ground motion with the empirical envelope-ratio characteristics to generate broadband ground motion. The strengths of the proposed method are that: 1) it is based on observed ground motion characteristics, 2) it takes full advantage of a precise velocity structure model, and 3) it is simple and easy to apply.
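The core of the approach (band-limited RMS envelopes and adjacent-band ratios) can be sketched as follows; the crude FFT band-pass, the window length, and the white-noise stand-in for an accelerogram are illustrative choices, not the authors' implementation:

```python
import numpy as np

def band_envelope(x, fs, f_lo, f_hi, win=100):
    """RMS envelope of x restricted to the band [f_lo, f_hi) Hz."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < f_lo) | (freqs >= f_hi)] = 0.0   # crude FFT band-pass
    band = np.fft.irfft(spec, len(x))
    kernel = np.ones(win) / win                    # moving-average window
    return np.sqrt(np.convolve(band ** 2, kernel, mode="same"))

fs = 100.0
rng = np.random.default_rng(1)
accel = rng.standard_normal(4000)                  # stand-in accelerogram

e_lf = band_envelope(accel, fs, 1.0, 2.0)          # low-frequency envelope
e_hf = band_envelope(accel, fs, 2.0, 4.0)          # adjacent higher band
ratio = e_hf / np.maximum(e_lf, 1e-12)             # site-specific envelope ratio

# For a target event, the HF envelope is synthesized from the LF envelope
# and the stored empirical ratio of the site:
e_hf_pred = e_lf * ratio
```

In the actual method the ratio would be averaged over many recorded events at a site and then applied to the numerically computed LF envelope of the scenario earthquake.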
Adler, Ubiratan C.; Krüger, Stephanie; Teut, Michael; Lüdtke, Rainer; Schützler, Lena; Martins, Friederike; Willich, Stefan N.; Linde, Klaus; Witt, Claudia M.
2013-01-01
Background The specific clinical benefit of the homeopathic consultation and of homeopathic remedies in patients with depression has not yet been investigated. Aims To investigate the 1) specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression (moderate episode) after six weeks. Methods A randomized, partially double-blind, placebo-controlled, four-armed trial using a 2×2 factorial design with a six-week study duration per patient was performed. Results A total of 44 from 228 planned patients were randomized (2:1:2:1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment, and was underpowered for the preplanned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance in the sample. The mean difference for the Hamilton-D after 6 weeks was 2.0 (95%CI −1.2;5.2) for Q-potencies vs. placebo and −3.1 (−5.9;−0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant results across all outcomes between homeopathic Q-potencies versus placebo and homeopathic versus conventional case taking were observed. The frequency of adverse events was comparable for all groups. Conclusions Although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting. Prof. Dr. Claudia Witt had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Trial registration clinicaltrials.gov identifier NCT01178255. Protocol publication: http://www.trialsjournal.com/content/12/1/43 PMID:24086352
2012-01-01
Background Return to work after gynaecological surgery takes much longer than expected, irrespective of the level of invasiveness. In order to empower patients in recovery and return to work, a multidisciplinary care program consisting of an e-health intervention and integrated care management including participatory workplace intervention was developed. Methods/Design We designed a randomized controlled trial to assess the effect of the multidisciplinary care program on full sustainable return to work in patients after gynaecological surgery, compared to usual clinical care. Two hundred twelve women (18-65 years old) undergoing hysterectomy and/or laparoscopic adnexal surgery on benign indication in one of the 7 participating (university) hospitals in the Netherlands are expected to take part in this study at baseline. The primary outcome measure is sick leave duration until full sustainable return to work and is measured by a monthly calendar of sickness absence during 26 weeks after surgery. Secondary outcome measures are the effect of the care program on general recovery, quality of life, pain intensity and complications, and are assessed using questionnaires at baseline, 2, 6, 12 and 26 weeks after surgery. Discussion The discrepancy between expected physical recovery and actual return to work after gynaecological surgery contributes to the relevance of this study. There is strong evidence that long periods of sick leave can result in work disability, poorer general health and increased risk of mental health problems. We expect that this multidisciplinary care program will improve peri-operative care, contribute to a faster return to work of patients after gynaecological surgery and, as a consequence, will reduce societal costs considerably. Trial registration Netherlands Trial Register (NTR): NTR2087 PMID:22296950
50 CFR 217.222 - Permissible methods of taking.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Section 217.222 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKE OF MARINE MAMMALS INCIDENTAL TO SPECIFIED ACTIVITIES Taking of Marine Mammals Incidental to the Elliott Bay Seawall Project § 217...
Biological Response to the Dynamic Spectral-Polarized Underwater Light Field
2013-09-30
optical suite including underwater video-polarimetry (full Stokes vector video-imaging camera custom-built Cummings; and "SALSA" (Bossa...operations, we couple polarimetry measurements of live, free-swimming animals in their environments with a full suite of optical measurements...Ahmed). We also restrain live, awake animals to take polarimetry measurements (in the field and laboratory) under a complete set of viewing angles and
Optimum take-off angle in the long jump.
Linthorne, Nicholas P; Guzman, Maurice S; Bridgett, Lisa A
2005-07-01
In this study, we found that the optimum take-off angle for a long jumper may be predicted by combining the equation for the range of a projectile in free flight with the measured relations between take-off speed, take-off height and take-off angle for the athlete. The prediction method was evaluated using video measurements of three experienced male long jumpers who performed maximum-effort jumps over a wide range of take-off angles. To produce low take-off angles the athletes used a long and fast run-up, whereas higher take-off angles were produced using a progressively shorter and slower run-up. For all three athletes, the take-off speed decreased and the take-off height increased as the athlete jumped with a higher take-off angle. The calculated optimum take-off angles were in good agreement with the athletes' competition take-off angles.
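The prediction idea can be sketched numerically: combine the range equation for a projectile launched from above the landing level with assumed athlete-specific trends of decreasing take-off speed and increasing take-off height versus take-off angle. The linear relations below are hypothetical placeholders, not the paper's measured fits:

```python
import numpy as np

def flight_distance(v, theta, h, g=9.81):
    """Range of a projectile launched at speed v (m/s), angle theta (rad),
    from height h (m) above the landing level."""
    vs, vc = v * np.sin(theta), v * np.cos(theta)
    return (vc / g) * (vs + np.sqrt(vs ** 2 + 2.0 * g * h))

deg = np.linspace(10.0, 45.0, 351)
theta = np.radians(deg)
v = 9.5 - 0.09 * deg      # take-off speed falls with angle (assumed fit)
h = 0.50 + 0.005 * deg    # take-off height rises with angle (assumed fit)

dist = flight_distance(v, theta, h)
opt_angle = deg[np.argmax(dist)]   # well below the 45° vacuum optimum
```

Because speed drops as the athlete jumps more steeply, the optimum lands far below 45°, which is the qualitative result the study confirms against competition jumps.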
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-01
... by LGL Ltd., Environmental Research Associates (LGL), on behalf of NSF and L-DEO. The NMFS Biological... must set forth the permissible methods of taking, other means of effecting the least practicable... scientific information and estimation methodology. The alternative method of conducting site-specific...
Ownership, Risk-Taking, and Collaboration in an Elementary Language Arts Classroom.
ERIC Educational Resources Information Center
Sturdivant, Cynthia
1992-01-01
A teacher of fourth-, fifth-, and sixth-grade students with deafness in a residential school shares methods and activities found to be effective. The methods stress the importance of expectations for learners and ways that the design of the learning environment can encourage student ownership, risk taking, and responsibility. (Author/DB)
Method and System for Producing Full Motion Media to Display on a Spherical Surface
NASA Technical Reports Server (NTRS)
Starobin, Michael A. (Inventor)
2015-01-01
A method and system for producing full motion media for display on a spherical surface is described. The method may include selecting a subject of full motion media for display on a spherical surface. The method may then include capturing the selected subject as full motion media (e.g., full motion video) in a rectilinear domain. The method may then include processing the full motion media in the rectilinear domain for display on a spherical surface, such as by orienting the full motion media, adding rotation to the full motion media, processing edges of the full motion media, and/or distorting the full motion media in the rectilinear domain for instance. After processing the full motion media, the method may additionally include providing the processed full motion media to a spherical projection system, such as a Science on a Sphere system.
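One way to picture the rectilinear-domain processing is the warp from a pinhole (rectilinear) frame into an equirectangular longitude/latitude layout of the kind commonly fed to spherical projection systems. This nearest-neighbour sketch is only illustrative and is not the patented method:

```python
import numpy as np

def rectilinear_to_equirect(img, h_out=91, w_out=181, fov_deg=90.0):
    """Resample a rectilinear (pinhole) image onto an equirectangular
    longitude/latitude grid; nearest-neighbour, black outside the FOV."""
    h, w = img.shape[:2]
    f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)   # focal length, pixels
    lon, lat = np.meshgrid(np.linspace(-np.pi, np.pi, w_out),
                           np.linspace(-np.pi / 2, np.pi / 2, h_out))
    # Unit view direction for each output pixel (camera looks along +z).
    dx = np.cos(lat) * np.sin(lon)
    dy = np.sin(lat)
    dz = np.cos(lat) * np.cos(lon)
    valid = dz > 1e-6                                   # in front of the camera
    safe_dz = np.where(valid, dz, 1.0)
    u = w / 2.0 + f * dx / safe_dz                      # pinhole projection
    v = h / 2.0 + f * dy / safe_dz
    inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros((h_out, w_out) + img.shape[2:], dtype=img.dtype)
    out[inside] = img[v[inside].astype(int), u[inside].astype(int)]
    return out
```

The image centre (lon = lat = 0) maps straight ahead, so the middle of the output reproduces the middle of the input; orienting, rotating, and edge-processing steps would act on grids like this one.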
Code of Federal Regulations, 2014 CFR
2014-01-01
... criminal fine or civil penalty in full or agrees to terms satisfactory to NOAA for payment: (a) The suspension will not take effect; (b) Any permit suspended under § 904.310 will be reinstated by order of NOAA...
Code of Federal Regulations, 2013 CFR
2013-01-01
... criminal fine or civil penalty in full or agrees to terms satisfactory to NOAA for payment: (a) The suspension will not take effect; (b) Any permit suspended under § 904.310 will be reinstated by order of NOAA...
Code of Federal Regulations, 2012 CFR
2012-01-01
... criminal fine or civil penalty in full or agrees to terms satisfactory to NOAA for payment: (a) The suspension will not take effect; (b) Any permit suspended under § 904.310 will be reinstated by order of NOAA...
Treating Asthma in Children Ages 5 to 11
... a dry powder inhaler. This device requires a deep, rapid inhalation to get the full dose of ... critical part of managing your child's asthma is learning exactly what steps to take on a daily, ...
Lomustine comes as a capsule to take by mouth. It is usually taken once every 6 weeks on an empty stomach. Your full dose may ... two or more different types and colors of capsules. You will receive only enough capsules for one ...
Fenoprofen comes as a capsule and a tablet to take by mouth. It is usually taken with a full glass of water three or four ... or any of the inactive ingredients in fenoprofen capsules or tablets. Ask your pharmacist for a list ...
Some Ethical-Moral Concerns in Administration.
ERIC Educational Resources Information Center
Enns, Frederick
1981-01-01
Presents and analyzes moral-ethical issues that arise in administration and concludes that past descriptive, objective, and scientific approaches to administration have failed to take full account of the moral-ethical dimension of human existence. (Author/WD)
The ATLAS Tier-0: Overview and operational experience
NASA Astrophysics Data System (ADS)
Elsing, Markus; Goossens, Luc; Nairz, Armin; Negri, Guido
2010-04-01
Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for prompt processing of the raw data coming from the online DAQ system, archiving the raw and derived data on tape, registering the data with the relevant catalogues, and distributing them to the associated Tier-1 centres. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several "Full Dress Rehearsals" (FDRs) in the course of 2008. The transition from an expert-based to a shifter-based system was successfully established in July 2008. This article gives an overview of the Tier-0 system, its data and work flows, and its operations model. It reviews the operational experience gained in cosmic, commissioning, and FDR exercises during the past year, and gives an outlook on planned developments and the evolution of the system towards first collision data taking, now expected in late Autumn 2009.
Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A
2018-01-01
Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods, such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error ([Formula: see text]), Pearson's correlation ([Formula: see text]), and the area under the receiver operator characteristic curve. We found that spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
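A minimal numpy sketch of the hybrid idea is given below: re-anchor baseline-shifted segments first, then smooth the residual spikes with a Savitzky-Golay filter. The shift locations are assumed to be already identified (the paper detects them automatically), and the window and polynomial order are arbitrary illustrative choices:

```python
import numpy as np

def savgol(x, win=11, poly=3):
    """Savitzky-Golay smoothing via a local polynomial least-squares fit."""
    half = win // 2
    A = np.vander(np.arange(-half, half + 1), poly + 1, increasing=True)
    coeffs = np.linalg.pinv(A)[0]          # weights giving the fit at offset 0
    xp = np.pad(x, half, mode="edge")
    return np.convolve(xp, coeffs[::-1], mode="valid")

def hybrid_correct(signal, shift_points, win=11, poly=3):
    """Step 1: remove baseline shifts by re-anchoring each segment to the
    end of the previous one. Step 2: smooth the remaining sharp spikes."""
    segments = np.split(np.asarray(signal, dtype=float), shift_points)
    fixed = [segments[0]]
    for seg in segments[1:]:
        fixed.append(seg + (fixed[-1][-1] - seg[0]))
    return savgol(np.concatenate(fixed), win, poly)

# Toy signal: a motion spike at sample 30 and a baseline shift at sample 60.
raw = np.zeros(120)
raw[30] += 10.0          # sharp spike
raw[60:] += 5.0          # baseline shift
corrected = hybrid_correct(raw, [60])
```

The split responsibilities mirror the paper's point: the segment re-anchoring handles what smoothing alone cannot (the shift), while the smoothing handles what re-anchoring cannot (the spike).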
Color image definition evaluation method based on deep learning method
NASA Astrophysics Data System (ADS)
Liu, Di; Li, YingChun
2018-01-01
In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, a VGG16 net is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and their labels are then used to train the BP neural network, which finally performs the color image definition evaluation. The method is evaluated experimentally on images from the CSIQ database, blurred at different levels to yield 4,000 images in total. The 4,000 images are divided into three categories, each representing a blur level. Of each set of 400 high-dimensional feature vectors, 300 are used for training the VGG16/BP pipeline and the remaining 100 are held out for testing. The experimental results show that the method takes full advantage of the learning and representation capability of deep learning. In contrast to major existing image clarity evaluation methods, which rely on manually designed and extracted features, the proposed method extracts image features automatically and achieves excellent image quality classification accuracy on the test set: 96%. Moreover, the predicted quality levels of the original color images agree well with the perception of the human visual system.
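The pipeline's second stage, a BP (backpropagation) network classifying pre-extracted feature vectors into blur levels, can be sketched with a tiny numpy MLP. The synthetic 16-dimensional features below stand in for the 4,096-dimensional VGG16 outputs; the layer sizes, learning rate, and epoch count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_per_class = 16, 100
# Three well-separated synthetic "feature" clusters, one per blur level.
means = np.zeros((3, dim))
means[0, 0] = means[1, 1] = means[2, 2] = 4.0
X = np.vstack([rng.normal(m, 0.5, size=(n_per_class, dim)) for m in means])
y = np.repeat(np.arange(3), n_per_class)
perm = rng.permutation(len(X))
X, y = X[perm], y[perm]
Xtr, ytr, Xte, yte = X[:240], y[:240], X[240:], y[240:]   # train/test split

# One-hidden-layer MLP trained by backpropagation (full-batch gradient descent).
hid, lr = 32, 0.3
W1 = rng.normal(0, 0.1, (dim, hid)); b1 = np.zeros(hid)
W2 = rng.normal(0, 0.1, (hid, 3));   b2 = np.zeros(3)
Y = np.eye(3)[ytr]
for _ in range(500):
    H = np.tanh(Xtr @ W1 + b1)                    # hidden activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)             # softmax probabilities
    dZ = (P - Y) / len(Xtr)                       # cross-entropy gradient
    dW2, db2 = H.T @ dZ, dZ.sum(0)
    dH = (dZ @ W2.T) * (1.0 - H ** 2)             # tanh derivative
    dW1, db1 = Xtr.T @ dH, dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = ((np.tanh(Xte @ W1 + b1) @ W2 + b2).argmax(1) == yte).mean()
```

Replacing the synthetic clusters with real VGG16 feature vectors (and widening the input layer accordingly) gives the structure the paper describes.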
NASA Astrophysics Data System (ADS)
Eisenbeis, J.; Roy, C.; Bland, E. C.; Occhipinti, G.
2017-12-01
Most recent methods in ionospheric tomography are based on the inversion of the total electron content measured by ground-based GPS receivers. As a consequence of the high frequency of the GPS signal and the absence of horizontal raypaths, the electron density structure is mainly reconstructed in the F2 region (300 km), where the ionosphere reaches its maximum ionization, and is not sensitive to the lower ionospheric structure. We propose here a new tomographic method for the lower ionosphere (Roy et al., 2014), based on the full inversion of over-the-horizon (OTH) radar data and applicable to SuperDARN data. The major advantage of our methodology is that it takes into account, numerically and jointly, the effect that electron density perturbations induce not only on the speed of electromagnetic waves but also on the raypath geometry. This last point is extremely critical for OTH/SuperDARN data inversions, as the emitted signal propagates through the ionosphere between a fixed starting point (the radar) and an unknown end point on the Earth's surface where the signal is backscattered. We detail our ionospheric tomography method with the aid of benchmark tests in order to highlight the sensitivity of the radar to the explored observational parameters: frequencies, elevations, and azimuths. Having demonstrated the necessity of taking both effects into account simultaneously, we apply our method to real backscattered data from SuperDARN and OTH radar. The preliminary solution obtained with the Hokkaido East SuperDARN using only two frequencies (10 MHz and 11 MHz), shown here, is stable and encourages us to explore a more complete dataset, which we will present at AGU 2017. To our knowledge, this is the first time that an ionospheric tomography has been estimated from SuperDARN backscattered data. Reference: Roy, C., G. Occhipinti, L. Boschi, J.-P. Moliné, and M. Wieczorek (2014), Effect of ray and speed perturbations on ionospheric tomography by over-the-horizon radar: A new method, J. Geophys. Res. Space Physics, 119, doi:10.1002/2014JA020137.
50 CFR 216.213 - Permissible methods of taking.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ADMINISTRATION, DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS Taking of Marine Mammals Incidental to Explosive Severance Activities Conducted During Offshore Structure Removal Operations on the Outer Continental Shelf in the U.S. Gulf of Mexico § 216.213 Permissible...
An External Wire Frame Fixation Method of Skin Grafting for Burn Reconstruction.
Yoshino, Yukiko; Ueda, Hyakuzoh; Ono, Simpei; Ogawa, Rei
2017-06-28
The skin graft is a prevalent reconstructive method for burn injuries. We have been applying external wire frame fixation in combination with skin grafts since 1986 and have observed higher rates of successful graft take. The overall purpose of this method is to further secure skin graft adherence to wound beds in hard-to-stabilize areas. There are also location-specific benefits to the technique, such as eliminating the need for tarsorrhaphy in the periorbital area, allowing immediate food intake after surgery in the perioral area, and permitting less invasive fixation in the digits. The purpose of this study was to clarify its benefits and applicable locations. We reviewed 22 postburn patients who underwent skin graft reconstruction with the external wire frame method at our institution from December 2012 through September 2016. Details of the surgical technique and individual reports are also discussed. Of the 22 cases, 15 (68%) were split-thickness skin grafts and 7 (32%) were full-thickness skin grafts. Five cases (23%) involved periorbital reconstruction, 5 (23%) perioral reconstruction, 2 (9%) lower limb reconstruction, and 10 (45%) digital reconstruction. Complete (100%) survival of the skin graft was attained in all cases. No complications were observed. Drawing on 30 years of combined experience, we summarize practical recommendations for successful graft survival, with an emphasis on the locations of application.
Comparison of analysis methods for airway quantification
NASA Astrophysics Data System (ADS)
Odry, Benjamin L.; Kiraly, Atilla P.; Novak, Carol L.; Naidich, David P.
2012-03-01
Diseased airways have been known for several years as a possible contributing factor to airflow limitation in Chronic Obstructive Pulmonary Diseases (COPD). Quantification of disease severity through the evaluation of airway dimensions - wall thickness and lumen diameter - has gained increased attention, thanks to the availability of multi-slice computed tomography (CT). Novel approaches have focused on automated measurement as a faster and more objective alternative to the visual assessment routinely employed in the clinic. Since the Full-Width Half-Maximum (FWHM) method of airway measurement was introduced two decades ago [1], several new techniques for quantifying airways have been detailed in the literature, but no approach has truly become a standard for such analysis. Our own research group has presented two alternative approaches for determining airway dimensions, one involving a minimum path and the other active contours [2, 3]. With an increasing number of techniques dedicated to the same goal, we decided to take a step back and analyze the differences among these methods. We consequently put to the test our two methods of analysis and the FWHM approach. We first measured a set of 5 airways from a phantom of known dimensions. Then we compared measurements from the three methods to those of two independent readers, performed on 35 airways in 5 patients. We elaborate on the differences between the approaches and draw conclusions about which can be considered best.
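For reference, the FWHM baseline that the newer techniques are compared against can be sketched in a few lines: take an intensity profile across the airway wall and measure the width where it crosses half of its peak. The Gaussian test profile is only a stand-in for a real CT ray profile:

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a single-peaked 1-D profile
    (baseline assumed ~0), with linear interpolation at the crossings."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = p >= half
    i = int(np.argmax(above))                      # first sample above half max
    j = len(p) - 1 - int(np.argmax(above[::-1]))   # last sample above half max
    left = i - (p[i] - half) / (p[i] - p[i - 1]) if i > 0 else float(i)
    right = j + (p[j] - half) / (p[j] - p[j + 1]) if j < len(p) - 1 else float(j)
    return (right - left) * dx

# Sanity check against a Gaussian, whose FWHM is 2*sqrt(2*ln 2)*sigma.
x = np.linspace(-10.0, 10.0, 2001)
sigma = 1.5
profile = np.exp(-x ** 2 / (2.0 * sigma ** 2))
width = fwhm(profile, dx=x[1] - x[0])
```

The known weakness of this estimator on CT data is that partial-volume blurring shifts the half-maximum crossings, which is precisely what motivates the minimum-path and active-contour alternatives compared in the study.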
Method for taking into account hard-photon emission in four-fermion processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aleksejevs, A. G., E-mail: aaleksejevs@swgc.mun.ca; Barkanova, S. G., E-mail: svetlana.barkanova@acadiau.ca; Zykunov, V. A., E-mail: vladimir.zykunov@cern.ch
2016-01-15
A method for taking into account hard-photon emission in four-fermion processes proceeding in the s channel is described. The application of this method is exemplified by numerically estimating one-loop electroweak corrections to observables (cross sections and asymmetries) of the reaction e⁻e⁺ → μ⁻μ⁺(γ) involving longitudinally polarized electrons and proceeding at energies below the Z-resonance energy.
On a more rigorous gravity field processing for future LL-SST type gravity satellite missions
NASA Astrophysics Data System (ADS)
Daras, I.; Pail, R.; Murböck, M.
2013-12-01
In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned for the LL-SST measurement link, where the traditional K-band microwave instrument with 1 μm accuracy will be complemented by an inter-satellite ranging instrument with accuracy of a few nanometers. This study focuses on the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the accuracies they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve that by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor in taking full advantage of the new-generation sensors that future satellite missions will carry. We have therefore created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of the system errors that were present in standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions.
As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in assessing error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and in their consistent stochastic modeling within the adjustment process.
Teaching Sexual History-Taking Skills Using the Sexual Events Classification System
ERIC Educational Resources Information Center
Fidler, Donald C.; Petri, Justin Daniel; Chapman, Mark
2010-01-01
Objective: The authors review the literature about educational programs for teaching sexual history-taking skills and describe novel techniques for teaching these skills. Methods: Psychiatric residents enrolled in a brief sexual history-taking course that included instruction on the Sexual Events Classification System, feedback on residents'…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-25
... Marine Mammals Incidental to Specified Activities; Seabird and Pinniped Research Activities in Central... forth the permissible methods of taking, other means of effecting the least practicable adverse impact... the taking by harassment, of marine mammals incidental to conducting seabird and pinniped research...
The Association of Childhood Personality on Sexual Risk Taking during Adolescence
ERIC Educational Resources Information Center
Atkins, Robert
2008-01-01
Background: Sexual risk taking during adolescence such as failure to use contraception or condoms is associated with premature parenthood and high rates of sexually transmitted infection. The relation of childhood personality to sexual risk taking during adolescence has been largely unexplored. Methods: Using data collected from participants in…
How to use an inhaler - with spacer
... MDIs) usually have 3 parts: A mouthpiece A cap that goes over the mouthpiece A canister full ... Take the cap off the inhaler and spacer. Shake the inhaler hard. Attach the spacer to the inhaler. If you have ...
2018-01-31
California's NASA Armstrong Flight Research Center photographer Carla Thomas takes photos on January 31 of the rare opportunity to capture a supermoon, a blue moon and a lunar eclipse at the same time. A supermoon occurs when the Moon is closest to Earth in its orbit, appearing about 14 percent brighter than usual. As the second full moon of the month, this moon is also commonly known as a blue moon, though it will not be blue in appearance. The super blue moon will pass through Earth's shadow and take on a reddish tint, known as a blood moon. This total lunar eclipse occurs when the Sun, Earth, and a full moon form a near-perfect lineup in space, and the Moon passes directly behind the Earth into its umbra (shadow).
Shot-noise limited throughput of soft x-ray ptychography for nanometrology applications
NASA Astrophysics Data System (ADS)
Koek, Wouter; Florijn, Bastiaan; Bäumer, Stefan; Kruidhof, Rik; Sadeghian, Hamed
2018-03-01
Due to its potential for high resolution and three-dimensional imaging, soft x-ray ptychography has received interest for nanometrology applications. We have analyzed the measurement time per unit area when using soft x-ray ptychography for various nanometrology applications, including mask inspection and wafer inspection, and are thus able to predict (order of magnitude) throughput figures. Here we show that for a typical measurement system, using a typical sampling strategy, and when aiming for 10-15 nm resolution, it is expected that a wafer-based topology (2.5D) measurement takes approximately 4 minutes per μm², and a full three-dimensional measurement takes roughly 6 hours per μm². Due to their much higher reflectivity, EUV masks can be measured considerably faster; a measurement speed of 0.1 seconds per μm² is expected. However, such speeds do not allow for full wafer or mask inspection at industrially relevant throughput.
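The conclusion about industrial throughput follows from simple arithmetic; a quick check under our own assumption of a 300 mm wafer (the per-area rates are the ones quoted above):

```python
import math

wafer_area_um2 = math.pi * 150_000.0 ** 2   # 300 mm wafer: ~7.07e10 um^2
t_2p5d_s = wafer_area_um2 * 4 * 60          # 4 minutes per um^2 (2.5D rate)
years_2p5d = t_2p5d_s / (365.25 * 24 * 3600)
t_mask_days = wafer_area_um2 * 0.1 / 86400  # even the fast 0.1 s/um^2 rate
print(f"2.5D full wafer: ~{years_2p5d:.0f} years")
print(f"fast-rate full wafer: ~{t_mask_days:.0f} days")
```

Hundreds of thousands of years at the 2.5D rate, and tens of thousands of days even at the fast mask rate: full-area inspection is indeed out of reach.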
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-15
... of the species or stock(s) for subsistence uses (where relevant), and if the permissible methods of... information regarding the purpose of the research is contained in the Notice of Proposed IHA (77 FR 50990... sampling methods to monitor rocky intertidal algal and invertebrate species abundances (see Figure 2 in...
Hybrid Weighted Minimum Norm Method: a new LORETA-based method to solve the EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper proposes a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to activate synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly centralized. We take this prior knowledge as a prerequisite for developing the EEG inverse solution, without assuming other characteristics of the solution, to realize the most common 3D EEG reconstruction map. The proposed algorithm combines the advantages of LORETA, a low-resolution method that emphasizes localization, and FOCUSS, a high-resolution method that emphasizes separability. The method remains within the framework of the weighted minimum norm method. The key step is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism, and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using the information in the previous one, and repeat this process until the solutions from the last two estimation steps remain unchanged.
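The reweighted minimum-norm recursion described above can be sketched in a few lines of numpy as a FOCUSS-style loop. The paper's actual weighting matrix also incorporates a smoothness operator, competition mechanism, and learning algorithm, which this toy omits, and the lead-field matrix here is random rather than a real head model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 64
L = rng.standard_normal((n_sensors, n_sources))  # toy lead-field matrix
x_true = np.zeros(n_sources)
x_true[[10, 40]] = [2.0, -1.5]                   # two sparse, focal sources
b = L @ x_true                                   # noise-free sensor data

x = np.ones(n_sources)                           # smooth initial estimate
for _ in range(30):
    W = np.diag(np.abs(x))                       # reweight by previous solution
    G = L @ W @ W @ L.T                          # W is diagonal, so W = W.T
    x = W @ W @ L.T @ np.linalg.pinv(G) @ b      # weighted minimum-norm update

print(np.flatnonzero(np.abs(x) > 1e-6))          # surviving source indices
```

Each pass solves a weighted minimum-norm problem, and the reweighting concentrates the solution onto the focal sources while keeping the data fit exact.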
A deep learning-based multi-model ensemble method for cancer prediction.
Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong
2018-01-01
Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, increasing progress in cancer prediction has been made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods that can successfully distinguish cancer patients from healthy persons is of great current interest. However, among the classification methods applied to cancer prediction so far, no single method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers: Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction.
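The overall structure, base classifiers whose outputs feed a learned combiner, can be sketched on synthetic data. Here a least-squares linear combiner stands in for the authors' deep network, single-feature thresholds stand in for the five classifiers, and the random features stand in for gene expression:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 600, 20
X = rng.standard_normal((n, d))                 # synthetic "expression" data
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] > 0).astype(float)

def base_scores(X):
    # three weak base classifiers: single-feature decision scores
    return np.stack([X[:, 0], X[:, 1], -X[:, 2]], axis=1)

tr, va = slice(0, 400), slice(400, None)
S_tr, S_va = base_scores(X[tr]), base_scores(X[va])

# Meta-learner: least-squares weights over base-model scores
# (a linear stand-in for the paper's deep ensembling network)
w, *_ = np.linalg.lstsq(S_tr, 2 * y[tr] - 1, rcond=None)
acc = lambda p: (p == y[va]).mean()
acc_ens = acc((S_va @ w > 0).astype(float))
acc_one = acc((S_va[:, 0] > 0).astype(float))
print(acc_ens, acc_one)  # the learned combination should beat one model
```

The point carried over from the abstract is the architecture: base-model outputs become features for a second-stage learner, which weighs them rather than voting.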
An Intuitionistic Multiplicative ORESTE Method for Patients’ Prioritization of Hospitalization
Zhang, Cheng; Wu, Xingli; Wu, Di; Luo, Li; Herrera-Viedma, Enrique
2018-01-01
The tension brought about by sickbeds is a common and intractable issue in public hospitals in China due to the large population. Assigning the order of hospitalization of patients is difficult because of complex patient information such as disease type, emergency degree, and severity. It is critical to rank the patients taking full account of various factors. However, most of the evaluation criteria for hospitalization are qualitative, and the classical ranking method cannot derive the detailed relations between patients based on these criteria. Motivated by this, a comprehensive multiple criteria decision making method named the intuitionistic multiplicative ORESTE (organisation, rangement et synthèse de données relationnelles, in French) was proposed to handle the problem. The subjective and objective weights of criteria were considered in the proposed method. To do so, first, considering the vagueness of human perceptions towards the alternatives, an intuitionistic multiplicative preference relation model is applied to represent the experts’ preferences over the pairwise alternatives with respect to the predetermined criteria. Then, a correlation coefficient-based weight determining method is developed to derive the objective weights of criteria. This method can overcome the biased results caused by highly-related criteria. Afterwards, we improved the general ranking method, ORESTE, by introducing a new score function which considers both the subjective and objective weights of criteria. An intuitionistic multiplicative ORESTE method was then developed and further highlighted by a case study concerning the patients’ prioritization. PMID:29673212
Chung, Paul J.; Elliott, Marc N.; Garfield, Craig F.; Vestal, Katherine D.; Klein, David J.
2009-01-01
Objectives. We examined the perceived effects of leave from work among employed parents of children with special health care needs. Methods. Telephone interviews were conducted from November 2003 to January 2004 with 585 parents who had missed 1 or more workdays for their child's illness in the previous year. Results. Most parents reported positive effects of leave on their child's physical (81%) and emotional (85%) health; 57% reported a positive effect on their own emotional health, although 24% reported a negative effect. Most parents reported no effect (44%) or a negative effect (42%) on job performance; 73% reported leave-related financial problems. In multivariate analyses, parents receiving full pay during leave were more likely than were parents receiving no pay to report positive effects on child physical (odds ratio [OR] = 1.85) and emotional (OR = 1.68) health and parent emotional health (OR = 1.70), and were less likely to report financial problems (OR = 0.20). Conclusions. Employed parents believed that leave-taking benefited the health of their children with special health care needs and their own emotional health, but compromised their job performance and finances. Parents who received full pay reported better consequences across the board. Access to paid leave, particularly with full pay, may improve parent and child outcomes. PMID:19150905
Kafkas, Şenay; Kim, Jee-Hyub; Pi, Xingjun; McEntyre, Johanna R
2015-01-01
In this study, we present an analysis of data citation practices in full text research articles and their corresponding supplementary data files, made available in the Open Access set of articles from Europe PubMed Central. Our aim is to investigate whether supplementary data files should be considered as a source of information for integrating the literature with biomolecular databases. Using text-mining methods to identify and extract a variety of core biological database accession numbers, we found that the supplemental data files contain many more database citations than the body of the article, and that those citations often take the form of a relatively small number of articles citing large collections of accession numbers in text-based files. Moreover, citation of value-added databases derived from submission databases (such as Pfam, UniProt or Ensembl) is common, demonstrating the reuse of these resources as datasets in themselves. All the database accession numbers extracted from the supplementary data are publicly accessible from http://dx.doi.org/10.5281/zenodo.11771. Our study suggests that supplementary data should be considered when linking articles with data, in curation pipelines, and in information retrieval tasks in order to make full use of the entire research article. These observations highlight the need to improve the management of supplemental data in general, in order to make this information more discoverable and useful.
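Extraction of this kind is typically prototyped with regular expressions over the article and supplement text. A sketch using the published UniProt accession pattern and the Pfam PFxxxxx format (simplified patterns for illustration, not the authors' actual text-mining pipeline):

```python
import re

# UniProt accession pattern (from the UniProt documentation) and the
# Pfam family accession format, both with word boundaries.
UNIPROT = (r"\b(?:[OPQ][0-9][A-Z0-9]{3}[0-9]"
           r"|[A-NR-Z][0-9](?:[A-Z][A-Z0-9]{2}[0-9]){1,2})\b")
PFAM = r"\bPF\d{5}\b"

text = ("Domain PF00069 was found in P12345 and Q9Y6K9; "
        "see also the Ensembl gene described in the supplement.")
print(re.findall(PFAM, text))      # ['PF00069']
print(re.findall(UNIPROT, text))   # ['P12345', 'Q9Y6K9']
```

Running such patterns over both the article body and its supplementary files is what reveals the skew toward supplement-hosted accession numbers reported above.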
Han, Hyun Ho; Jun, Daiwon; Moon, Suk-Ho; Kang, In Sook; Kim, Min Cheol
2016-01-01
For skin defects caused by full-thickness burns, trauma, or tumor excision, skin grafting is one of the most convenient and useful treatment methods, and graft fixation is an important part of it. This study was performed to compare the effectiveness of skin graft fixation between high-concentration fibrin sealant and sutures. There have been numerous studies using fibrin sealant for graft fixation, but they utilized slow-clotting fibrin sealant containing less than 10 IU/mL thrombin. Twenty-five patients underwent split-thickness skin grafting using fast-clotting fibrin sealant containing 400 IU/mL thrombin (group I), while 30 patients underwent grafting using sutures (group II). Rates of hematoma/seroma formation, graft dislocation, graft necrosis, and graft take were investigated postoperatively. The graft surface area was calculated using ImageJ software (National Institutes of Health, Bethesda, MD, USA). After 5 days, the rates of hematoma/seroma formation and graft dislocation were 7.84% and 1.29% in group I, and 9.55% and 1.45% in group II, respectively. After 30 days, the rates of graft necrosis and graft take were 1.86% and 98.14% in group I, and 4.65% and 95.35% in group II. Undiluted fibrin sealant showed significantly superior results for all rates (p < 0.05) except graft dislocation. When high-concentration fast-clotting fibrin sealant was applied to skin grafts without dilution, no difficulty was experienced during surgery. The sealant showed superior results compared with sutures and had an excellent graft take rate. Level of evidence: II.
Henderson, Emily
2015-04-01
Obesity is a top-priority global health issue; however, a clear way to address obesity in primary care is not yet in view. To conduct a meta-ethnography of patient and primary care practitioner perspectives of roles and responsibilities in how to address obesity in the UK, to inform evidence-based services that are acceptable to, and appropriate for, patients and practitioners. Qualitative synthesis applying meta-ethnographic methods according to the Noblit and Hare monograph. Database searches in MEDLINE(®), Social Sciences Citation Index(®), CINAHL, and Health Management Information Consortium were limited to 1997-2012 to examine recent perspectives. Full articles of practitioner and/or patient perspectives on obesity services in primary care were reviewed, and included semi-structured or unstructured interviews and focus groups, and participant observations. Nine studies were synthesised with perspectives from patients (n = 105) and practitioners (n = 144). Practitioners believe that patients are responsible for obesity, and that primary care should not help, or is poorly equipped to do so. Patients 'take responsibility' by 'blaming' themselves, but feel that practitioners should demonstrate more leadership. The empowerment of patients to access health services is reliant on the empowerment of practitioners to take an unambiguous position. Primary care has the potential either to perpetuate or counter obesity-related stigma. There needs to be a firm decision as to what role primary care will take in the prevention and treatment of obesity. To remain ambiguous runs the risk of losing patients' confidence and adding to a growing sense of futility. © British Journal of General Practice 2015.
Neugebauer, E A M; Wilkinson, R C; Kehlet, H; Schug, S A
2007-07-01
Many patients still suffer severe acute pain in the postoperative period. Although guidelines for treating acute pain are widely published and promoted, most do not consider procedure-specific differences in pain experienced or in techniques that may be most effective and appropriate for different surgical settings. The procedure-specific postoperative pain management (PROSPECT) Working Group provides procedure-specific recommendations for postoperative pain management together with supporting evidence from systematic literature reviews and related procedures at http://www.postoppain.org. The methodology for PROSPECT reviews was developed and refined by discussion of the Working Group, and it adapts existing methods for the formulation of consensus recommendations to the specific requirements of PROSPECT. To formulate PROSPECT recommendations, we use a methodology that takes into account study quality and the source and level of evidence, and we use recognized methods for achieving group consensus, thus reducing potential bias. The new methodology is first applied in full for the 2006 update of the PROSPECT review of postoperative pain management for laparoscopic cholecystectomy. Transparency in PROSPECT processes allows users to be fully aware of any limitations of the evidence and recommendations, thereby allowing for appropriate decisions in their own practice settings.
2D photonic crystal complete band gap search using a cyclic cellular automaton refinement
NASA Astrophysics Data System (ADS)
González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.
2014-11-01
We present a refinement method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided by a heuristic evolutionary method called differential evolution (DE), used to perform an ordered search of full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is proposed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in such an array. A block-iterative frequency-domain method was used to compute the FPBGs on a PC, when present. DE has proved to be useful in combinatorial problems, and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of this algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring previous information on suboptimal configurations, and we made a statistical study of how it is affected by disorder at the borders of the structure, compared with a previous work that uses a genetic algorithm.
Formisano, Elia; De Martino, Federico; Valente, Giancarlo
2008-09-01
Machine learning and pattern recognition techniques are being increasingly employed in functional magnetic resonance imaging (fMRI) data analysis. By taking into account the full spatial pattern of brain activity measured simultaneously at many locations, these methods allow detecting subtle, non-strictly localized effects that may remain invisible to the conventional analysis with univariate statistical methods. In typical fMRI applications, pattern recognition algorithms "learn" a functional relationship between brain response patterns and a perceptual, cognitive or behavioral state of a subject expressed in terms of a label, which may assume discrete (classification) or continuous (regression) values. This learned functional relationship is then used to predict the unseen labels from a new data set ("brain reading"). In this article, we describe the mathematical foundations of machine learning applications in fMRI. We focus on two methods, support vector machines and relevance vector machines, which are respectively suited for the classification and regression of fMRI patterns. Furthermore, by means of several examples and applications, we illustrate and discuss the methodological challenges of using machine learning algorithms in the context of fMRI data analysis.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
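The reorganization the abstract describes can be pictured as turning file-per-timestep grids into node-local per-pixel time series. A toy sketch (the hash-based placement policy is our own choice, not any NASA system's):

```python
import numpy as np

# Toy archive: 8 single-time-step "files" on a 4x4 spatial grid
steps = [np.full((4, 4), t, dtype=float) for t in range(8)]

cube = np.stack(steps)                    # (time, y, x): time now contiguous
n_nodes = 4
shards = {node: {} for node in range(n_nodes)}
for y in range(4):
    for x in range(4):
        node = hash((y, x)) % n_nodes     # toy placement policy
        shards[node][(y, x)] = cube[:, y, x].copy()  # per-pixel time series

# A time-series operation at one pixel now reads one shard, not 8 files
series = next(s[(0, 0)] for s in shards.values() if (0, 0) in s)
print(series.mean())  # 3.5
```

The cost trade-off discussed above comes from exactly this duplication: the sharded copy makes long-time-series reads local, at the price of storing the data twice.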
Simulation based optimized beam velocity in additive manufacturing
NASA Astrophysics Data System (ADS)
Vignat, Frédéric; Béraud, Nicolas; Villeneuve, François
2017-08-01
Manufacturing good parts with additive technologies relies on melt pool dimensions and temperature, which are controlled by manufacturing strategies often decided on the machine side. Strategies are built on the beam path and a variable energy input. Beam paths are often a mix of contour and hatching strategies filling the contours at each slice. The energy input depends on beam intensity and speed and is determined from simple thermal models to control melt pool dimensions and temperature and ensure porosity-free material. These models take into account variations in the thermal environment, such as overhanging surfaces or back-and-forth hatching paths. However, not all situations are correctly handled, and precision is limited. This paper proposes a new method to determine the energy input from a full built-chamber 3D thermal simulation. Using the results of the simulation, the energy is modified to keep the melt pool temperature in a predetermined range. The paper first presents an experimental method to determine the optimal temperature range. In a second part, the method to optimize the beam speed from the simulation results is presented. Finally, the optimized beam path is tested in the EBM machine, and built parts are compared with parts built with an ordinary beam path.
NASA Astrophysics Data System (ADS)
Nataf, Pierre; Mila, Frédéric
2018-04-01
We develop an efficient method to perform density matrix renormalization group simulations of the SU(N) Heisenberg chain with open boundary conditions, taking full advantage of the SU(N) symmetry of the problem. This method is an extension of the method previously developed for exact diagonalizations and relies on a systematic use of the basis of standard Young tableaux. Concentrating on the model with the fundamental representation at each site (i.e., one particle per site in the fermionic formulation), we have benchmarked our results for the ground-state energy up to N = 8 and up to 420 sites by comparing them with Bethe ansatz results on open chains, for which we have derived and solved the Bethe ansatz equations. The agreement for the ground-state energy is excellent for SU(3) (12 digits). It decreases with N, but it is still satisfactory for N = 8 (six digits). Central charges c are also extracted from the entanglement entropy using the Calabrese-Cardy formula and agree with the theoretical values expected from the SU(N) level-1 Wess-Zumino-Witten conformal field theories.
Determination of vessel cross-sectional area by thresholding in Radon space
Gao, Yu-Rong; Drew, Patrick J
2014-01-01
The cross-sectional area of a blood vessel determines its resistance, and thus is a regulator of local blood flow. However, the cross-sections of penetrating vessels in the cortex can be non-circular, and dilation and constriction can change the shape of the vessels. We show that observed vessel shape changes can introduce large errors into flux calculations when a single diameter measurement is used. Because of these shape changes, typical diameter measurement approaches, such as the full-width at half-maximum (FWHM), that depend on a single diameter axis will generate erroneous results, especially when calculating flux. Here, we present an automated method, thresholding in Radon space (TiRS), for determining the cross-sectional area of a convex object, such as a penetrating vessel observed with two-photon laser scanning microscopy (2PLSM). The image is Radon transformed and thresholded in Radon space; the thresholded image is then transformed back to image space, and contiguous pixels are segmented. The TiRS method is analogous to taking the FWHM across multiple axes and is more robust to noise and shape changes than FWHM and thresholding methods. We demonstrate the superior precision of the TiRS method with in vivo 2PLSM measurements of vessel diameter. PMID:24736890
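The size of the single-axis error is easy to quantify: for an elliptical cross-section, an area inferred from one diameter alone can be off by a factor of two. A numeric illustration (the 2:1 axis ratio is our assumption, not a measured vessel):

```python
import math

a, b = 10.0, 5.0                      # semi-axes of a 2:1 ellipse (assumed)
true_area = math.pi * a * b           # actual cross-sectional area

# Area inferred from a single diameter, assuming a circular cross-section
area_major = math.pi * a ** 2         # diameter taken along the major axis
area_minor = math.pi * b ** 2         # diameter taken along the minor axis

print(round(area_major / true_area, 2))  # 2.0: 100% overestimate
print(round(area_minor / true_area, 2))  # 0.5: 50% underestimate
```

Since flux scales with area, a dilation that changes shape rather than size can masquerade as a large flux change under a single-axis measurement, which is the failure mode TiRS is designed to avoid.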
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Astrophysics Data System (ADS)
Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.
2016-12-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Statistical segmentation of multidimensional brain datasets
NASA Astrophysics Data System (ADS)
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) exclusion of background and skull voxels using threshold-based region-growing techniques with fully automated seed selection. 2) Expectation-maximization algorithms are used to estimate the probability density function (PDF) of the remaining voxels, which are assumed to be mixtures of Gaussians. These voxels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov random field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
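Stage 2 is standard EM for a Gaussian mixture; the sketch below uses full covariance matrices, the point emphasized above, on synthetic two-channel (T1/T2-like) intensities, with two classes instead of three for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two synthetic tissue classes with correlated two-channel intensities
A = rng.multivariate_normal([1, 1], [[1.0, 0.8], [0.8, 1.0]], 500)
B = rng.multivariate_normal([5, 3], [[1.0, -0.5], [-0.5, 1.0]], 500)
X = np.vstack([A, B])

def gauss(X, mu, S):
    """Density of a 2D Gaussian with full covariance S."""
    d = X - mu
    z = np.einsum("ni,ij,nj->n", d, np.linalg.inv(S), d)
    return np.exp(-0.5 * z) / (2 * np.pi * np.sqrt(np.linalg.det(S)))

mu = np.array([[0.0, 0.0], [4.0, 4.0]])   # crude initial means
S = np.array([np.eye(2), np.eye(2)])      # full covariance matrices
w = np.array([0.5, 0.5])
for _ in range(50):
    R = np.stack([w[k] * gauss(X, mu[k], S[k]) for k in range(2)], axis=1)
    R /= R.sum(axis=1, keepdims=True)              # E-step: responsibilities
    Nk = R.sum(axis=0)
    mu = (R.T @ X) / Nk[:, None]                   # M-step: means
    for k in range(2):
        d = X - mu[k]
        S[k] = (R[:, k, None] * d).T @ d / Nk[k]   # M-step: full covariances
    w = Nk / len(X)

print(np.sort(np.round(mu[:, 0], 1)))  # near the true class means 1 and 5
```

With correlated channels, the estimated off-diagonal covariance terms are what a diagonal-only model would discard, which is the advantage the paper claims for the full-covariance formulation.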
Design and Manufacture of Structurally Efficient Tapered Struts
NASA Technical Reports Server (NTRS)
Brewster, Jebediah W.
2009-01-01
Composite materials offer the potential of weight savings for numerous spacecraft and aircraft applications. A composite strut is just one integral part of the node-to-node system, and optimization of the strut and node assembly is needed to take full advantage of the benefits of composite materials. Lockheed Martin designed and manufactured a very lightweight one-piece composite tapered strut that is fully representative of a full-scale flight article. In addition, the team designed and built a prototype of the node and end-fitting system that will effectively integrate and work with the full-scale flight articles.
Correlation Tests of the Ditching Behavior of an Army B-24D Airplane and a 1/16-size Model
NASA Technical Reports Server (NTRS)
Jarvis, George A.; Fisher, Lloyd J.
1946-01-01
Behaviors of both model and full-scale airplanes were ascertained by making visual observations, by recording time histories of decelerations, and by taking motion-picture records of ditchings. Results are presented in the form of sequence photographs and time-history curves for attitudes, vertical and horizontal displacements, and longitudinal decelerations. Time-history curves for attitudes and horizontal and vertical displacements for model and full-scale tests were in agreement; maximum longitudinal decelerations for the two ditchings did not occur at the same part of the run; the full-scale maximum deceleration was 50 percent greater.
A Global Optimization Method to Calculate Water Retention Curves
NASA Astrophysics Data System (ADS)
Maggi, S.; Caputo, M. C.; Turturro, A. C.
2013-12-01
Water retention curves (WRC) have a key role in the hydraulic characterization of soils and rocks. The behaviour of the medium is defined by relating the unsaturated water content to the matric potential. The experimental determination of WRCs requires an accurate and detailed measurement of the dependence of matric potential on water content, a time-consuming and error-prone process, in particular for rocky media. A complete experimental WRC needs at least a few tens of data points, distributed more or less uniformly from full saturation to oven dryness. Since each measurement requires waiting to reach steady-state conditions (between a few tens of minutes for soils and several hours or days for rocks or clays), the whole process can take a few months. The experimental data are fitted to the most appropriate parametric model, such as the widely used models of Van Genuchten, Brooks and Corey, and Rossi-Nimmo, to obtain the analytic WRC. We present here a new method for the determination of the parameters that best fit the models to the available experimental data. The method is based on differential evolution, an evolutionary computation algorithm particularly useful for multidimensional real-valued global optimization problems. With this method it is possible to strongly reduce the number of measurements necessary to optimize the model parameters that accurately describe the WRC of the samples, making it possible to decrease the time needed to adequately characterize the medium. In the present work, we have applied our method to calculate the WRCs of sedimentary carbonate rocks of marine origin, belonging to the 'Calcarenite di Gravina' Formation (Middle Pliocene - Early Pleistocene) and coming from two different quarry districts in Southern Italy. [Figure: WRC curves calculated using the Van Genuchten model by simulated annealing (dashed curve) and differential evolution (solid curve), using 10 experimental data points randomly extracted from the full experimental dataset; simulated annealing is not able to find the optimal solution with this reduced data set.]
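A fit of this kind can be prototyped with SciPy's differential-evolution optimizer; the van Genuchten closed form is standard, but the synthetic 10-point dataset and parameter bounds below are our illustrations, not values from the study:

```python
import numpy as np
from scipy.optimize import differential_evolution

def van_genuchten(h, theta_r, theta_s, alpha, n):
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)       # effective saturation
    return theta_r + (theta_s - theta_r) * Se

true = (0.05, 0.40, 0.03, 2.0)                  # theta_r, theta_s, alpha, n
h = np.logspace(0, 4, 10)                       # 10 suction values (cm)
theta_obs = van_genuchten(h, *true)             # synthetic "measurements"

def sse(p):
    # objective: sum of squared residuals against the observed points
    return float(np.sum((van_genuchten(h, *p) - theta_obs) ** 2))

bounds = [(0.0, 0.2), (0.3, 0.6), (1e-3, 0.5), (1.1, 5.0)]
res = differential_evolution(sse, bounds, seed=0)
print(res.x, res.fun)  # recovered parameters; SSE essentially zero
```

With only 10 noise-free points, the global optimizer recovers the generating parameters, which mirrors the paper's claim that far fewer measurements can suffice when the fit is done globally.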
Uehara, Shota; Tanaka, Shigenori
2017-04-24
Protein flexibility is a major hurdle in current structure-based virtual screening (VS). In spite of the recent advances in high-performance computing, protein-ligand docking methods still demand tremendous computational cost to take into account the full degree of protein flexibility. In this context, ensemble docking has proven its utility and efficiency for VS studies, but it still needs a rational and efficient method to select and/or generate multiple protein conformations. Molecular dynamics (MD) simulations are useful to produce distinct protein conformations without abundant experimental structures. In this study, we present a novel strategy that makes use of cosolvent-based molecular dynamics (CMD) simulations for ensemble docking. By mixing small organic molecules into a solvent, CMD can stimulate dynamic protein motions and induce partial conformational changes of binding pocket residues appropriate for the binding of diverse ligands. The present method has been applied to six diverse target proteins and assessed by VS experiments using many actives and decoys of DEKOIS 2.0. The simulation results have revealed that the CMD is beneficial for ensemble docking. Utilizing cosolvent simulation allows the generation of druggable protein conformations, improving the VS performance compared with the use of a single experimental structure or ensemble docking by standard MD with pure water as the solvent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ravindra, M.K.; Banon, H.
1992-07-01
In this report, the scoping quantification procedures for external events in probabilistic risk assessments of nuclear power plants are described. External event analysis in a PRA has three important goals: (1) the analysis should be complete in that all events are considered; (2) by following some selected screening criteria, the more significant events are identified for detailed analysis; (3) the selected events are analyzed in depth by taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event initiated accident sequences, etc. Based on the above goals, external event analysis may be considered as a three-stage process: Stage I: Identification and Initial Screening of External Events; Stage II: Bounding Analysis; Stage III: Detailed Risk Analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods for external events not covered in detail in the NRC's PRA Procedures Guide are provided. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical release are described.
Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun
2017-01-01
This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431
Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan
2017-03-01
Accurate location computation for a beacon is an important factor in the reliability of satellite optical communications. However, location precision is generally limited by the resolution of the CCD, so improving the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, in terms of its characteristics, the beacon is divided into several parts according to the gray gradients. Afterward, different numbers of interpolation points and different interpolation methods are applied in the interpolation area; we calculate the centroid position after interpolation and choose the best strategy according to the algorithm. We call this method the gradient segmentation interpolation (GSI) algorithm. To take full advantage of the pixels in the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by simulation. Finally, an experiment was conducted to verify the GSI and SSW algorithms. The results indicate that both algorithms improve locating accuracy beyond that of the traditional gray centroid method. These approaches help to greatly improve the location precision of a beacon in satellite optical communications.
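For reference, the traditional gray (intensity-weighted) centroid that the GSI and SSW algorithms improve upon can be sketched as follows; the synthetic Gaussian beacon spot and its sub-pixel center are illustrative assumptions, not data from the paper.

```python
import numpy as np

def gray_centroid(img):
    """Traditional gray centroid: intensity-weighted mean pixel position."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic beacon spot: a Gaussian centered between pixels (illustrative)
ys, xs = np.mgrid[0:32, 0:32]
spot = np.exp(-((xs - 15.3) ** 2 + (ys - 16.7) ** 2) / (2 * 3.0 ** 2))
cx, cy = gray_centroid(spot)
```

On a clean, well-sampled spot the gray centroid already achieves sub-pixel accuracy; the paper's segmentation-and-interpolation refinements target the noisy, resolution-limited regime.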
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, E., E-mail: emmanuel.brun@esrf.fr; Grandl, S.; Sztrókay-Gaul, A.
Purpose: Phase contrast computed tomography has emerged as an imaging method, which is able to outperform present day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. Methods: The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure’s possible applications. Results: A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. Conclusions: The authors demonstrate that applying the watershed viscous transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques, will represent a valuable multistep procedure to be used in future medical diagnostic applications.
Hutchins, Robert; Pignone, Michael P; Sheridan, Stacey L; Viera, Anthony J
2015-01-01
Objectives The utility value attributed to taking pills for prevention can have a major effect on the cost-effectiveness of interventions, but few published studies have systematically quantified this value. We sought to quantify the utility value of taking pills used for prevention of cardiovascular disease (CVD). Design Cross-sectional survey. Setting Central North Carolina. Participants 708 healthcare employees aged 18 years and older. Primary and secondary outcomes Utility values for taking 1 pill/day, assessed using time trade-off, modified standard gamble and willingness-to-pay methods. Results Mean age of respondents was 43 years (19–74). The majority of the respondents were female (83%) and Caucasian (80%). Most (80%) took at least 2 pills/day. The mean utility value for taking 1 pill/day using the time trade-off method was 0.9972 (95% CI 0.9962 to 0.9980). Values derived from the standard gamble and willingness-to-pay methods were 0.9967 (95% CI 0.9954 to 0.9979) and 0.9989 (95% CI 0.9986 to 0.9991), respectively. Utility values varied little across characteristics such as age, sex, race, education level or number of pills taken per day. Conclusions The utility value of taking pills daily in order to prevent an adverse CVD health outcome is approximately 0.997. PMID:25967985
Characteristic functions of quantum heat with baths at different temperatures
NASA Astrophysics Data System (ADS)
Aurell, Erik
2018-06-01
This paper is about quantum heat defined as the change in energy of a bath during a process. The presentation takes into account recent developments in classical strong-coupling thermodynamics and addresses a version of quantum heat that satisfies quantum-classical correspondence. The characteristic function and the full counting statistics of quantum heat are shown to be formally similar. The paper further shows that the method can be extended to more than one bath, e.g., two baths at different temperatures, which opens up the prospect of studying correlations and heat flow. The paper extends earlier results on the expected quantum heat in the setting of one bath [E. Aurell and R. Eichhorn, New J. Phys. 17, 065007 (2015), 10.1088/1367-2630/17/6/065007; E. Aurell, Entropy 19, 595 (2017), 10.3390/e19110595].
A man-made object detection for underwater TV
NASA Astrophysics Data System (ADS)
Cheng, Binbin; Wang, Wenwu; Chen, Yao
2018-03-01
Automatically searching for objects underwater is a highly challenging task. Usually a forward-looking sonar is used to find the target, initial identification of the target is completed by side-scan sonar, and final confirmation of the target is accomplished by underwater TV. This paper presents an efficient method for automatic extraction of man-made sensitive targets in underwater TV. First, the underwater TV image is simplified by taking full advantage of prior knowledge of the target and the background; then template matching is used for target detection; finally, the target is confirmed by extracting parallel lines on the target contour. The algorithm is formulated for real-time execution on limited-memory commercial off-the-shelf platforms and is capable of detecting objects in underwater TV.
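The template-matching stage can be illustrated with plain normalized cross-correlation, sketched here in NumPy; the toy frame and template are illustrative assumptions, and a production system would use an optimized library routine rather than this double loop.

```python
import numpy as np

def match_template(image, template):
    """Normalized cross-correlation score map over the valid region."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    H, W = image.shape
    scores = np.zeros((H - th + 1, W - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum()) * tn
            scores[y, x] = (wc * t).sum() / denom if denom > 0 else 0.0
    return scores

# Toy frame: a bright rectangular "man-made" patch on a noisy background
rng = np.random.default_rng(1)
frame = rng.random((40, 40)) * 0.2
frame[12:20, 18:28] += 1.0
template = frame[12:20, 18:28].copy()
scores = match_template(frame, template)
peak = np.unravel_index(np.argmax(scores), scores.shape)  # top-left of best match
```

The score peaks at the template's true location; thresholding the score map gives candidate detections for the subsequent parallel-line confirmation step.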
Process-aware EHR BPM systems: two prototypes and a conceptual framework.
Webster, Charles; Copenhaver, Mark
2010-01-01
Systematic methods to improve the effectiveness and efficiency of electronic health record-mediated processes will be key to EHRs playing an important role in the positive transformation of healthcare. Business process management (BPM) systematically optimizes process effectiveness, efficiency, and flexibility. Therefore BPM offers relevant ideas and technologies. We provide a conceptual model based on EHR productivity and negative feedback control that links EHR and BPM domains, describe two EHR BPM prototype modules, and close with the argument that typical EHRs must become more process-aware if they are to take full advantage of BPM ideas and technology. A prediction: Future extensible clinical groupware will coordinate delivery of EHR functionality to teams of users by combining modular components with executable process models whose usability (effectiveness, efficiency, and user satisfaction) will be systematically improved using business process management techniques.
Phase-space methods for the spin dynamics in condensed matter systems
Hurst, Jérôme; Manfredi, Giovanni
2017-01-01
Using the phase-space formulation of quantum mechanics, we derive a four-component Wigner equation for a system composed of spin-1/2 fermions (typically, electrons) including the Zeeman effect and the spin–orbit coupling. This Wigner equation is coupled to the appropriate Maxwell equations to form a self-consistent mean-field model. A set of semiclassical Vlasov equations with spin effects is obtained by expanding the full quantum model to first order in the Planck constant. The corresponding hydrodynamic equations are derived by taking velocity moments of the phase-space distribution function. A simple closure relation is proposed to obtain a closed set of hydrodynamic equations. This article is part of the themed issue ‘Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces’. PMID:28320903
Optimization of engines for a commercial Mach 0.98 transport using advanced turbine cooling methods
NASA Technical Reports Server (NTRS)
Kraft, G. A.; Whitlow, J. B., Jr.
1972-01-01
A study was made of an advanced technology airplane using supercritical aerodynamics. Cruise Mach number was 0.98 at 40,000 feet altitude with a payload of 60,000 pounds and a range of 3000 nautical miles. Separate-flow turbofans were examined parametrically to determine the effect of sea-level-static design turbine-inlet-temperature and noise on takeoff gross weight (TOGW) assuming full-film turbine cooling. The optimum turbine inlet temperature was 2650 F. Two-stage-fan engines, with cruise fan pressure ratio of 2.25, achieved a noise goal of 103.5 EPNdB with today's noise technology, while one-stage-fan engines achieved a noise goal of 98 EPNdB. The takeoff gross weight penalty for using the one-stage fan was 6.2 percent.
Searching for New Earths: Teaching Children How We Seek Distant Planets
NASA Astrophysics Data System (ADS)
Pulliam, C.
2008-06-01
Teaching science to children ages 8-13 can be a great challenge, especially if you lack the resources for a full-blown audio/visual presentation. How do you hold their attention and get them involved? One method is to teach a topic no one else covers at this educational level: something exciting and up-to-the-minute, at the cutting edge of science. We developed an interactive 45-minute presentation to convey the two basic techniques used to locate planets orbiting other stars. Activities allowed children to hunt for their own planets in simulated data sets. We also stimulated their imagination by giving each child a take-home, multicolored marble ``planet'' and asking them to discuss their planet's characteristics. The resulting presentation ``Searching for New Earths'' could be adapted to a variety of educational settings.
Selecting band combinations with thematic mapper data
NASA Technical Reports Server (NTRS)
Sheffield, C. A.
1983-01-01
A problem arises in making color composite images because there are 210 different possible color presentations of TM three-band images. A method is given for reducing those 210 possibilities to a single choice, decided by the statistics of a scene or subscene and taking full account of any correlations that exist between different bands. Instead of using total variance as the measure of information content of a band triplet, the ellipsoid of maximum volume is selected, which discourages selection of bands with high correlation. The band triplet is obtained by computing and ranking the determinants of each 3 x 3 principal submatrix of the original matrix M. After selection of the best triplet, colors are assigned using the actual variances (the diagonal elements of M): green (maximum variance), red (second largest variance), blue (smallest variance).
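The triplet-ranking and color-assignment steps can be sketched as follows; the covariance matrix in the example is synthetic, chosen so that two bands are highly correlated and are therefore penalized by the determinant (ellipsoid-volume) criterion.

```python
import numpy as np
from itertools import combinations

def best_band_triplet(M):
    """Rank 3-band subsets by the determinant of the 3x3 principal
    submatrix of the covariance matrix M (ellipsoid-volume criterion)."""
    n = M.shape[0]
    ranked = sorted(
        ((np.linalg.det(M[np.ix_(idx, idx)]), idx)
         for idx in combinations(range(n), 3)),
        reverse=True,
    )
    return ranked[0][1]

def assign_colors(M, triplet):
    """Green = largest variance, red = second largest, blue = smallest."""
    variances = np.diag(M)[list(triplet)]
    order = np.argsort(variances)[::-1]
    return dict(zip(["green", "red", "blue"], np.array(triplet)[order]))

# Synthetic 4-band covariance: bands 0 and 1 are nearly redundant (r ~ 0.99)
M = np.array([
    [4.00, 3.43, 0.00, 0.00],
    [3.43, 3.00, 0.00, 0.00],
    [0.00, 0.00, 2.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])
triplet = best_band_triplet(M)
colors = assign_colors(M, triplet)
```

Although bands 0 and 1 individually carry the most variance, their near-redundancy collapses the ellipsoid volume, so the criterion picks bands 0, 2, 3 instead.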
ERIC Educational Resources Information Center
Solomon, Brett Johnson; Garibaldi, Mark
2013-01-01
There is an overwhelming disconnect between young adolescent girls and adults in relation to perceptions of risk taking by middle school girls. This mixed-methods study investigates the differences between adult practitioners' and middle school girls' perceptions of risk taking, understanding of consequences, and needs among middle school girls.…
Helicobacter Pylori Infections
... sure he takes the full course of these antibiotics as directed by your pediatrician. They are usually prescribed in combination with drugs called proton pump inhibitors or histamine receptor blockers that interfere with the production of acid in the stomach. What Is the ...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-30
... tsunami on infrastructure and fishing vessels. Some vessels have not been able to resume full time operations since the tsunami and other vessels which sustained damage are taking longer to resume operations...
77 FR 29986 - Savannah River Site Building 235-F Safety
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-21
... seismically-induced full-facility fire are greater than 10 rem offsite and 27,000 rem to the collocated worker... require intrusion into the cells). Take action, as necessary, to ensure that these systems are credited in...
2011-11-25
This view from NASA's Wide-field Infrared Survey Explorer takes in an area of the sky in the constellation Scorpius surrounding Jabbah (whose Arabic name means "the forehead of the scorpion"), an area larger than a grid of eight by eight full moons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby
On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor-joining tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
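The objective that such reordering methods minimize can be sketched as a hop-weighted communication volume; the chain traffic pattern, line-of-nodes topology, and mappings below are illustrative assumptions, not the paper's measured data.

```python
import numpy as np

def mapping_cost(traffic, hops, mapping):
    """Hop-weighted communication volume of a rank-to-node mapping.

    traffic[i, j] -- data exchanged between MPI ranks i and j
    hops[a, b]    -- network distance between allocated nodes a and b
    mapping[i]    -- node assigned to rank i
    """
    m = np.asarray(mapping)
    return float((traffic * hops[np.ix_(m, m)]).sum())

# Toy example: 4 ranks communicating in a chain, nodes on a line (1 hop apart)
n = 4
traffic = np.zeros((n, n))
for i in range(n - 1):
    traffic[i, i + 1] = traffic[i + 1, i] = 1.0
nodes = np.arange(n)
hops = np.abs(nodes[:, None] - nodes[None, :])

cost_default = mapping_cost(traffic, hops, [0, 1, 2, 3])   # contiguous placement
cost_shuffled = mapping_cost(traffic, hops, [0, 2, 1, 3])  # scrambled placement
```

A reordering algorithm searches the space of mappings for one with low cost under this kind of objective; here the contiguous placement beats the scrambled one.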
Delay-and-sum beamforming for direction of arrival estimation applied to gunshot acoustics
NASA Astrophysics Data System (ADS)
Ramos, António L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald
2011-06-01
Sniper positioning systems described in the literature use a two-step algorithm to estimate the sniper's location. First, the shockwave and the muzzle blast acoustic signatures must be detected and recognized, followed by an estimation of their respective direction-of-arrival (DOA). Second, the actual sniper's position is calculated based on the estimated DOA via an iterative algorithm that varies from system to system. The overall performance of such a system, however, is highly compromised when the first step is not carried out successfully. Currently available systems rely on a simple calculation of differences of time-of-arrival to estimate angles-of-arrival. This approach, however, lacks robustness by not taking full advantage of the array of sensors. This paper shows how the delay-and-sum beamforming technique can be applied to estimate the DOA for both the shockwave and the muzzle blast. The method has the twofold advantage of 1) adding an array gain of 10 log10(M), i.e., an increased SNR of 6 dB for a 4-microphone array, which is equivalent to doubling the detection range assuming free-field propagation; and 2) offering improved robustness in handling single- and multi-shot events as well as reflections by taking advantage of the spatial filtering capability.
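A minimal delay-and-sum DOA estimator for a far-field source on a linear array can be sketched as follows; the 4-microphone geometry, sampling rate, and simulated white-noise source are illustrative assumptions (real shockwave/muzzle-blast signatures would require the detection front end described above).

```python
import numpy as np

def das_doa(signals, mic_x, fs, c=343.0):
    """Delay-and-sum DOA estimate for a far-field source on a linear array.

    signals: (M, N) microphone waveforms sampled at fs [Hz]
    mic_x:   microphone positions along the array axis [m]
    Returns the steering angle [deg] that maximizes the summed output power.
    """
    angles = np.arange(-90.0, 91.0)
    powers = []
    for ang in angles:
        # integer-sample delays implied by a plane wave from direction ang
        delays = np.round(mic_x * np.sin(np.deg2rad(ang)) / c * fs).astype(int)
        summed = sum(np.roll(sig, -d) for sig, d in zip(signals, delays))
        powers.append(float((summed ** 2).sum()))
    return angles[int(np.argmax(powers))]

# Simulated 4-microphone array with a source at +30 degrees (illustrative)
fs, c = 48000.0, 343.0
mic_x = np.array([0.0, 0.5, 1.0, 1.5])
rng = np.random.default_rng(0)
base = rng.standard_normal(4096)
true_delays = np.round(mic_x * np.sin(np.deg2rad(30.0)) / c * fs).astype(int)
signals = np.array([np.roll(base, d) for d in true_delays])
estimate = das_doa(signals, mic_x, fs)
```

Steering to the true direction aligns all channels, so the summed power rises by the M^2/M array factor, which is the 10 log10(M) gain quoted in the abstract.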
Hemphill, Ashton S; Shen, Yuecheng; Liu, Yan; Wang, Lihong V
2017-11-27
In biological applications, optical focusing is limited by the diffusion of light, which prevents focusing at depths greater than ∼1 mm in soft tissue. Wavefront shaping extends the depth by compensating for phase distortions induced by scattering and thus allows for focusing light through biological tissue beyond the optical diffusion limit by using constructive interference. However, due to physiological motion, light scattering in tissue is deterministic only within a brief speckle correlation time. In in vivo tissue, this speckle correlation time is on the order of milliseconds, and so the wavefront must be optimized within this brief period. The speed of digital wavefront shaping has typically been limited by the relatively long time required to measure and display the optimal phase pattern. This limitation stems from the low speeds of cameras, data transfer and processing, and spatial light modulators. While binary-phase modulation requiring only two images for the phase measurement has recently been reported, most techniques require at least three frames for the full-phase measurement. Here, we present a full-phase digital optical phase conjugation method based on off-axis holography for single-shot optical focusing through scattering media. By using off-axis holography in conjunction with graphics processing unit based processing, we take advantage of the single-shot full-phase measurement while using parallel computation to quickly reconstruct the phase map. With this system, we can focus light through scattering media with a system latency of approximately 9 ms, on the order of the in vivo speckle correlation time.
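The single-shot, full-phase measurement at the heart of this method can be illustrated with the standard Fourier-domain extraction of the +1 order from an off-axis hologram. This is a minimal sketch, not the authors' GPU implementation: the carrier frequency is assumed known from calibration, and the object phase and filter radius are illustrative.

```python
import numpy as np

def extract_phase(hologram, carrier, radius=0.1):
    """Single-shot phase retrieval from an off-axis hologram.

    carrier: (fy, fx) FFT-index offset of the +1 order (assumed calibrated)
    radius:  low-pass cutoff in cycles/pixel isolating the +1 order
    """
    F = np.fft.fft2(hologram)
    F = np.roll(F, (-carrier[0], -carrier[1]), axis=(0, 1))  # center +1 order
    fy = np.fft.fftfreq(hologram.shape[0])[:, None]
    fx = np.fft.fftfreq(hologram.shape[1])[None, :]
    mask = fy ** 2 + fx ** 2 < radius ** 2                   # reject DC and -1 order
    return np.angle(np.fft.ifft2(F * mask))

# Synthetic hologram: smooth object phase interfering with a tilted reference
H = W = 128
y, x = np.mgrid[0:H, 0:W]
phi = 1.5 * np.sin(2 * np.pi * y / H)            # ground-truth object phase
obj = np.exp(1j * phi)
carrier = (0, 32)                                # 32 carrier fringes across x
ref = np.exp(-2j * np.pi * carrier[1] * x / W)   # tilted plane reference
holo = np.abs(obj + ref) ** 2                    # recorded intensity
phase = extract_phase(holo, carrier)
```

Because one intensity frame encodes the full complex field, the phase map is recovered without the three-frame phase-shifting sequence, which is what keeps the system latency within the speckle correlation time.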
ERIC Educational Resources Information Center
Grover, Anita; Lam, Tai Ning; Hunt, C. Anthony
2008-01-01
We present a simulation tool to aid the study of basic pharmacology principles. By taking advantage of the properties of agent-based modeling, the tool facilitates taking a mechanistic approach to learning basic concepts, in contrast to the traditional empirical methods. Pharmacodynamics is a particular aspect of pharmacology that can benefit from…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-01
... permissible methods of taking, other means of effecting the least practicable adverse impact on the species or... available for public comment (see ADDRESSES) for this IHA. L-DEO, with research funding from the U.S... pattern over an ocean bottom instrument in shallow water. This method is neither practical nor valid in...
NASA Astrophysics Data System (ADS)
Lu, J.; Wakai, K.; Takahashi, S.; Shimizu, S.
2000-06-01
An algorithm for acoustic computed tomography (CT) that takes into account the refraction of sound-wave paths is developed. Incorporating refraction into ordinary CT algorithms based on the Fourier transform is very difficult. In this paper, the least-squares method, which is capable of accounting for the refraction effect, is employed to reconstruct the two-dimensional temperature distribution. The refracted paths are obtained by solving a set of differential equations derived from Fermat's principle and the calculus of variations. Since refraction analysis and reconstruction of the temperature distribution cannot be carried out simultaneously, the problem is solved iteratively. The measurement field is assumed to be circular, with 16 speakers, also serving as receivers, set around it at equal intervals. The algorithm is checked through computer simulation with various temperature distributions. It is shown that the present method, which accounts for refraction, reconstructs temperature distributions with much greater accuracy than methods that neglect the refraction effect.
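The least-squares reconstruction step alone can be sketched as follows, with straight rays (i.e., before the Fermat-path refraction correction is iterated in) and an illustrative 4 x 4 slowness grid; the geometry and "hot pixel" value are assumptions for demonstration.

```python
import numpy as np

def ray_matrix(n, rays, steps=2000):
    """Path-length matrix for straight rays over an n x n pixel grid.

    rays: list of ((x0, y0), (x1, y1)) endpoints in grid units.
    A[r, p] approximates the length of ray r inside pixel p.
    """
    A = np.zeros((len(rays), n * n))
    for r, ((x0, y0), (x1, y1)) in enumerate(rays):
        ds = np.hypot(x1 - x0, y1 - y0) / steps
        for x, y in zip(np.linspace(x0, x1, steps), np.linspace(y0, y1, steps)):
            i, j = min(int(y), n - 1), min(int(x), n - 1)
            A[r, i * n + j] += ds
    return A

# 4 x 4 slowness field (s = 1/c; in air c grows roughly as sqrt(T),
# so a hot region appears as a low-slowness anomaly)
n = 4
s_true = np.full(n * n, 1.0 / 343.0)
s_true[5] = 1.0 / 380.0                                   # one "hot" pixel

rays = [((0, i + 0.5), (n, i + 0.5)) for i in range(n)]   # horizontal fan
rays += [((i + 0.5, 0), (i + 0.5, n)) for i in range(n)]  # vertical fan
rays += [((0, 0), (n, n)), ((0, n), (n, 0))]              # two diagonals

A = ray_matrix(n, rays)
times = A @ s_true                                        # simulated travel times
s_hat, *_ = np.linalg.lstsq(A, times, rcond=None)         # least-squares inverse
```

In the paper's full scheme, the recovered field would be used to re-trace refracted ray paths, rebuild A, and repeat until the reconstruction converges.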
Correia, T. M.
2016-01-01
Full-perovskite Pb0.87Ba0.1La0.02(Zr0.6Sn0.33Ti0.07)O3 (PBLZST) thin films were fabricated by a sol–gel method. These revealed both rhombohedral and tetragonal phases, as opposed to the full-tetragonal phase previously reported in ceramics. The fractions of tetragonal and rhombohedral phases are found to be strongly dependent on film thickness. The fraction of tetragonal grains increases with increasing film thickness, as the substrate constraint throughout the film decreases with film thickness. The maximum of the dielectric constant (εm) and the corresponding temperature (Tm) are thickness-dependent and dictated by the fraction of rhombohedral and tetragonal phase, with εm reaching a minimum at 400 nm and Tm shifting to higher temperature with increasing thickness. With the thickness increase, the breakdown field decreases, but field-induced antiferroelectric–ferroelectric (EAFE−FE) and ferroelectric–antiferroelectric (EFE−AFE) switch fields increase. The electrocaloric effect increases with increasing film thickness. This article is part of the themed issue ‘Taking the temperature of phase transitions in cool materials’. PMID:27402937
Sublattice parallel replica dynamics.
Martínez, Enrique; Uberuaga, Blas P; Voter, Arthur F
2014-06-01
Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998)] by combining it with the synchronous sublattice approach of Shim and Amar [Phys. Rev. B 71, 125432 (2005)], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.
NASA Astrophysics Data System (ADS)
Bentouaf, Ali; Hassan, Fouad H.; Reshak, Ali H.; Aïssa, Brahim
2017-01-01
We report on the investigation of the structural and physical properties of the Co2VZ (Z = Al, Ga) Heusler alloys, with L21 structure, through first-principles calculations involving the full potential linearized augmented plane-wave method within density functional theory. These physical properties mainly revolve around the electronic, magnetic and thermodynamic properties. By using the Perdew-Burke-Ernzerhof generalized gradient approximation, the calculated lattice constants and spin magnetic moments were found to be in good agreement with the experimental data. Furthermore, the thermal effects using the quasi-harmonic Debye model have been investigated in depth while taking into account the lattice vibrations, the temperature and the pressure effects on the structural parameters. The heat capacities, the thermal expansion coefficient and the Debye temperatures have also been determined from the non-equilibrium Gibbs functions. An application of the atom in molecule theory is presented and discussed in order to analyze the bonding nature of the Heusler alloys. The focus is on the mixing of the metallic and covalent behavior of Co2VZ (Z = Al, Ga) Heusler alloys.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., based on the best scientific evidence available, that the total taking by the specified activity during... forth permissible methods of taking and other means of effecting the least practicable adverse impact on...
Code of Federal Regulations, 2011 CFR
2011-10-01
..., based on the best scientific evidence available, that the total taking by the specified activity during... forth permissible methods of taking and other means of effecting the least practicable adverse impact on...
JPRS Report, Soviet Union Military Affairs.
1988-02-29
A similar article on life in a disciplinary battalion was published in SOVETSKIY VOIN in Russian No 23, Dec 87 pp 14-16. The full text of that article... Computer" first two paragraphs are KRASNAYA ZVEZDA introduction. [Text] On 15 November, Krasnaya Zvezda published S. Belkin's article "Taking a... in Russian 15 Dec 87 p 1 [Article: "More Concern About People"] [Text] Winter is now in full force. Like a stern and impartial examiner, it checks
Friedrich, Mariola; Goluch-Koniuszy, Zuzanna; Dolot, Anna; Pilarczyk, Bogumiła
2011-01-01
The experiment examined the influence of diet composition, and of diet supplementation with selected B-group vitamins, on the selenium concentration in blood serum and tissues and on glutathione peroxidase (GSH-Px) activity in the blood and liver of male rats. The animals, aged 5 months, were divided into three groups and fed ad libitum with granulated mixes: Group I received a basic mix containing, among other things, whole grains; Group II a modified mix in which the whole grains were replaced with wheat flour and, in part, with saccharose; and Group III the modified mix supplemented in excess with vitamins B1, B2, B6 and PP. The experiment lasted six weeks, during which feed consumption was recorded continuously and the animals' body mass was checked once a week. At the end of the experiment, GSH-Px activity was determined spectrophotometrically in blood and liver, and selenium concentration was determined fluorometrically in blood serum, muscles and liver. It was found that the change in diet composition and its supplementation with the selected B-group vitamins lowered the amount of selenium in the examined tissues; the decrease resulted not only from a lower intake of the element, but also from its increased utilization, driven by the changes taking place under the influence of the diet components and the supplementation.
Nonlinear analysis for dual-frequency concurrent energy harvesting
NASA Astrophysics Data System (ADS)
Yan, Zhimiao; Lei, Hong; Tan, Ting; Sun, Weipeng; Huang, Wenhu
2018-05-01
The dual-frequency responses of a hybrid energy harvester undergoing base excitation and galloping were analyzed numerically. In this work, an approximate dual-frequency analytical method is proposed for the nonlinear analysis of such a system. To obtain approximate analytical solutions of the fully coupled distributed-parameter model, the forcing interactions are first neglected. Then, the electromechanically decoupled governing equation is developed using the equivalent structure method. The hybrid mechanical response is finally separated into self-excited and forced responses to derive the analytical solutions, which are confirmed by numerical simulations of the fully coupled model. The forced response has a great impact on the self-excited response. The boundary of the Hopf bifurcation is analytically determined by the onset wind speed for galloping, which increases linearly with the electrical damping. A quenching phenomenon appears when increasing base excitation suppresses the galloping. The theoretical quenching boundary depends on the forced mode velocity. The quenching region increases with base acceleration and electrical damping, but decreases with wind speed. Unlike the base-excitation-alone case, the presence of the aerodynamic force protects the hybrid energy harvester at resonance from damage caused by excessively large displacement. In terms of harvested power, the hybrid system surpasses both the base-excitation-alone system and the galloping-alone system. This study advances our knowledge of the intrinsic nonlinear dynamics of dual-frequency energy harvesting systems by taking advantage of the analytical solutions.
[Glass ceiling for women in academic medicine in France].
Rosso, C; Leger, A; Steichen, O
2018-06-03
To determine whether career development in academic medicine is more difficult for women than for men and, if so, the nature and level of the barriers to this progression. Extraction of full-time medical staff in a Parisian hospital group through the SIGAPS platform; an online questionnaire survey of career choices and barriers experienced by full-time male and female physicians. The study population comprises 181 hospital practitioners and 141 academic physicians (49 associate professors and 92 full professors). Women represent 49% of the medical staff but only 15% of full professors. This underrepresentation of women is more pronounced among intensivists/anesthesiologists than among technique-based specialists (such as radiologists, biologists…). There is no difference in scientific output, marital status or parenthood between women and men. On the other hand, there is a difference in attitudes, highlighted by the EVAR risk-taking scale, as well as in the burden of familial involvement and in the prejudices experienced by women during the academic selection process. The glass ceiling exists in one of the largest French hospital groups. Career development principles promote merit, but should reduce the benefit of "masculine" attitudes in the competition for academic positions. Academic selection criteria should evolve to limit the disadvantage faced by women related to deeper familial involvement and less competitive strategies and risk-taking attitudes. Copyright © 2018 Société Nationale Française de Médecine Interne (SNFMI). Published by Elsevier SAS. All rights reserved.
Gang, Wei-juan; Wang, Xin; Wang, Fang; Dong, Guo-feng; Wu, Xiao-dong
2015-08-01
The national standard "Regulations of Acupuncture-needle Manipulating Techniques" is one of the national criteria of acupuncturology, for which a total of 22 items have already been established. In the process of formulation, a series of common and specific problems were encountered. In the present paper, the authors discuss these problems from 3 aspects, namely the principles of formulation, the methods for formulating criteria, and considerations about some remaining problems. The formulation principles include the selection and regulation of principles for technique classification and of technique-related key factors. The main methods for formulating criteria are 1) taking the literature as the theoretical foundation, 2) taking clinical practice as the supporting evidence, and 3) adopting suggestions and conclusions reached through peer review.
Simplex volume analysis for finding endmembers in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Hsiao-Chi; Song, Meiping; Chang, Chein-I.
2015-05-01
Using maximal simplex volume as an optimality criterion for finding endmembers is a common approach and has been widely studied in the literature. Interestingly, very little work has been reported on how the simplex volume is actually calculated. It turns out that calculating simplex volume is much more complicated and involved than one might think. This paper investigates the issue from two different aspects: geometric structure and eigen-analysis. The geometric approach derives the volume from the simplex structure itself, multiplying the base by the height. The eigen-analysis approach, on the other hand, takes advantage of the Cayley-Menger determinant to calculate the simplex volume. The major issue with the latter arises when the matrix whose determinant is required is rank deficient. To deal with this problem, two methods are generally considered. One is to perform dimensionality reduction on the data to make the matrix full rank; its drawback is that the original volume is shrunk, so the volume found for the dimensionality-reduced simplex is not the true original simplex volume. The other is to use singular value decomposition (SVD) to find the singular values for calculating the simplex volume; the dilemma of this method is its instability in numerical calculations. This paper explores all three of these methods of simplex volume calculation. Experimental results show that the geometric structure-based method yields the most reliable simplex volume.
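To make the two calculation routes concrete, here is a small illustrative Python sketch (a toy under stated assumptions, not the paper's implementation). It computes the volume of an n-simplex both geometrically, via the Gram matrix of edge vectors at one vertex (the base-times-height route), and via the Cayley-Menger determinant, which needs only pairwise squared distances:

```python
import math

def det(m):
    # Determinant by Gaussian elimination with partial pivoting.
    a = [row[:] for row in m]
    n = len(a)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def cayley_menger_volume(pts):
    # Volume of the n-simplex on n+1 vertices, from squared distances only:
    # V^2 = (-1)^(n+1) / (2^n (n!)^2) * det(CM), CM the bordered distance matrix.
    n = len(pts) - 1
    m = len(pts) + 1
    cm = [[0.0] * m for _ in range(m)]
    for i in range(1, m):
        cm[0][i] = cm[i][0] = 1.0
    for i in range(len(pts)):
        for j in range(len(pts)):
            cm[i + 1][j + 1] = sum((a - b) ** 2 for a, b in zip(pts[i], pts[j]))
    coeff = ((-1) ** (n + 1)) / (2 ** n * math.factorial(n) ** 2)
    return math.sqrt(max(coeff * det(cm), 0.0))

def gram_volume(pts):
    # Geometric route: V = sqrt(det(G)) / n!, with G the Gram matrix of
    # the edge vectors emanating from the first vertex.
    n = len(pts) - 1
    edges = [[p - q for p, q in zip(pts[i + 1], pts[0])] for i in range(n)]
    g = [[sum(a * b for a, b in zip(edges[i], edges[j])) for j in range(n)]
         for i in range(n)]
    return math.sqrt(max(det(g), 0.0)) / math.factorial(n)
```

For a full-rank simplex the two routes agree (e.g. the unit tetrahedron has volume 1/6 by both); rank deficiency of the distance matrix is exactly where the Cayley-Menger route runs into the problems described above.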
NASA Astrophysics Data System (ADS)
Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed
2016-10-01
The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for the constraint potentials, density and temperature, which allows mixing to be taken into account alongside chemical reaction without operator splitting. RCCE is a dimension-reduction method for chemical kinetics based on the laws of thermodynamics. It describes the time evolution of reacting systems through a series of constrained-equilibrium states determined by the RCCE constraints. The full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. RCCE is applied to a spatially homogeneous, constant-pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprising 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed the detailed kinetics model (DKM). RCCE shows accurate prediction of combustion in the PaSR at different mixing intensities. The method also demonstrates reduced numerical stiffness and lower overall computational cost compared to DKM.
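The constrained-equilibrium step at the heart of RCCE, maximizing entropy subject to the instantaneous constraint values, can be illustrated on a toy problem. The sketch below is a hypothetical single-constraint example (not the paper's 12-constraint methane formulation): the maximum-entropy composition takes the Gibbs form x_i ∝ exp(-λ c_i), and the constraint potential λ is found by bisection so that the constraint value Σ c_i x_i matches its target:

```python
import math

def constrained_equilibrium(c, target, lo=-50.0, hi=50.0, iters=200):
    """Max-entropy composition subject to sum(x) = 1 and sum(c[i]*x[i]) = target.

    The solution has the Gibbs form x_i = exp(-lam * c_i) / Z; the constraint
    mean is monotonically decreasing in lam, so bisection on lam suffices.
    """
    lam = 0.0
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        w = [math.exp(-lam * ci) for ci in c]
        z = sum(w)
        mean = sum(ci * wi for ci, wi in zip(c, w)) / z
        if mean > target:
            lo = lam  # constraint value too high: increase the potential
        else:
            hi = lam
    w = [math.exp(-lam * ci) for ci in c]
    z = sum(w)
    return [wi / z for wi in w]
```

For c = [0, 1, 2] and target 1.0 the multiplier is zero and the composition is uniform; any other feasible target tilts the composition exponentially, which is the shape the full RCCE state inherits species by species.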
Scan statistics with local vote for target detection in distributed system
NASA Astrophysics Data System (ADS)
Luo, Junhai; Wu, Qi
2017-12-01
Target detection occupies a pivotal position in distributed systems. Scan statistics, one of the most efficient detection methods, has been applied to a variety of anomaly detection problems and significantly improves the probability of detection. However, scan statistics cannot achieve the expected performance when the noise intensity is strong or the signal emitted by the target is weak. The local vote algorithm can also achieve a high target detection rate. After the local vote, the counting rule is usually adopted for decision fusion; however, the counting rule ignores the spatial contiguity of sensors and simply pools all sensors' decisions, which makes the result undesirable. In this paper, we propose a scan statistics with local vote (SSLV) method, which combines scan statistics with local vote decisions. Before the scan statistics step, each sensor makes a local vote decision based on its own data and that of its neighbors. By combining the advantages of both, our method obtains a higher detection rate in low signal-to-noise-ratio environments than scan statistics alone. After the local vote decision, the distribution of sensors that have detected the target becomes more concentrated. To make full use of the local vote decision, we introduce a variable step parameter for the SSLV, which significantly shortens the scan period, especially when the target is absent. Analysis and simulations are presented to demonstrate the performance of our method.
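A minimal sketch of this fusion scheme, under assumed simplifications (binary sensor decisions on a 1-D line, majority local vote, a fixed-length scan window; the variable-step-parameter scan is omitted):

```python
def local_vote(decisions, radius=1):
    # Each sensor re-decides by strict majority among itself and its
    # neighbors within `radius` positions on the line.
    n = len(decisions)
    out = []
    for i in range(n):
        nbr = decisions[max(0, i - radius): i + radius + 1]
        out.append(1 if 2 * sum(nbr) > len(nbr) else 0)
    return out

def scan_statistic(decisions, window):
    # Maximum count of positive decisions over any contiguous window
    # (assumes window <= number of sensors).
    n = len(decisions)
    return max(sum(decisions[i:i + window]) for i in range(n - window + 1))

def detect(decisions, radius, window, threshold):
    # SSLV-style fusion: local vote first, then scan statistic vs. threshold.
    return scan_statistic(local_vote(decisions, radius), window) >= threshold
```

Isolated false alarms are removed by the local vote, so the scan statistic of a pure-noise field drops toward zero, while a contiguous cluster of detections survives the vote and trips the window threshold.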
NASA Astrophysics Data System (ADS)
Addink, Elisabeth A.; Van Coillie, Frieke M. B.; De Jong, Steven M.
2012-04-01
Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received considerable attention over the past 15 years for analyzing and interpreting remote sensing imagery. In contrast to traditional image analysis, GEOBIA works more like the human eye-brain combination does. The latter uses an object's color (spectral information), size, texture, shape and relation to other image objects to interpret and analyze what we see. GEOBIA starts by segmenting the image, grouping pixels together into objects, and then uses a wide range of object properties to classify the objects or to extract object properties from the image. Significant advances and improvements in image analysis and interpretation have been made thanks to GEOBIA. In June 2010 the third conference on GEOBIA took place at Ghent University, after successful previous meetings in Calgary (2008) and Salzburg (2006). This special issue presents a selection of the 2010 conference papers that were worked out as full research papers for JAG. The papers cover GEOBIA applications as well as innovative methods and techniques, with topics ranging from vegetation mapping, forest parameter estimation, tree crown identification, urban mapping and land cover change to feature selection methods and the effects of image compression on segmentation. From the original 94 conference papers, 26 full research manuscripts were submitted; nine were selected on the basis of quality and topic and are presented in this special issue. The next GEOBIA conference will take place in Rio de Janeiro from 7 to 9 May 2012, where we hope to welcome even more scientists working in the field of GEOBIA.