Service and Methods Demonstration - Annual Report
DOT National Transportation Integrated Search
1975-11-01
This report contains a description of the Service and Methods Demonstration Program. Transit demonstration projects undertaken in previous years are reviewed. Recently completed and current demonstration projects are described and project results fro...
Aircraft Dynamic Modeling in Turbulence
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Cunningham, Kevin
2012-01-01
A method for accurately identifying aircraft dynamic models in turbulence was developed and demonstrated. The method uses orthogonal optimized multisine excitation inputs and an analytic method for enhancing signal-to-noise ratio for dynamic modeling in turbulence. A turbulence metric was developed to accurately characterize the turbulence level using flight measurements. The modeling technique was demonstrated in simulation, then applied to a subscale twin-engine jet transport aircraft in flight. Comparisons of modeling results obtained in turbulent air to results obtained in smooth air were used to demonstrate the effectiveness of the approach.
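The orthogonal multisine excitation inputs mentioned in this abstract can be sketched as below. This is an illustrative stand-in only: disjoint harmonic sets of a common base period give mutually orthogonal inputs, and the Schroeder phase formula is used here as a simple peak-factor-reducing choice in place of the paper's numerically optimized phases. All parameter values are hypothetical.

```python
import numpy as np

def multisine(harmonics, T, fs, phases=None):
    """Sum-of-sines excitation on integer harmonics of a base period T.

    Schroeder phases are used as a simple peak-factor-reducing choice;
    the paper optimizes phases numerically, which is not done here.
    """
    t = np.arange(0.0, T, 1.0 / fs)
    K = len(harmonics)
    if phases is None:
        # Schroeder phase formula for a roughly flat-crest signal
        phases = [-np.pi * k * (k - 1) / K for k in range(1, K + 1)]
    u = np.zeros_like(t)
    for k, phi in zip(harmonics, phases):
        u += np.sqrt(2.0 / K) * np.cos(2.0 * np.pi * k / T * t + phi)
    return t, u

# Two inputs on disjoint harmonic sets of the same base period are
# mutually orthogonal over one full period, so they can be applied to
# different control surfaces simultaneously.
t, u_elev = multisine([1, 3, 5, 7], T=10.0, fs=50.0)
_, u_ail = multisine([2, 4, 6, 8], T=10.0, fs=50.0)
print(abs(np.dot(u_elev, u_ail)) / len(t))  # inner product is near zero
```

The orthogonality lets several inputs be excited at once without their spectra overlapping, which is what makes the multi-axis modeling in the abstract practical.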
The design of a joined wing flight demonstrator aircraft
NASA Technical Reports Server (NTRS)
Smith, S. C.; Cliff, S. E.; Kroo, I. M.
1987-01-01
A joined-wing flight demonstrator aircraft has been developed at the NASA Ames Research Center in collaboration with ACA Industries. The aircraft is designed to utilize the fuselage, engines, and undercarriage of the existing NASA AD-1 flight demonstrator aircraft. The design objectives, methods, constraints, and the resulting aircraft design, called the JW-1, are presented. A wind-tunnel model of the JW-1 was tested in the NASA Ames 12-foot wind tunnel. The test results indicate that the JW-1 has satisfactory flying qualities for a flight demonstrator aircraft. Good agreement of test results with design predictions confirmed the validity of the design methods used for application to joined-wing configurations.
A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2015-01-01
A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters needing adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.
Atlanta Integrated Fare Collection Demonstration
DOT National Transportation Integrated Search
1982-09-01
This report describes the evaluation results of the Atlanta Integrated Fare Collection Demonstration. One of the main purposes of the demonstration, which was funded through the UMTA Service and Methods Demonstration Program, was to assess the extent...
Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis; Hansen, Per Christian
2016-04-01
Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013), doi:10.1364/OE.21.012185], and preliminary results demonstrated improved reconstruction compared with a given two-stage method. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust toward noise; our simulations point to a possible reduction in counting times by an order of magnitude.
Lin, Guoping; Candela, Y; Tillement, O; Cai, Zhiping; Lefèvre-Seguin, V; Hare, J
2012-12-15
A method based on thermal bistability for ultralow-threshold microlaser optimization is demonstrated. When sweeping the pump laser frequency across a pump resonance, the dynamic thermal bistability slows down the power variation. The resulting line shape modification enables a real-time monitoring of the laser characteristic. We demonstrate this method for a functionalized microsphere exhibiting a submicrowatt laser threshold. This approach is confirmed by comparing the results with a step-by-step recording in quasi-static thermal conditions.
The effect of sampling techniques used in the multiconfigurational Ehrenfest method
NASA Astrophysics Data System (ADS)
Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.
2018-05-01
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
ERIC Educational Resources Information Center
Gilbert, George L., Ed.
1985-01-01
Lists of materials needed, procedures used, and results obtained are provided for two demonstrations. The first is an inexpensive and quick method for demonstrating column chromatography of plant pigments of spinach extract. The second is a demonstration of cathodic protection by impressed current. (JN)
Correlation of two bioadhesion assays: the everted sac technique and the CAHN microbalance.
Santos, C A; Jacob, J S; Hertzog, B A; Freedman, B D; Press, D L; Harnpicharnchai, P; Mathiowitz, E
1999-08-27
This contribution correlates two in vitro methods utilized to determine bioadhesion. One method, the everted intestinal sac technique, is a passive test for bioadhesion involving several polymer microspheres and a section of everted intestinal tissue. The other method, the CAHN microbalance, employs a CAHN dynamic contact angle analyzer with modified software to record the tensile forces measured as a single polymer microsphere is pulled from intestinal tissue. This study demonstrates that CAHN and everted sac experiments yield similar results when used to quantify the bioadhesive nature of polymer microsphere systems. A polymer showing high adhesion in one method also demonstrates high bioadhesion in the other method; polymers that exhibit high fracture strength and tensile work measurements with the CAHN microbalance also yield high binding percentages with the everted sac method. The polymers tested and reported here are poly(caprolactone) and different copolymer ratios of poly(fumaric-co-sebacic anhydride). The results of this correlation demonstrate that each method alone is a valuable indicator of bioadhesion.
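The correlation between the two assays described in this abstract can be illustrated with a plain Pearson coefficient. The paired readouts below are hypothetical numbers invented for the sketch, not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient of two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Hypothetical paired readouts for several polymers:
# CAHN tensile work (arbitrary units) vs. everted-sac binding percentage
tensile_work = [0.8, 1.9, 2.4, 3.1, 4.0]
binding_pct = [12.0, 31.0, 38.0, 52.0, 61.0]
print(pearson_r(tensile_work, binding_pct))  # close to 1 when the assays agree
```

A coefficient near 1 is what the abstract's claim amounts to quantitatively: a polymer ranking highly in one assay ranks highly in the other.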
Recruitment of black women with type 2 diabetes into a self-management intervention trial.
Newlin, Kelley; Melkus, Gail D'Eramo; Jefferson, Vanessa; Langerman, Susan; Womack, Julie; Chyun, Deborah
2006-01-01
The purpose of this study was to evaluate the relationship of recruitment methods to enrollment status in Black women with type 2 diabetes screened for entry into a randomized clinical trial (RCT). Using a cross-sectional study design with convenience sampling procedures, data were collected on recruitment methods to which the women responded (N=236). Results demonstrated that the RCT had a moderate overall recruitment rate of 46% and achieved only 84% of its projected accrual goal (N=109). Chi-square analysis demonstrated that enrollment outcomes varied significantly according to recruitment methods (P=.05). Recruitment methods such as community health fairs (77.8%), private practice referrals (75.0%), participant referrals (61.5%), community clinic referrals (44.6%), community advertising and marketing (40.9%), and chart review (40.4%) demonstrated variable enrollment yields. Results confirm previous findings that indicate that Black Americans may be successfully recruited into research studies at moderate rates when traditional recruitment methods are enhanced and integrated with more culturally sensitive methods. Lessons learned are considered.
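The chi-square analysis of enrollment by recruitment method reported above can be sketched as follows. The contingency counts are purely illustrative (not the study's data), and the statistic is computed by hand rather than with a statistics library.

```python
import numpy as np

def chi2_stat(table):
    """Pearson chi-square statistic for an R x C contingency table."""
    obs = np.asarray(table, dtype=float)
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row * col / obs.sum()          # counts expected under independence
    return ((obs - expected) ** 2 / expected).sum()

# Hypothetical enrolled / not-enrolled counts by recruitment method
table = [
    [14, 4],    # community health fair
    [9, 3],     # private practice referral
    [16, 10],   # participant referral
    [21, 31],   # chart review
]
stat = chi2_stat(table)
dof = (len(table) - 1) * (len(table[0]) - 1)
# Compare against the chi-square critical value at alpha = 0.05, dof = 3 (7.815)
print(stat, dof, stat > 7.815)
```

Exceeding the critical value is the sense in which enrollment outcomes "varied significantly according to recruitment methods" in the abstract.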
Summaries of early materials processing in space experiments
NASA Technical Reports Server (NTRS)
Naumann, R. J.; Mason, D.
1979-01-01
Objectives, methods, and results of low-gravity materials processing experiments are summarized, and a bibliography of published results for each experiment is provided. Included are drop tower experiments, the Apollo demonstration experiments, the Skylab experiments and demonstration experiments, and the Apollo-Soyuz experiments and demonstrations. The findings of these experiments in the fields of crystal growth, metallurgy, and fluid behavior are summarized.
Laboratory Demonstration of Low-Cost Method for Producing Thin Film on Nonconductors.
ERIC Educational Resources Information Center
Ebong, A. U.; And Others
1991-01-01
A low-cost procedure for metallizing a silicon p-n junction diode by electroless nickel plating is reported. The procedure demonstrates that expensive salts can be excluded without affecting the results. The experimental procedure, measurement, results, and discussion are included. (Author/KR)
3D geometric phase analysis and its application in 3D microscopic morphology measurement
NASA Astrophysics Data System (ADS)
Zhu, Ronghua; Shi, Wenxiong; Cao, Quankun; Liu, Zhanwei; Guo, Baoqiao; Xie, Huimin
2018-04-01
Although three-dimensional (3D) morphology measurement has been widely applied on the macro-scale, there is still a lack of 3D measurement technology on the microscopic scale. In this paper, a microscopic 3D measurement technique based on the 3D-geometric phase analysis (GPA) method is proposed. In this method, with machine vision and phase matching, the traditional GPA method is extended to three dimensions. Using this method, 3D deformation measurement on the micro-scale can be realized using a light microscope. Simulation experiments were conducted in this study, and the results demonstrate that the proposed method has a good anti-noise ability. In addition, the 3D morphology of the necking zone in a tensile specimen was measured, and the results demonstrate that this method is feasible.
Results of a nuclear power plant application of A New Technique for Human Error Analysis (ATHEANA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitehead, D.W.; Forester, J.A.; Bley, D.C.
1998-03-01
A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method referred to as A Technique for Human Error Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing contexts (EFCs) that can make unsafe actions (UAs) more likely, and to identify ways to improve the method and its documentation. A set of criteria to evaluate the success of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant. Personnel from the plant included two individuals from their PRA staff and two individuals from their training staff. Both individuals from training are currently licensed operators, and one of them was a senior reactor operator on shift until a few months before the demonstration. The demonstration was conducted over a 5-month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs, are discussed. Also addressed is how simulator exercises are used in the ATHEANA demonstration project.
Nagell, K; Olguin, R S; Tomasello, M
1993-06-01
Common chimpanzees (Pan troglodytes) and 2-year-old human children (Homo sapiens) were presented with a rakelike tool and a desirable but out-of-reach object. One group of subjects observed a human demonstrator use the tool in one way, and another group observed a demonstrator use the tool in another way. Children in both cases did what the model did. Chimpanzee subjects, however, behaved identically in the 2 model conditions. Both groups performed better than subjects who saw no demonstration. This pattern of results suggests that the chimpanzees were paying attention to the general functional relations in the task and to the results obtained by the demonstrator, but not to the actual methods of tool use demonstrated. Human children were focused on the demonstrator's actual methods of tool use (her behavior). The different social learning processes used by the 2 species have implications for their different forms of social organization.
Bootstrap Enhanced Penalized Regression for Variable Selection with Neuroimaging Data.
Abram, Samantha V; Helwig, Nathaniel E; Moodie, Craig A; DeYoung, Colin G; MacDonald, Angus W; Waller, Niels G
2016-01-01
Recent advances in fMRI research highlight the use of multivariate methods for examining whole-brain connectivity. Complementary data-driven methods are needed for determining the subset of predictors related to individual differences. Although commonly used for this purpose, ordinary least squares (OLS) regression may not be ideal due to multi-collinearity and over-fitting issues. Penalized regression is a promising and underutilized alternative to OLS regression. In this paper, we propose a nonparametric bootstrap quantile (QNT) approach for variable selection with neuroimaging data. We use real and simulated data, as well as annotated R code, to demonstrate the benefits of our proposed method. Our results illustrate the practical potential of our proposed bootstrap QNT approach. Our real data example demonstrates how our method can be used to relate individual differences in neural network connectivity with an externalizing personality measure. Also, our simulation results reveal that the QNT method is effective under a variety of data conditions. Penalized regression yields more stable estimates and sparser models than OLS regression in situations with large numbers of highly correlated neural predictors. Our results demonstrate that penalized regression is a promising method for examining associations between neural predictors and clinically relevant traits or behaviors. These findings have important implications for the growing field of functional connectivity research, where multivariate methods produce numerous, highly correlated brain networks.
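The bootstrap quantile selection idea in this abstract can be sketched with a plain coordinate-descent lasso, a simplified stand-in for the paper's penalized-regression QNT procedure (the paper uses annotated R code; this sketch is Python, and all data are simulated).

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=50):
    """Plain coordinate-descent lasso; lam is the l1 penalty weight."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

def bootstrap_qnt_select(X, y, lam=0.1, B=100, q=(2.5, 97.5), seed=0):
    """Keep predictors whose bootstrap quantile interval for the lasso
    coefficient excludes zero (a simplified bootstrap-quantile rule)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    betas = np.empty((B, X.shape[1]))
    for b in range(B):
        idx = rng.integers(0, n, n)                # resample rows
        betas[b] = lasso_cd(X[idx], y[idx], lam)
    lo, hi = np.percentile(betas, q, axis=0)
    return (lo > 0) | (hi < 0)

# Simulated example: 6 predictors, a collinear pair, only x0 and x1 matter
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))
X[:, 2] = 0.7 * X[:, 0] + 0.7 * rng.normal(size=120)   # induce collinearity
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * rng.normal(size=120)
selected = bootstrap_qnt_select(X, y)
print(selected)
```

The penalty keeps the fit stable despite the collinear predictor, which mirrors the abstract's point that penalized regression yields sparser, more stable models than OLS when neural predictors are highly correlated.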
Super resolution reconstruction of infrared images based on classified dictionary learning
NASA Astrophysics Data System (ADS)
Liu, Fei; Han, Pingli; Wang, Yi; Li, Xuan; Bai, Lu; Shao, Xiaopeng
2018-05-01
Infrared images always suffer from low-resolution problems resulting from limitations of imaging devices. An economical approach to combat this problem involves reconstructing high-resolution images by reasonable methods without updating devices. Inspired by compressed sensing theory, this study presents and demonstrates a Classified Dictionary Learning method to reconstruct high-resolution infrared images. It classifies features of the samples into several reasonable clusters and trains a dictionary pair for each cluster. The optimal pair of dictionaries is chosen for each image reconstruction, and therefore more satisfactory results are achieved without an increase in computational complexity and time cost. Experiments demonstrated that it is a viable method for infrared image reconstruction, since it improves image resolution and recovers detailed information of targets.
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.
An Improved Method for Demonstrating Visual Selection by Wild Birds.
ERIC Educational Resources Information Center
Allen, J. A.; And Others
1990-01-01
An activity simulating natural selection is presented in which wild birds are predators, green and brown pastry "baits" are prey, and trays containing colored stones serve as the backgrounds. Two different methods of measuring selection are used to describe the results. The materials and methods, results, and discussion are included. (KR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. A. Smith; D. L. Cottle; B. H. Rabin
2013-09-01
This report summarizes work conducted to date on the implementation of new laser-based capabilities for characterization of bond strength in nuclear fuel plates, and presents preliminary results obtained from fresh fuel studies on as-fabricated monolithic fuel consisting of uranium-10 wt.% molybdenum alloys clad in 6061 aluminum by hot isostatic pressing. Characterization involves application of two complementary experimental methods, laser-shock testing and laser-ultrasonic imaging, collectively referred to as the Laser Shockwave Technique (LST), that allow the integrity, physical properties, and interfacial bond strength in fuel plates to be evaluated. Example characterization results are provided, including measurement of layer thicknesses, elastic properties of the constituents, and the location and nature of generated debonds (including kissing bonds). LST provides spatially localized, non-contacting measurements with minimal specimen preparation, and is ideally suited for applications involving radioactive materials, including irradiated materials. The theoretical principles and experimental approaches employed in characterizing nuclear fuel plates are described, and preliminary bond strength measurement results are discussed, with emphasis on demonstrating the capabilities and limitations of these methods. These preliminary results demonstrate the ability to distinguish bond strength variations between different fuel plates. Although additional development work is necessary to validate and qualify the test methods, these results suggest LST is viable as a method to meet fuel qualification requirements to demonstrate acceptable bonding integrity.
Application of computational aerodynamics methods to the design and analysis of transport aircraft
NASA Technical Reports Server (NTRS)
Da Costa, A. L.
1978-01-01
The application and validation of several computational aerodynamic methods in the design and analysis of transport aircraft is established. An assessment is made concerning more recently developed methods that solve three-dimensional transonic flow and boundary layers on wings. Capabilities of subsonic aerodynamic methods are demonstrated by several design and analysis efforts. Among the examples cited are the B747 Space Shuttle Carrier Aircraft analysis, nacelle integration for transport aircraft, and winglet optimization. The accuracy and applicability of a new three-dimensional viscous transonic method is demonstrated by comparison of computed results to experimental data.
Yin, Nina; Chen, Tao; Yu, Yuling; Han, Yongming; Yan, Fei; Zheng, Zhou; Chen, Zebin
2016-12-01
Successful islet isolation is crucial for islet transplantation and cell treatment for type 1 diabetes. Current isolation methods are able to obtain 500-1,000 islets per rat, which results in a waste of ≥50% of total islets. In the present study, a facile mechanical shaking method for improving islet yield (up to 1,500 per rat) was developed and summarized, which was demonstrated to be more effective than the existing well-established stationary method. The present results showed that isolated islets have a maximum yield of 1,326±152 when shaking for 15 min for the fully-cannulated pancreas. For both fully-cannulated and half-cannulated pancreas in the presence of rat DNAse inhibitor, the optimal shaking time was amended to 20 min with a further increased yield of 1,344±134 and 1,286±124 islets, respectively. Furthermore, the majority of the isolated islets were morphologically intact with a well-defined surface and almost no central necrotic zone, which suggested that the condition of islets obtained via the mechanical shaking method was consistent with the stationary method. Islet size distribution was also calculated and it was demonstrated that islets from the stationary method exhibited the same size distribution as the non-cannulated group, which had more larger islets than the fully-cannulated and half-cannulated groups isolated via the shaking method. In addition, the results of glucose challenge showed that the refraction index of each group was >2.5, which indicated the well-preserved function of isolated islets. Furthermore, the transplanted islets exhibited a therapeutic effect after 1 day of transplantation; however, they failed to control blood glucose levels after ~7 days of transplantation. 
In conclusion, these results demonstrated that the facile mechanical shaking method may markedly improve the yield of rat islet isolation, and in vitro and in vivo investigation demonstrated the well-preserved function of isolated islets in the control of blood glucose. Therefore, the facile mechanical shaking method may be an alternative improved procedure to obtain higher islet yield for islet preparation and transplantation in the treatment of type 1 diabetes.
Content Evaluation and Development of Videotapes Demonstrating Regional Anesthesia Motor Skills
ERIC Educational Resources Information Center
Warwick, Pamela M.; Ravin, Mark B.
1975-01-01
A study is reported which evaluated the content of three instructional videotapes designed to impart information and to demonstrate regional (spinal, epidural, and caudal) anesthesia motor skills. Pretest-posttest results demonstrated that the tapes successfully met predetermined criteria. Advantages of the method for medical student instruction…
NASA Astrophysics Data System (ADS)
Bu, Haifeng; Wang, Dansheng; Zhou, Pin; Zhu, Hongping
2018-04-01
An improved wavelet-Galerkin (IWG) method based on the Daubechies wavelet is proposed for reconstructing the dynamic responses of shear structures. The proposed method flexibly manages wavelet resolution level according to excitation, thereby avoiding the weakness of the wavelet-Galerkin multiresolution analysis (WGMA) method in terms of resolution and the requirement of external excitation. IWG is implemented by this work in certain case studies, involving single- and n-degree-of-freedom frame structures subjected to a determined discrete excitation. Results demonstrate that IWG performs better than WGMA in terms of accuracy and computation efficiency. Furthermore, a new method for parameter identification based on IWG and an optimization algorithm are also developed for shear frame structures, and a simultaneous identification of structural parameters and excitation is implemented. Numerical results demonstrate that the proposed identification method is effective for shear frame structures.
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.
Results of a Formal Methods Demonstration Project
NASA Technical Reports Server (NTRS)
Kelly, J.; Covington, R.; Hamilton, D.
1994-01-01
This paper describes the results of a cooperative study conducted by a team of researchers in formal methods at three NASA Centers to demonstrate FM techniques and to tailor them to critical NASA software systems. This pilot project applied FM to an existing critical software subsystem, the Shuttle's Jet Select subsystem (Phase I of an ongoing study). The present study shows that FM can be used successfully to uncover hidden issues in a highly critical and mature Functional Subsystem Software Requirements (FSSR) specification which are very difficult to discover by traditional means.
Engelberg, Jesse A.; Giberson, Richard T.; Young, Lawrence J.T.; Hubbard, Neil E.
2014-01-01
Microwave methods of fixation can dramatically shorten fixation times while preserving tissue structure; however, it remains unclear if adequate tissue antigenicity is preserved. To assess and validate antigenicity, robust quantitative methods and animal disease models are needed. We used two mouse mammary models of human breast cancer to evaluate microwave-assisted and standard 24-hr formalin fixation. The mouse models expressed four antigens prognostic for breast cancer outcome: estrogen receptor, progesterone receptor, Ki67, and human epidermal growth factor receptor 2. Using pathologist evaluation and novel methods of quantitative image analysis, we measured and compared the quality of antigen preservation, percentage of positive cells, and line plots of cell intensity. Visual evaluations by pathologists established that the amounts and patterns of staining were similar in tissues fixed by the different methods. The results of the quantitative image analysis provided a fine-grained evaluation, demonstrating that tissue antigenicity is preserved in tissues fixed using microwave methods. Evaluation of the results demonstrated that a 1-hr, 150-W fixation is better than a 45-min, 150-W fixation followed by a 15-min, 650-W fixation. The results demonstrated that microwave-assisted formalin fixation can standardize fixation times to 1 hr and produce immunohistochemistry that is in every way commensurate with longer conventional fixation methods. PMID:24682322
Effects of Barometric Fluctuations on Well Water-Level Measurements and Aquifer Test Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spane, Frank A.
1999-12-16
This report examines the effects of barometric fluctuations on well water-level measurements and evaluates adjustment and removal methods for determining areal aquifer head conditions and aquifer test analysis. Two examples of Hanford Site unconfined aquifer tests are examined that demonstrate barometric response analysis and illustrate the predictive/removal capabilities of various methods for well water-level and aquifer total head values. Good predictive/removal characteristics were demonstrated, with the best corrective results provided by multiple-regression deconvolution methods.
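One common form of multiple-regression deconvolution, of the kind the abstract credits with the best corrective results, can be sketched as below. This is a generic sketch, not the report's specific procedure: water-level changes are regressed on lagged barometric changes, and the fitted lag response is convolved with the barometric record and subtracted. Function names and the lag count are illustrative.

```python
import numpy as np

def barometric_response(dW, dB, n_lags):
    """Least-squares impulse response of water-level change to barometric
    change. dW, dB are first differences of water level and barometric
    pressure (same units of head); returns lag coefficients u[0..n_lags]."""
    rows = []
    for i in range(n_lags, len(dB)):
        rows.append(dB[i - n_lags:i + 1][::-1])   # [dB[i], dB[i-1], ...]
    A = np.array(rows)
    u, *_ = np.linalg.lstsq(A, dW[n_lags:], rcond=None)
    return u

def remove_barometric(W, B, n_lags=12):
    """Subtract the predicted barometric response from the water-level record."""
    dW, dB = np.diff(W), np.diff(B)
    u = barometric_response(dW, dB, n_lags)
    pred = np.convolve(dB, u)[:len(dW)]           # predicted head changes
    corrected = W.copy()
    corrected[1:] -= np.cumsum(pred)
    return corrected

# Synthetic check: a record built from a known lag response is fully corrected
rng = np.random.default_rng(0)
dB = rng.normal(size=200)
B = np.concatenate([[0.0], np.cumsum(dB)])
effect = np.convolve(dB, [0.4, 0.2, 0.1])[:200]
W = 5.0 + np.concatenate([[0.0], np.cumsum(effect)])
corrected = remove_barometric(W, B, n_lags=5)
print(np.max(np.abs(corrected - 5.0)))            # residual is near zero
```

On the synthetic record, the corrected series recovers the constant true head, which is the "removal" capability the report evaluates on real well data.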
NASA Technical Reports Server (NTRS)
Aniversario, R. B.; Harvey, S. T.; Mccarty, J. E.; Parsons, J. T.; Peterson, D. C.; Pritchett, L. D.; Wilson, D. R.; Wogulis, E. R.
1983-01-01
The horizontal stabilizer of the 737 transport was redesigned. Five shipsets were fabricated using composite materials. Weight reduction greater than the 20% goal was achieved. Parts and assemblies were readily produced on production-type tooling. Quality assurance methods were demonstrated. Repair methods were developed and demonstrated. Strength and stiffness analytical methods were substantiated by comparison with test results. Cost data was accumulated in a semiproduction environment. FAA certification was obtained.
NASA Technical Reports Server (NTRS)
Sandford, M. C.; Abel, I.; Gray, D. L.
1975-01-01
The application of active control technology to suppress flutter was demonstrated successfully in the transonic dynamics tunnel with a delta-wing model. The model was a simplified version of a proposed supersonic transport wing design. An active flutter suppression method based on an aerodynamic energy criterion was verified by using three different control laws. The first two control laws utilized both leading-edge and trailing-edge active control surfaces, whereas the third control law required only a single trailing-edge active control surface. At a Mach number of 0.9 the experimental results demonstrated increases in the flutter dynamic pressure from 12.5 percent to 30 percent with active controls. Analytical methods were developed to predict both open-loop and closed-loop stability, and the results agreed reasonably well with the experimental results.
NASA Astrophysics Data System (ADS)
Greene, Patrick T.; Eldredge, Jeff D.; Zhong, Xiaolin; Kim, John
2016-07-01
In this paper, we present a method for performing uniformly high-order direct numerical simulations of high-speed flows over arbitrary geometries. The method was developed with the goal of simulating and studying the effects of complex isolated roughness elements on the stability of hypersonic boundary layers. The simulations are carried out on Cartesian grids with the geometries imposed by a third-order cut-stencil method. A fifth-order hybrid weighted essentially non-oscillatory scheme was implemented to capture any steep gradients in the flow created by the geometries and a third-order Runge-Kutta method is used for time advancement. A multi-zone refinement method was also utilized to provide extra resolution at locations with expected complex physics. The combination results in a globally fourth-order scheme in space and third order in time. Results confirming the method's high order of convergence are shown. Two-dimensional and three-dimensional test cases are presented and show good agreement with previous results. A simulation of Mach 3 flow over the logo of the Ubuntu Linux distribution is shown to demonstrate the method's capabilities for handling complex geometries. Results for Mach 6 wall-bounded flow over a three-dimensional cylindrical roughness element are also presented. The results demonstrate that the method is a promising tool for the study of hypersonic roughness-induced transition.
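The third-order Runge-Kutta time advancement mentioned above is commonly written in the strong-stability-preserving (Shu-Osher) form when paired with WENO spatial schemes. A minimal sketch of one such step, not the authors' actual code, applied to a simple test ODE:

```python
import math

def ssp_rk3(f, u, t, dt):
    """One third-order strong-stability-preserving Runge-Kutta step
    (Shu-Osher form), as commonly paired with WENO spatial schemes."""
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(t + 0.5 * dt, u2))

# integrate du/dt = -u from u(0) = 1 to t = 1; the exact answer is exp(-1)
u, t, dt = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    u = ssp_rk3(lambda t, u: -u, u, t, dt)
    t += dt
print(abs(u - math.exp(-1.0)))
```

In a flow solver, `f` would be the spatially discretized right-hand side (e.g., the WENO flux divergence) rather than a scalar function.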
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
A structural power flow approach for the analysis of structure-borne transmission of structural vibrations was previously defined and developed; here the technique is used to analyze the influence of structural parameters on the transmitted energy. As a base for comparison, the parametric analysis is first performed using a Statistical Energy Analysis approach and the results compared with those obtained using the power flow approach. The advantages of using structural power flow are thus demonstrated by comparing the type of results obtained by the two methods. Additionally, to demonstrate the advantages of using the power flow method and to show that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental investigation of structural power flow is also presented. Results are presented for an L-shaped beam for which an analytical solution has already been obtained. Furthermore, the various methods available to measure vibrational power flow are compared to investigate the advantages and disadvantages of each method.
Progress in unstructured-grid methods development for unsteady aerodynamic applications
NASA Technical Reports Server (NTRS)
Batina, John T.
1992-01-01
The development of unstructured-grid methods for the solution of the equations of fluid flow and what was learned over the course of the research are summarized. The focus of the discussion is on the solution of the time-dependent Euler equations including spatial discretizations, temporal discretizations, and boundary conditions. An example calculation with an implicit upwind method using a CFL number of infinity is presented for the Boeing 747 aircraft. The results were obtained in less than one hour of CPU time on a Cray-2 computer, thus demonstrating the speed and robustness of the capability. Additional calculations for the ONERA M6 wing demonstrate the accuracy of the method through the good agreement between calculated results and experimental data for a standard transonic flow case.
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.
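As an illustration of the kind of procedure described — not the paper's actual algorithm — cross-validated threshold selection for a simple Haar wavelet basis can be sketched as follows (the test function and noise level are invented):

```python
import numpy as np

def haar_fwd(x):
    """Full Haar decomposition of a length-2^k signal into detail + scaling coeffs."""
    coeffs, a = [], x.astype(float)
    while len(a) > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2))  # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)             # scaling coefficients
    coeffs.append(a)
    return coeffs

def haar_inv(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        up = np.empty(2 * len(a))
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

def denoise(y, thr):
    """Soft-threshold the detail coefficients at level thr, then reconstruct."""
    c = haar_fwd(y)
    c = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in c[:-1]] + [c[-1]]
    return haar_inv(c)

def cv_threshold(y, thresholds):
    """Even/odd cross-validation: denoise the even samples, score on the odds."""
    errs = [np.mean((denoise(y[0::2], thr) - y[1::2]) ** 2) for thr in thresholds]
    return thresholds[int(np.argmin(errs))]

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 256)
clean = np.where(x < 0.5, 1.0, -1.0)              # blocky test function
noisy = clean + 0.3 * rng.standard_normal(256)
t = cv_threshold(noisy, np.linspace(0, 2, 41))
rec = denoise(noisy, t)
```

A full basis-selection procedure would additionally cross-validate over candidate wavelet families, not just the threshold.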
Novel Method to Assess Arterial Insufficiency in Rodent Hindlimb
Ziegler, Matthew A.; DiStasi, Matthew R.; Miller, Steven J.; Dalsing, Michael C.; Unthank, Joseph L.
2015-01-01
Background: Lack of techniques to assess maximal blood flow capacity thwarts the use of rodent models of arterial insufficiency to evaluate therapies for intermittent claudication. We evaluated femoral vein outflow (VO) in combination with stimulated muscle contraction as a potential method to assess functional hindlimb arterial reserve and therapeutic efficacy in a rodent model of subcritical limb ischemia. Materials and methods: VO was measured with perivascular flow probes at rest and during stimulated calf muscle contraction in young healthy rats (Wistar Kyoto, WKY; lean Zucker, LZR) and rats with cardiovascular risk factors (Spontaneously Hypertensive, SHR; Obese Zucker, OZR) with acute and/or chronic femoral arterial occlusion. Therapeutic efficacy was assessed by administration of Ramipril or Losartan to SHR after femoral artery excision. Results: VO measurement in WKY demonstrated the utility of this method to assess hindlimb perfusion at rest and during calf muscle contraction. While application to diseased models (OZR, SHR) demonstrated normal resting perfusion compared to contralateral limbs, a significant reduction in reserve capacity was uncovered with muscle stimulation. Administration of Ramipril and Losartan demonstrated significant improvement in functional arterial reserve. Conclusion: The results demonstrate that this novel method to assess distal limb perfusion in small rodents with subcritical limb ischemia is sufficient to unmask perfusion deficits not apparent at rest, detect impaired compensation in diseased animal models with risk factors, and assess therapeutic efficacy. The approach provides a significant advance in methods to investigate potential mechanisms and novel therapies for subcritical limb ischemia in pre-clinical rodent models. PMID:26850199
A Unified Framework for Brain Segmentation in MR Images
Yazdani, S.; Yusof, R.; Karimian, A.; Riazi, A. H.; Bennamoun, M.
2015-01-01
Brain MRI segmentation is an important issue for discovering brain structure and diagnosing subtle anatomical changes in different brain diseases. However, due to several artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of the brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We propose an automatic hybrid image segmentation method that integrates a modified statistical expectation-maximization (EM) method and spatial information combined with a support vector machine (SVM). The combined method achieves more accurate results than its individual techniques, as demonstrated through experiments on both synthetic and real MRI. The results of the proposed technique are evaluated against manual segmentation results and other methods, based on real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method performs satisfactorily on both simulated MRI and real brain datasets. PMID:26089978
Advanced Guidance and Control Methods for Reusable Launch Vehicles: Test Results
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.; Krupp, Don R.; Fogle, Frank R. (Technical Monitor)
2002-01-01
There are a number of approaches to advanced guidance and control (AG&C) that have the potential for achieving the goals of significantly increasing reusable launch vehicle (RLV) safety/reliability and reducing the cost. In this paper, we examine some of these methods and compare the results. We briefly introduce the various methods under test, list the test cases used to demonstrate that the desired results are achieved, show an automated test scoring method that greatly reduces the evaluation effort required, and display results of the tests. Results are shown for the algorithms that have entered testing so far.
Study Of Nondestructive Techniques For Testing Composites
NASA Technical Reports Server (NTRS)
Roth, D.; Kautz, H.; Draper, S.; Bansal, N.; Bowles, K.; Bashyam, M.; Bishop, C.
1995-01-01
Study evaluates some nondestructive methods for characterizing ceramic-, metal-, and polymer-matrix composite materials. Results demonstrated utility of two ultrasonic methods for obtaining quantitative data on microstructural anomalies in composite materials.
An Adaptive Instability Suppression Controls Method for Aircraft Gas Turbine Engine Combustors
NASA Technical Reports Server (NTRS)
Kopasakis, George; DeLaat, John C.; Chang, Clarence T.
2008-01-01
An adaptive controls method for instability suppression in gas turbine engine combustors has been developed and successfully tested with a realistic aircraft engine combustor rig. This testing was part of a program that demonstrated, for the first time, successful active combustor instability control in an aircraft gas turbine engine-like environment. The controls method is called Adaptive Sliding Phasor Averaged Control. Testing of the control method has been conducted in an experimental rig with different configurations designed to simulate combustors with instabilities of about 530 and 315 Hz. Results demonstrate the effectiveness of this method in suppressing combustor instabilities. In addition, a dramatic improvement in suppression of the instability was achieved by focusing control on the second harmonic of the instability. This is believed to be due to a phenomenon discovered and reported earlier, the so-called Intra-Harmonic Coupling. These results may have implications for future research in combustor instability control.
Lu, Ming-Yu; Li, Zhihong; Hwang, Shiaw-Min; Linju Yen, B; Lee, Gwo-Bin
2015-09-01
This study reports a robust method for gene transfection in a murine primary cell model using a high-density electrodes network (HDEN). By demonstrating high cell viability after gene transfection and successful expression of transgenes, including fluorescent proteins, the HDEN device shows great promise for optimizing reprogramming efficiency with non-viral induction in the generation of murine induced pluripotent stem cells (iPSCs). High and steady transgene expression levels in iPSC host cells can be achieved using this method. Moreover, the HDEN device achieved successful gene transfection at a low voltage of less than 180 V while requiring relatively few cells (less than 1.5 × 10^4). The results are comparable to current conventional methods, demonstrating a reasonable fluorescent-plasmid transfection rate (42.4% in single transfection and 24.5% in triple transfection) and high cell viability of over 95%. The gene expression level of each iPSC factor was measured to be over 10-fold higher than that reported in previous studies using a single mouse embryonic fibroblast cell. Our results demonstrate that the generation of iPSCs by HDEN transfection of plasmid DNA may be a feasible and safe alternative to viral transfection methods in the near future.
Nishiuchi, Yukiko; Tamaru, Aki; Suzuki, Yasuhiko; Kitada, Seigo; Maekura, Ryoji; Tateishi, Yoshitaka; Niki, Mamiko; Ogura, Hisashi; Matsumoto, Sohkichi
2014-06-01
We previously demonstrated the colonization of Mycobacterium avium complex in bathrooms by the conventional culture method. In the present study, we aimed to directly detect M. avium organisms in the environment using loop-mediated isothermal amplification (LAMP), and to demonstrate the efficacy of LAMP by comparing the results with those obtained by culture. Our data showed that LAMP analysis has detection limits of 100 fg DNA/reaction for M. avium. Using an FTA® elute card, DNA templates were extracted from environmental samples from bathrooms in the residences of 29 patients with pulmonary M. avium disease. Of the 162 environmental samples examined, 143 (88%) showed identical results by both methods; 20 (12%) and 123 (76%) samples were positive and negative, respectively, for M. avium. Of the remaining 19 samples (12%), seven (5%) and 12 (7%) samples were positive by the LAMP and culture methods, respectively. All samples that contained over 20 colony forming units/primary isolation plate, as measured by the culture method, were also positive by the LAMP method. Our data demonstrate that the combination of the FTA elute card and LAMP can facilitate prompt detection of M. avium in the environment.
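From the counts reported above (20 positive by both methods, 123 negative by both, 7 LAMP-only, 12 culture-only), overall agreement and Cohen's kappa between the two methods can be recomputed as a quick illustrative check — this calculation is not part of the original study:

```python
# 2x2 agreement table from the reported counts (LAMP vs. culture)
both_pos, both_neg = 20, 123
lamp_only, culture_only = 7, 12
n = both_pos + both_neg + lamp_only + culture_only   # 162 samples in total

po = (both_pos + both_neg) / n                       # observed agreement
lamp_pos = both_pos + lamp_only                      # marginal positives, LAMP
culture_pos = both_pos + culture_only                # marginal positives, culture
pe = (lamp_pos * culture_pos + (n - lamp_pos) * (n - culture_pos)) / n**2
kappa = (po - pe) / (1 - pe)                         # chance-corrected agreement
print(round(po, 3), round(kappa, 2))
```

The observed agreement of about 88% matches the figure quoted in the abstract; kappa additionally corrects for the agreement expected by chance given the marginal positivity rates.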
Wong, M S; Cheng, J C Y; Lo, K H
2005-04-01
The treatment effectiveness of the CAD/CAM method and the manual method in managing adolescent idiopathic scoliosis (AIS) was compared. Forty subjects were recruited, with twenty subjects for each method. The clinical parameters, namely Cobb's angle and apical vertebral rotation, were evaluated at the pre-brace and the immediate in-brace visits. The results demonstrated that orthotic treatments rendered by the CAD/CAM method and the conventional manual method were effective in providing initial control of Cobb's angle. Significant decreases (p < 0.05) were found between the pre-brace and immediate in-brace visits for both methods. The mean reductions of Cobb's angle were 12.8 degrees (41.9%) for the CAD/CAM method and 9.8 degrees (32.1%) for the manual method. An initial control of the apical vertebral rotation was not shown in this study. In the comparison between the CAD/CAM method and the manual method, no significant difference was found in the control of Cobb's angle and apical vertebral rotation. The current study demonstrated that the CAD/CAM method can provide similar results in the initial stage of treatment as compared with the manual method.
Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, Elad
2018-05-01
A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.
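The idea of combining back-projections by a statistic other than the usual sum can be illustrated with a deliberately simplified toy: two axis-aligned projections of a single peak, combined once by summation and once by taking the pointwise minimum. The actual method uses many radially sampled projections and an inverse Radon transform; this sketch only shows why a lowest-value statistic suppresses back-projection streaks.

```python
import numpy as np

# toy 2D "spectrum": a single peak on an 8x8 grid
obj = np.zeros((8, 8))
obj[2, 5] = 1.0

# two axis-aligned projections (0 and 90 degrees)
p_rows = obj.sum(axis=1)                  # project along columns
p_cols = obj.sum(axis=0)                  # project along rows

# back-project: smear each projection across the grid
bp_rows = np.repeat(p_rows[:, None], 8, axis=1)
bp_cols = np.repeat(p_cols[None, :], 8, axis=0)

summed = bp_rows + bp_cols                # conventional back-projection: streaks
lowest = np.minimum(bp_rows, bp_cols)     # lowest-value combination: streaks vanish
```

For this single peak, the minimum of the row streak and the column streak is nonzero only at their intersection, recovering the object exactly, whereas the summed back-projection leaves a full row and column of streak artifacts.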
Thomas, Yohann R J; Benayad, Anass; Schroder, Maxime; Morin, Arnaud; Pauchet, Joël
2015-07-15
The purpose of this article is to report a new method for the surface functionalization of commercially available gas diffusion layers (GDLs) by the electrochemical reduction of diazonium salt containing hydrophobic functional groups. The method results in superhydrophobic GDLs, over a large area, without pore blocking. An X-ray photoelectron spectroscopy study based on core level spectra and chemical mapping has demonstrated the successful grafting route, resulting in a homogeneous distribution of the covalently bonded hydrophobic molecules on the surface of the GDL fibers. The result was corroborated by contact angle measurement, showing similar hydrophobicity between the grafted and PTFE-modified GDLs. The electrochemically modified GDLs were tested in proton exchange membrane fuel cells under automotive, wet, and dry conditions and demonstrated improved performance over traditional GDLs.
NASA Astrophysics Data System (ADS)
Matsumoto, Takahiro; Nagata, Yasuaki; Nose, Tetsuro; Kawashima, Katsuhiro
2001-06-01
We present two demonstrations of a laser ultrasonic method. First, we present results for the Young's modulus of ceramics at temperatures above 1600 °C. Second, we introduce a method for determining the internal temperature distribution of a hot steel plate with errors of less than 3%. We compare the results obtained by the laser ultrasonic method with conventional contact techniques to show the validity of this method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Huiqiang; Wu, Xizeng, E-mail: xwu@uabmc.edu; Xiao, Tiqiao, E-mail: tqxiao@sinap.ac.cn
Purpose: Propagation-based phase-contrast CT (PPCT) utilizes highly sensitive phase-contrast technology applied to x-ray microtomography. Performing phase retrieval on the acquired angular projections can enhance image contrast and enable quantitative imaging. In this work, the authors demonstrate the validity and advantages of a novel technique for high-resolution PPCT by using the generalized phase-attenuation duality (PAD) method of phase retrieval. Methods: A high-resolution angular projection data set of a fish head specimen was acquired with a monochromatic 60-keV x-ray beam. In one approach, the projection data were directly used for tomographic reconstruction. In two other approaches, the projection data were preprocessed by phase retrieval based on either the linearized PAD method or the generalized PAD method. The reconstructed images from all three approaches were then compared in terms of tissue contrast-to-noise ratio and spatial resolution. Results: The authors' experimental results demonstrated the validity of the PPCT technique based on the generalized PAD-based method. In addition, the results show that the authors' technique is superior to the direct PPCT technique as well as the linearized PAD-based PPCT technique in terms of their relative capabilities for tissue discrimination and characterization. Conclusions: This novel PPCT technique demonstrates great potential for biomedical imaging, especially for applications that require high spatial resolution and limited radiation exposure.
Calibrating the orientation between a microlens array and a sensor based on projective geometry
NASA Astrophysics Data System (ADS)
Su, Lijuan; Yan, Qiangqiang; Cao, Jun; Yuan, Yan
2016-07-01
We demonstrate a method for calibrating a microlens array (MLA) with a sensor component by building a plenoptic camera with a conventional prime lens. This calibration method includes a geometric model, a setup to adjust the distance (L) between the prime lens and the MLA, a calibration procedure for determining the subimage centers, and an optimization algorithm. The geometric model introduces nine unknown parameters regarding the centers of the microlenses and their images, whereas the distance adjustment setup provides an initial guess for the distance L. The simulation results verify the effectiveness and accuracy of the proposed method. The experimental results demonstrate that the calibration process can be performed with a commercial prime lens and that the proposed method can be used to quantitatively evaluate whether an MLA and a sensor are assembled properly for plenoptic systems.
Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Oyama, Akira; Liou, Meng-Sing
2001-01-01
A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.
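The Pareto-dominance bookkeeping at the core of any multiobjective evolutionary algorithm can be sketched briefly. The objectives below mirror the pump problem (maximize total head, minimize input power), but the candidate values are made up for illustration; a real MOEA would also include selection, crossover, and mutation operators:

```python
def dominates(a, b):
    """True when design a dominates design b: no worse in both objectives
    and strictly better in at least one (maximize head, minimize power)."""
    head_a, power_a = a
    head_b, power_b = b
    no_worse = head_a >= head_b and power_a <= power_b
    strictly_better = head_a > head_b or power_a < power_b
    return no_worse and strictly_better

def pareto_front(designs):
    """Keep only the non-dominated designs -- the Pareto-optimal set."""
    return [d for d in designs
            if not any(dominates(e, d) for e in designs if e != d)]

# hypothetical (head, power) pairs for five candidate pump designs
candidates = [(10, 5), (12, 6), (8, 4), (11, 5), (9, 7)]
front = pareto_front(candidates)
print(front)
```

Design (10, 5) is dominated by (11, 5) (more head at the same power) and is therefore excluded from the front, while the three surviving designs represent different head/power trade-offs.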
E. Gardiner; J. Stanturf; T. Leininger; P. Hamel; L. Jr. Dorris; J. Portwood; J. Shepard
2008-01-01
As forest scientists increase their role in the process of science delivery, many research organizations are searching for novel methods to effectively build collaboration with managers to produce valued results. This article documents our experience with establishment of a forest restoration research and demonstration area in the Lower Mississippi Alluvial Valley (...
Uncertainty of fast biological radiation dose assessment for emergency response scenarios.
Ainsbury, Elizabeth A; Higueras, Manuel; Puig, Pedro; Einbeck, Jochen; Samaga, Daniel; Barquinero, Joan Francesc; Barrios, Lleonard; Brzozowska, Beata; Fattibene, Paola; Gregoire, Eric; Jaworska, Alicja; Lloyd, David; Oestreicher, Ursula; Romm, Horst; Rothkamm, Kai; Roy, Laurence; Sommer, Sylwester; Terzoudi, Georgia; Thierens, Hubert; Trompier, Francois; Vral, Anne; Woda, Clemens
2017-01-01
Reliable dose estimation is an important factor in appropriate dosimetric triage categorization of exposed individuals to support radiation emergency response. Following work done under the EU FP7 MULTIBIODOSE and RENEB projects, formal methods for defining uncertainties on biological dose estimates are compared using simulated and real data from recent exercises. The results demonstrate that a Bayesian method of uncertainty assessment is the most appropriate, even in the absence of detailed prior information. The relative accuracy and relevance of techniques for calculating uncertainty and combining assay results to produce single dose and uncertainty estimates is further discussed. Finally, it is demonstrated that whatever uncertainty estimation method is employed, ignoring the uncertainty on fast dose assessments can have an important impact on rapid biodosimetric categorization.
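To make the Bayesian idea concrete, here is a small, hypothetical sketch of grid-based posterior dose estimation from a dicentric chromosome count. The linear-quadratic yield coefficients and the observed counts are invented for illustration and are not values from the projects above:

```python
import numpy as np

# Illustrative linear-quadratic yield curve (dicentrics per cell):
# lambda(D) = c + alpha*D + beta*D^2  -- coefficients here are made up
c, alpha, beta = 0.001, 0.03, 0.06

def posterior(n_dic, n_cells, doses, prior=None):
    """Grid posterior over dose for an observed dicentric count (Poisson model).
    With prior=None a flat prior over the dose grid is assumed."""
    lam = (c + alpha * doses + beta * doses**2) * n_cells
    loglik = n_dic * np.log(lam) - lam          # Poisson log-likelihood (no constant)
    logpost = loglik + (0.0 if prior is None else np.log(prior))
    post = np.exp(logpost - logpost.max())      # stabilize before normalizing
    return post / post.sum()

doses = np.linspace(0.01, 6, 600)               # dose grid in Gy
post = posterior(n_dic=25, n_cells=500, doses=doses)
mean = (doses * post).sum()                     # posterior mean dose
cdf = post.cumsum()                             # 95% credible interval
lo = doses[np.searchsorted(cdf, 0.025)]
hi = doses[np.searchsorted(cdf, 0.975)]
print(round(mean, 2), round(lo, 2), round(hi, 2))
```

The credible interval is the kind of uncertainty statement the Bayesian approach provides naturally; combining multiple assays amounts to multiplying their likelihoods on the same dose grid.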
Demonstrating Sound Impulses in Pipes.
ERIC Educational Resources Information Center
Raymer, M. G.; Micklavzina, Stan
1995-01-01
Describes a simple, direct method to demonstrate the effects of the boundary conditions on sound impulse reflections in pipes. A graphical display of the results can be made using a pipe, cork, small hammer, microphone, and fast recording electronics. Explains the principles involved. (LZ)
Efficient path-based computations on pedigree graphs with compact encodings
2012-01-01
A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis and also demonstrate the efficiency of our method for evaluating inbreeding coefficients, as compared to previous methods, through experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
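The quantities being computed here can be illustrated with the textbook recursive definition of the kinship coefficient (the paper's contribution is a faster path-based encoding, not this recursion). A minimal sketch on a hypothetical full-sib pedigree, assuming founders are unrelated and non-inbred:

```python
from functools import lru_cache

# pedigree: child -> (father, mother); founders are absent from the dict.
# Hypothetical example: A and B are founders, C and D are full sibs, E is
# the child of the sib mating C x D.
PED = {"C": ("A", "B"), "D": ("A", "B"), "E": ("C", "D")}

def is_ancestor(a, d):
    """True when a is an ancestor of individual d."""
    if d not in PED:
        return False
    f, m = PED[d]
    return a in (f, m) or is_ancestor(a, f) or is_ancestor(a, m)

@lru_cache(maxsize=None)
def kinship(i, j):
    """Kinship coefficient phi(i, j): probability that random alleles drawn
    from i and j are identical by descent."""
    if i == j:
        if i in PED:                      # phi(i,i) = (1 + F_i) / 2
            f, m = PED[i]
            return 0.5 * (1 + kinship(f, m))
        return 0.5                        # non-inbred founder
    if i in PED and not is_ancestor(i, j):
        f, m = PED[i]                     # recurse through i's parents
        return 0.5 * (kinship(f, j) + kinship(m, j))
    if j in PED:
        f, m = PED[j]                     # otherwise recurse through j's parents
        return 0.5 * (kinship(i, f) + kinship(i, m))
    return 0.0                            # two distinct founders: unrelated

def inbreeding(i):
    """Inbreeding coefficient F_i = kinship of i's parents."""
    if i not in PED:
        return 0.0
    f, m = PED[i]
    return kinship(f, m)

print(inbreeding("E"))
```

Full-sib mating gives the classical F = 1/4; path-based methods compute the same quantity by enumerating paths through common ancestors, which is where efficient path encodings pay off on large pedigrees.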
Automatic EEG artifact removal: a weighted support vector machine approach with error correction.
Shao, Shi-Yun; Shen, Kai-Quan; Ong, Chong Jin; Wilder-Smith, Einar P V; Li, Xiao-Ping
2009-02-01
An automatic electroencephalogram (EEG) artifact removal method is presented in this paper. Compared to past methods, it has two unique features: 1) a weighted version of support vector machine formulation that handles the inherent unbalanced nature of component classification and 2) the ability to accommodate structural information typically found in component classification. The advantages of the proposed method are demonstrated on real-life EEG recordings with comparisons made to several benchmark methods. Results show that the proposed method is preferable to the other methods in the context of artifact removal by achieving a better tradeoff between removing artifacts and preserving inherent brain activities. Qualitative evaluation of the reconstructed EEG epochs also demonstrates that after artifact removal inherent brain activities are largely preserved.
NASA Astrophysics Data System (ADS)
Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling
2017-11-01
Identifying whole blood as either human or nonhuman is an important responsibility of import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration-set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used to identify blood species with spectroscopic methods, while the Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, both the PLSDA and LSSVM methods were used for human blood discrimination. Compared with the PLSDA results, the LSSVM method enhanced the performance of the identification models. The overall results show that the LSSVM method is more feasible for identifying human and animal blood species, and demonstrate that LSSVM is a reliable and robust method for human blood identification that can be more effective and accurate.
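A distinguishing feature of the LSSVM mentioned above is that training reduces to a single linear solve rather than a quadratic program. A self-contained numpy sketch — the synthetic "spectra" and hyperparameters are invented for illustration, not taken from the study:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, C=10.0, gamma=1.0):
    """LS-SVM training: solve the linear system
        [ 0   1^T     ] [b    ]   [0]
        [ 1   K + I/C ] [alpha] = [y]
    where K is the kernel matrix and 1/C acts as a ridge term."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, gamma) + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

# two synthetic "spectral" classes: offset Gaussian blobs in 5 dimensions
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 5)), rng.normal(1.0, 0.3, (30, 5))])
y = np.concatenate([np.full(30, -1.0), np.full(30, 1.0)])
b, alpha = lssvm_train(X, y)
acc = np.mean(lssvm_predict(X, b, alpha, X) == y)
```

Because every training sample receives a dual weight (the equality-constraint formulation has no sparsity), LSSVM trades the sparse support-vector set of a standard SVM for this very simple training step.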
Anderson, David M. G.; Floyd, Kyle A.; Barnes, Stephen; Clark, Judy M.; Clark, John I.; Mchaourab, Hassane; Schey, Kevin L.
2015-01-01
MALDI imaging requires careful sample preparation to obtain reliable, high quality images of small molecules, peptides, lipids, and proteins across tissue sections. Poor crystal formation, delocalization of analytes, and inadequate tissue adherence can affect the quality, reliability, and spatial resolution of MALDI images. We report a comparison of tissue mounting and washing methods that resulted in an optimized method using conductive carbon substrates that avoids thaw mounting or washing steps, minimizes protein delocalization, and prevents tissue detachment from the target surface. Application of this method to image ocular lens proteins of small vertebrate eyes demonstrates the improved methodology for imaging abundant crystallin protein products. This method was demonstrated for tissue sections from rat, mouse, and zebrafish lenses resulting in good quality MALDI images with little to no delocalization. The images indicate, for the first time in mouse and zebrafish, discrete localization of crystallin protein degradation products resulting in concentric rings of distinct protein contents that may be responsible for the refractive index gradient of vertebrate lenses. PMID:25665708
Engineering Aerothermal Analysis for X-34 Thermal Protection System Design
NASA Technical Reports Server (NTRS)
Wurster, Kathryn E.; Riley, Christopher J.; Zoby, E. Vincent
1998-01-01
Design of the thermal protection system for any hypersonic flight vehicle requires determination of both the peak temperatures over the surface and the heating-rate history along the flight profile. In this paper, the process used to generate the aerothermal environments required for the X-34 Testbed Technology Demonstrator thermal protection system design is described as it has evolved from a relatively simplistic approach based on engineering methods applied to critical areas to one of detailed analyses over the entire vehicle. A brief description of the trajectory development leading to the selection of the thermal protection system design trajectory is included. Comparisons of engineering heating predictions with wind-tunnel test data and with results obtained using a Navier-Stokes flowfield code and an inviscid/boundary layer method are shown. Good agreement is demonstrated among all these methods for both the ground-test condition and the peak heating flight condition. Finally, the detailed analysis using engineering methods to interpolate the surface-heating-rate results from the inviscid/boundary layer method to predict the required thermal environments is described and results presented.
NASA Astrophysics Data System (ADS)
Winter, H.; Christopher-Allison, E.; Brown, A. L.; Goforth, A. M.
2018-04-01
Herein, we report an aerobic synthesis method to produce bismuth nanoparticles (Bi NPs) with average diameters in the range 40-80 nm using commercially available bismuth triiodide (BiI3) as the starting material; the method uses only readily available chemicals and conventional laboratory equipment. Furthermore, size data from replicates of the synthesis under standard reaction conditions indicate that this method is highly reproducible in achieving Bi NP populations with low standard deviations in the mean diameters. We also investigated the mechanism of the reaction, which we determined results from the reduction of a soluble alkylammonium iodobismuthate precursor species formed in situ. Under appropriate concentration conditions of iodobismuthate anion, we demonstrate that burst nucleation of Bi NPs results from reduction of Bi3+ by the coordinated, redox non-innocent iodide ligands when a threshold temperature is exceeded. Finally, we demonstrate phase transfer and silica coating of the Bi NPs, which results in stable aqueous colloids with retention of size, morphology, and colloidal stability. The resultant, high atomic number, hydrophilic Bi NPs prepared using this synthesis method have potential for application in emerging x-ray contrast and x-ray therapeutic applications.
Denisova, Galina F; Denisov, Dimitri A; Yeung, Jeffrey; Loeb, Mark B; Diamond, Michael S; Bramson, Jonathan L
2008-11-01
Understanding antibody function is often enhanced by knowledge of the specific binding epitope. Here, we describe a computer algorithm that permits epitope prediction based on a collection of random peptide epitopes (mimotopes) isolated by antibody affinity purification. We applied this methodology to the prediction of epitopes for five monoclonal antibodies against the West Nile virus (WNV) E protein, two of which exhibit therapeutic activity in vivo. This strategy was validated by comparison of our results with existing F(ab)-E protein crystal structures and mutational analysis by yeast surface display. We demonstrate that by combining the results of the mimotope method with our data from mutational analysis, epitopes could be predicted with greater certainty. The two methods displayed great complementarity as the mutational analysis facilitated epitope prediction when the results with the mimotope method were equivocal and the mimotope method revealed a broader number of residues within the epitope than the mutational analysis. Our results demonstrate that the combination of these two prediction strategies provides a robust platform for epitope characterization.
Aberration correction results in the IBM STEM instrument.
Batson, P E
2003-09-01
Results from the installation of aberration correction in the IBM 120 kV STEM argue that a sub-angstrom probe size has been achieved. Results and the experimental methods used to obtain them are described here. Some post-experiment processing is necessary to demonstrate the probe size of about 0.078 nm. While the promise of aberration correction is demonstrated, we remain at the very threshold of practicality, given the very stringent stability requirements.
EVALUATION OF OXYGEN-ENRICHED MSW/SEWAGE SLUDGE CO-INCINERATION DEMONSTRATION PROGRAM
This report provides an evaluation of a two-phased demonstration program conducted for the U.S. Environmental Protection Agency's Municipal Solid Waste Innovative Technology Evaluation Program, and the results thereof, of a recently developed method of sewage sludge managemen...
Marjanovič, Igor; Kandušer, Maša; Miklavčič, Damijan; Keber, Mateja Manček; Pavlin, Mojca
2014-12-01
In this study, we compared three different methods used for quantification of gene electrotransfer efficiency: fluorescence microscopy, flow cytometry and spectrofluorometry. We used CHO and B16 cells in suspension and a plasmid coding for GFP. The aim of this study was to compare and analyse the results obtained by fluorescence microscopy, flow cytometry and spectrofluorometry and, in addition, to analyse the applicability of spectrofluorometry for quantifying gene electrotransfer in cells in suspension. Our results show that all three methods detected a similar critical electric field strength, around 0.55 kV/cm, for both cell lines. Moreover, results obtained on CHO cells showed that the total fluorescence intensity and the percentage of transfection exhibit a similar increase in response to increasing electric field strength for all three methods. For B16 cells, there was good correlation at low electric field strengths, but at high field strengths the flow cytometer results deviated from those obtained by fluorescence microscopy and spectrofluorometry. Our study showed that all three methods detected similar critical electric field strengths, and high correlations between results were obtained except for B16 cells at high electric field strengths. The results also demonstrated that flow cytometry measures higher values of percentage transfection than microscopy. Furthermore, we have demonstrated that spectrofluorometry can be used as a simple and consistent method to determine gene electrotransfer efficiency in cells in suspension.
Didactic satellite based on Android platform for space operation demonstration and development
NASA Astrophysics Data System (ADS)
Ben Bahri, Omar; Besbes, Kamel
2018-03-01
Space technology plays a pivotal role in societal development. It offers new methods for telemetry, monitoring and control. However, this sector requires training, research and skills development, and the lack of instruments, materials and budgets makes satellite technology difficult to grasp. The objective of this paper is to describe a demonstration prototype built around a smartphone for the study of space operations. The first task was carried out to demonstrate spatial imagery and attitude determination missions over a wireless link. The smartphone's Bluetooth was used to achieve this goal, together with a new method to enable real-time transmission. In addition, an algorithm based on a quaternion-based Kalman filter was included in order to assess the reliability of the prototype's orientation estimate. The second task was carried out to demonstrate the attitude control mission using the smartphone's orientation sensor, including a new method for an autonomous guided mode. As a result, the acquisition platform showed real-time measurement with good accuracy for orientation detection and image transmission. In addition, the prototype kept its balance during the demonstration based on the attitude control method.
Mining knowledge in noisy audio data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czyzewski, A.
1996-12-31
This paper demonstrates a KDD method applied to audio data analysis; in particular, it presents the possibilities that result from replacing traditional methods of analysis and acoustic signal processing with KDD algorithms when restoring audio recordings affected by strong noise.
Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.
2013-01-01
We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user to build a statistical model which captures the texture and shape variations of the nuclear structures from a given dataset to be segmented. Segmentation of subsequent, unlabeled, images is then performed by finding the model instance that best matches (in the normalized cross correlation sense) a local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the template-based method we propose is more robust: it better handles variations in illumination and variations in texture from different imaging modalities, provides smoother and more accurate segmentation borders, and better handles cluttered nuclei. PMID:23568787
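As an aside, the normalized cross correlation matching criterion used above can be sketched in a few lines. This is an illustrative toy with hypothetical function names, not the authors' implementation, and the template here is a fixed patch rather than the paper's learned statistical model instance:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross correlation between an image patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return 0.0 if denom == 0 else float((p * t).sum() / denom)

def best_match(image, template):
    """Slide the template over the image; return ((row, col), score) of the best NCC."""
    th, tw = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best_score:
                best_score, best_pos = s, (r, c)
    return best_pos, best_score
```

A perfect match scores 1.0 regardless of local brightness or contrast, which is what makes the criterion robust to illumination changes.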
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
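The advantage claimed here, exact analytic gradients versus finite-difference approximations, can be illustrated on a toy objective. This is a sketch only and is unrelated to the actual Pycycle code; all names are hypothetical:

```python
def f(x):
    """Toy objective standing in for an engine-cycle figure of merit."""
    return x ** 2 + 3 * x

def analytic_grad(x):
    # Exact derivative: d/dx (x^2 + 3x) = 2x + 3
    return 2 * x + 3

def fd_grad(x, h=1e-6):
    # Central finite-difference approximation of the same derivative:
    # costs two extra function evaluations and carries truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def gradient_descent(grad, x0=0.0, lr=0.1, steps=200):
    """Gradient-based minimization; with analytic_grad every step uses an exact slope."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x
```

For a cycle model with many inputs, the finite-difference version repeats the full model evaluation per input per step, which is the cost the analytic derivatives avoid.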
Mapp, Latisha; Klonicki, Patricia; Takundwa, Prisca; Hill, Vincent R; Schneeberger, Chandra; Knee, Jackie; Raynor, Malik; Hwang, Nina; Chambers, Yildiz; Miller, Kenneth; Pope, Misty
2015-11-01
The U.S. Environmental Protection Agency's (EPA) Water Laboratory Alliance (WLA) currently uses ultrafiltration (UF) for concentration of biosafety level 3 (BSL-3) agents from large volumes (up to 100-L) of drinking water prior to analysis. Most UF procedures require comprehensive training and practice to achieve and maintain proficiency. As a result, there was a critical need to develop quality control (QC) criteria. Because select agents are difficult to work with and pose a significant safety hazard, QC criteria were developed using surrogates, including Enterococcus faecalis and Bacillus atrophaeus. This article presents the results from the QC criteria development study and results from a subsequent demonstration exercise in which E. faecalis was used to evaluate proficiency using UF to concentrate large volume drinking water samples. Based on preliminary testing, EPA Method 1600 and Standard Methods 9218, for E. faecalis and B. atrophaeus respectively, were selected for use during the QC criteria development study. The QC criteria established for Method 1600 were used to assess laboratory performance during the demonstration exercise. Based on the results of the QC criteria study, E. faecalis and B. atrophaeus can be used effectively to demonstrate and maintain proficiency using ultrafiltration. Published by Elsevier B.V.
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
One step linear reconstruction method for continuous wave diffuse optical tomography
NASA Astrophysics Data System (ADS)
Ukhrowiyah, N.; Yasin, M.
2017-09-01
A one-step linear reconstruction method for continuous wave diffuse optical tomography is proposed and demonstrated on a polyvinyl chloride based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states corresponding to the data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object. The approach is demonstrated with both simulation and experimental data: a numerical object is used to produce the simulation data, while the polyvinyl chloride based material and the breast phantom sample are used to produce the experimental data. Comparisons between the experimental and simulation results are conducted to validate the proposed method. The reconstructed images produced by the one-step linear reconstruction method closely match the original objects. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous wave diffuse optical tomography in the early diagnosis of breast cancer.
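A one-step linear reconstruction of this general kind reduces to a single regularized least-squares solve on the difference data. The sketch below is an illustration under assumed notation (the Jacobian `J`, the regularization weight `lam`, and the function name are not from the paper):

```python
import numpy as np

def one_step_reconstruct(J, y_baseline, y_perturbed, lam=1e-2):
    """Recover the change in optical properties from the change in boundary
    data in a single Tikhonov-regularized solve:
        delta = (J^T J + lam * I)^(-1) J^T (y_perturbed - y_baseline)
    J maps a perturbation in optical properties to boundary measurements.
    """
    dy = y_perturbed - y_baseline
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dy)
```

The regularization coefficient `lam` plays the role of the "regulation coefficient" selection mentioned in the abstract: larger values suppress noise amplification at the cost of biasing the recovered perturbation toward zero.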
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models such as cylinders and spheres has been proposed in this paper. This proposed method is based on both the deconvolution technique and the simplex algorithm for linear optimization to most effectively estimate the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different white Gaussian random noise levels to demonstrate the capability and reliability of the method. The results acquired show that the estimated parameter values derived by this proposed method are close to the assumed true parameter values. The validity of this method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from real field data.
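For illustration, a sphere-model fit of the kind described above can be sketched with a forward model and a downhill-simplex (Nelder-Mead) search. Note this is a stand-in: the paper combines a deconvolution technique with a simplex algorithm for linear optimization, which differs from the generic least-squares fit below, and all names are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def sphere_anomaly(x, amp, depth):
    """Residual gravity anomaly of a buried sphere along a profile x
    (centre beneath x = 0): g(x) = amp * depth / (x^2 + depth^2)^(3/2)."""
    return amp * depth / (x ** 2 + depth ** 2) ** 1.5

def fit_sphere(x, g_obs, guess=(1.0, 1.0)):
    """Estimate (amplitude coefficient, depth) by least squares using the
    Nelder-Mead downhill simplex (derivative-free, robust to noisy profiles)."""
    loss = lambda p: float(np.sum((sphere_anomaly(x, *p) - g_obs) ** 2))
    return minimize(loss, guess, method="Nelder-Mead").x
```

On a synthetic noise-free profile the fit recovers the assumed amplitude coefficient and depth, mirroring the paper's synthetic-data validation step.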
On the accuracy of the LSC-IVR approach for excitation energy transfer in molecular aggregates
NASA Astrophysics Data System (ADS)
Teh, Hung-Hsuan; Cheng, Yuan-Chung
2017-04-01
We investigate the applicability of the linearized semiclassical initial value representation (LSC-IVR) method to excitation energy transfer (EET) problems in molecular aggregates by simulating the EET dynamics of a dimer model in a wide range of parameter regime and comparing the results to those obtained from a numerically exact method. It is found that the LSC-IVR approach yields accurate population relaxation rates and decoherence rates in a broad parameter regime. However, the classical approximation imposed by the LSC-IVR method does not satisfy the detailed balance condition, generally leading to incorrect equilibrium populations. Based on this observation, we propose a post-processing algorithm to solve the long time equilibrium problem and demonstrate that this long-time correction method successfully removed the deviations from exact results for the LSC-IVR method in all of the regimes studied in this work. Finally, we apply the LSC-IVR method to simulate EET dynamics in the photosynthetic Fenna-Matthews-Olson complex system, demonstrating that the LSC-IVR method with long-time correction provides excellent description of coherent EET dynamics in this typical photosynthetic pigment-protein complex.
Parameter Accuracy in Meta-Analyses of Factor Structures
ERIC Educational Resources Information Center
Gnambs, Timo; Staufenbiel, Thomas
2016-01-01
Two new methods for the meta-analysis of factor loadings are introduced and evaluated by Monte Carlo simulations. The direct method pools each factor loading individually, whereas the indirect method synthesizes correlation matrices reproduced from factor loadings. The results of the two simulations demonstrated that the accuracy of…
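A minimal sketch of the direct method's idea, pooling each factor loading individually across studies, might look as follows. The sample-size weighting below is an assumption for illustration; the paper's exact estimator may differ:

```python
import numpy as np

def pool_loadings(loadings, ns):
    """Direct method (illustrative): pool each factor loading individually
    across k studies with a sample-size-weighted mean.
    loadings: (k, p) array of loadings from k studies on p items;
    ns: length-k vector of study sample sizes."""
    L = np.asarray(loadings, dtype=float)
    w = np.asarray(ns, dtype=float)
    return (L * w[:, None]).sum(axis=0) / w.sum()
```

The indirect method would instead reproduce each study's correlation matrix from its loadings, pool the matrices, and re-factor the pooled matrix.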
Evaluation of DuPont Qualicon Bax System PCR assay for yeast and mold.
Wallace, F Morgan; Burns, Frank; Fleck, Lois; Andaloro, Bridget; Farnum, Andrew; Tice, George; Ruebl, Joanne
2010-01-01
Evaluations were conducted to test the performance of the BAX System PCR assay, which was certified as Performance Tested Method 010902, for screening yeast and mold in yogurt, corn starch, and milk-based powdered infant formula. Method comparison studies performed on samples with low-level inocula showed that the BAX System demonstrates a sensitivity equivalent to the U.S. Food and Drug Administration's Bacteriological Analytical Manual culture method, but with a significantly shorter time to obtain results. Tests to evaluate inclusivity and exclusivity returned no false-negative and no false-positive results on a diverse panel of isolates, and tests for lot-to-lot variability and tablet stability demonstrated consistent performance.
Methods of Combinatorial Optimization to Reveal Factors Affecting Gene Length
Bolshoy, Alexander; Tatarinova, Tatiana
2012-01-01
In this paper we present a novel method for genome ranking according to gene lengths. The main outcomes described in this paper are the following: the formulation of the genome ranking problem, the presentation of relevant approaches to solve it, and the demonstration of preliminary results from ordering prokaryotic genomes. Using a subset of prokaryotic genomes, we attempted to uncover factors affecting gene length. We have demonstrated that hyperthermophilic species have shorter genes than mesophilic organisms, which probably means that environmental factors affect gene length. Moreover, these preliminary results show that environmental factors group together in the ranking of evolutionarily distant species. PMID:23300345
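The core ranking step, ordering genomes by a gene-length statistic, can be sketched as follows. Mean gene length is used here purely for illustration; the paper's combinatorial-optimization ranking criterion is more elaborate, and the genome names are hypothetical:

```python
def rank_genomes(gene_lengths):
    """Rank genomes from shortest to longest mean gene length.
    gene_lengths: dict mapping genome name -> list of gene lengths (bp)."""
    mean_len = {g: sum(ls) / len(ls) for g, ls in gene_lengths.items()}
    return sorted(mean_len, key=mean_len.get)
```

Under the paper's finding, a hyperthermophile would tend to rank before a mesophile in such an ordering.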
An overview of computational simulation methods for composite structures failure and life analysis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1993-01-01
Three parallel computational simulation methods are being developed at the LeRC Structural Mechanics Branch (SMB) for composite structures failure and life analysis: progressive fracture CODSTRAN; hierarchical methods for high-temperature composites; and probabilistic evaluation. Results to date demonstrate that these methods are effective in simulating composite structures failure/life/reliability.
Evaluating the utility of two gestural discomfort evaluation methods
Son, Minseok; Jung, Jaemoon; Park, Woojin
2017-01-01
Evaluating physical discomfort of designed gestures is important for creating safe and usable gesture-based interaction systems; yet, gestural discomfort evaluation has not been extensively studied in HCI, and few evaluation methods seem currently available whose utility has been experimentally confirmed. To address this, this study empirically demonstrated the utility of the subjective rating method after a small number of gesture repetitions (a maximum of four repetitions) in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. The subjective rating method has been widely used in previous gesture studies but without empirical evidence on its utility. This study also proposed a gesture discomfort evaluation method based on an existing ergonomics posture evaluation tool (Rapid Upper Limb Assessment) and demonstrated its utility in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. Rapid Upper Limb Assessment is an ergonomics postural analysis tool that quantifies the work-related musculoskeletal disorders risks for manual tasks, and has been hypothesized to be capable of correctly determining discomfort resulting from prolonged, repetitive gesture use. The two methods were evaluated through comparisons against a baseline method involving discomfort rating after actual prolonged, repetitive gesture use. Correlation analyses indicated that both methods were in good agreement with the baseline. The methods proposed in this study seem useful for predicting discomfort resulting from prolonged, repetitive gesture use, and are expected to help interaction designers create safe and usable gesture-based interaction systems. PMID:28423016
Schilling, Katherine; Applegate, Rachel
2012-01-01
Objectives and Background: Libraries are increasingly called upon to demonstrate student learning outcomes and the tangible benefits of library educational programs. This study reviewed and compared the efficacy of traditionally used measures for assessing library instruction, examining the benefits and drawbacks of assessment measures and exploring the extent to which knowledge, attitudes, and behaviors actually paralleled demonstrated skill levels. Methods: An overview of recent literature on the evaluation of information literacy education addressed these questions: (1) What evaluation measures are commonly used for evaluating library instruction? (2) What are the pros and cons of popular evaluation measures? (3) What are the relationships between measures of skills versus measures of attitudes and behavior? Research outcomes were used to identify relationships between measures of attitudes, behaviors, and skills, which are typically gathered via attitudinal surveys, written skills tests, or graded exercises. Results and Conclusions: Results provide useful information about the efficacy of instructional evaluation methods, including showing significant disparities between attitudes, skills, and information usage behaviors. This information can be used by librarians to implement the most appropriate evaluation methods for measuring important variables that accurately demonstrate students' attitudes, behaviors, or skills. PMID:23133325
Hong-Seng, Gan; Sayuti, Khairil Amir; Karim, Ahmad Helmy Abdul
2017-01-01
Existing knee cartilage segmentation methods suffer from several technical drawbacks. In essence, graph cuts remains highly susceptible to image noise despite extended research interest; the active shape model is often constrained by the selection of training data; and shortest-path methods demonstrate a shortcut problem in the presence of weak boundaries, a common problem in medical images. The aim of this study is to investigate the capability of random walks as a knee cartilage segmentation method. Experts scribble on the knee cartilage image to initialize random walks segmentation. The reproducibility of the method is then assessed against manual segmentation using the Dice Similarity Index. The evaluation covers normal cartilage and diseased cartilage sections, divided into whole and single cartilage categories. A total of 15 normal images and 10 osteoarthritic images were included. The results showed that the random walks method has high reproducibility in both normal cartilage (observer 1: 0.83±0.028 and observer 2: 0.82±0.026) and osteoarthritic cartilage (observer 1: 0.80±0.069 and observer 2: 0.83±0.029). Moreover, results from the two experts were consistent with each other, suggesting that the inter-observer variation is insignificant (normal: P=0.21; diseased: P=0.15). The proposed segmentation model overcomes technical problems reported for existing semi-automated techniques and demonstrates highly reproducible and consistent results against the manual segmentation method.
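The Dice Similarity Index used above to score agreement between segmentations is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Index between two binary masks:
    DSI = 2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```

Scores around 0.8, as reported in the study, indicate strong but not pixel-perfect overlap between the automated and manual segmentations.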
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fancher, Chris M.; Blendell, John E.; Bowman, Keith J.
2017-02-07
A method leveraging Rietveld full-pattern texture analysis to decouple induced domain texture from a preferred grain orientation is presented in this paper. The proposed method is demonstrated by determining the induced domain texture in a polar polymorph of ⟨100⟩-oriented 0.91Bi1/2Na1/2TiO3-0.07BaTiO3-0.02K0.5Na0.5NbO3. Domain textures determined using the present method are compared with results obtained via single peak fitting. Texture determined using single peak fitting estimated more domain alignment than that determined using the Rietveld-based method. These results suggest that the combination of grain texture and phase transitions can lead to single peak fitting under- or over-estimating domain texture. Finally, while demonstrated for a bulk piezoelectric, the proposed method can be applied to quantify domain textures in multi-component systems and thin films.
Pavement crack detection combining non-negative feature with fast LoG in complex scene
NASA Astrophysics Data System (ADS)
Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu
2015-12-01
Pavement crack detection is affected by considerable interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Due to these unfavorable factors, existing crack detection methods have difficulty distinguishing cracks from the background correctly. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection that combines a non-negative feature with a fast LoG (Laplacian of Gaussian) filter is proposed. The two key novelties and benefits of this new approach are that it 1) uses image pixel gray value compensation to acquire a uniform image, and 2) combines the non-negative feature with the fast LoG to extract crack information. The image preprocessing results demonstrate that the method is indeed able to homogenize the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach can detect crack regions more correctly than traditional methods.
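The LoG component can be illustrated with a standard Laplacian-of-Gaussian filter. This sketch uses `scipy.ndimage.gaussian_laplace` and a hypothetical relative threshold; it does not reproduce the paper's non-negative feature or its fast LoG approximation:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def crack_mask(image, sigma=2.0, rel_thresh=0.05):
    """Laplacian-of-Gaussian response: a dark, thin crack on a brighter
    pavement background yields strongly positive LoG values, which are
    thresholded relative to the maximum response."""
    log = gaussian_laplace(image.astype(float), sigma)
    return log > rel_thresh * log.max()
```

Choosing `sigma` close to the expected crack half-width maximizes the response, a standard property of LoG blob/ridge detectors.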
RADON MITIGATION STUDIES: NASHVILLE DEMONSTRATION
The report gives results of an EPA radon mitigation demonstration project involving 14 houses in the Nashville, TN, area with indoor radon levels of 5.6-47.6 pCi/L, using a variety of techniques, designed to be the most cost effective methods possible to implement, and yet adequa...
Molecular Diffusion Coefficients: Experimental Determination and Demonstration.
ERIC Educational Resources Information Center
Fate, Gwendolyn; Lynn, David G.
1990-01-01
Presented are laboratory methods which allow the demonstration and determination of the diffusion coefficients of compounds ranging in size from water to small proteins. Included are the procedures involving the use of a spectrometer, UV cell, tritiated agar, and oxygen diffusion. Results including quantification are described. (CW)
MindEdit: A P300-based text editor for mobile devices.
Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M
2017-01-01
Practical application of Brain-Computer Interfaces (BCIs) requires that the whole BCI system be portable. The mobility of BCI systems involves two aspects: making the electroencephalography (EEG) recording devices portable, and developing software applications with low computational complexity to be able to run on low computational-power devices such as tablets and smartphones. This paper addresses the development of MindEdit; a P300-based text editor for Android-based devices. Given the limited resources of mobile devices and their limited computational power, a novel ensemble classifier is utilized that uses Principal Component Analysis (PCA) features to identify P300 evoked potentials from EEG recordings. PCA computations in the proposed method are channel-based as opposed to concatenating all channels as in traditional feature extraction methods; thus, this method has less computational complexity compared to traditional P300 detection methods. The performance of the method is demonstrated on data recorded from MindEdit on an Android tablet using the Emotiv wireless neuroheadset. Results demonstrate the capability of the introduced PCA ensemble classifier to classify P300 data with maximum average accuracy of 78.37±16.09% for cross-validation data and 77.5±19.69% for online test data using only 10 trials per symbol and a 33-character training dataset. Our analysis indicates that the introduced method outperforms traditional feature extraction methods. For a faster operation of MindEdit, a variable number of trials scheme is introduced that resulted in an online average accuracy of 64.17±19.6% and a maximum bitrate of 6.25bit/min. These results demonstrate the efficacy of using the developed BCI application with mobile devices. Copyright © 2016 Elsevier Ltd. All rights reserved.
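The channel-based PCA feature extraction described above, fitting PCA per channel instead of on all channels concatenated, can be sketched via per-channel SVD. This is an illustration under assumed array shapes, not MindEdit's code:

```python
import numpy as np

def channel_pca_features(epochs, n_comp=3):
    """Channel-based PCA features: fit a PCA per channel (via SVD of the
    centred trial-by-sample matrix) and concatenate the leading component
    scores, rather than running one PCA on all channels concatenated.
    epochs: (n_trials, n_channels, n_samples)."""
    n_trials, n_channels, _ = epochs.shape
    feats = []
    for ch in range(n_channels):
        X = epochs[:, ch, :]                  # (n_trials, n_samples)
        Xc = X - X.mean(axis=0)               # centre over trials
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        feats.append(Xc @ Vt[:n_comp].T)      # per-channel PC scores
    return np.concatenate(feats, axis=1)      # (n_trials, n_channels * n_comp)
```

Each per-channel SVD works on a much smaller matrix than one SVD over the concatenated channels, which is the computational saving that matters on a tablet or smartphone.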
Silicon Field Effect Transistors as Dual-Use Sensor-Heater Hybrids
Reddy, Bobby; Elibol, Oguz H.; Nair, Pradeep R.; Dorvel, Brian R.; Butler, Felice; Ahsan, Zahab; Bergstrom, Donald E.; Alam, Muhammad A.; Bashir, Rashid
2011-01-01
We demonstrate the temperature mediated applications of a previously proposed novel localized dielectric heating method on the surface of dual purpose silicon field effect transistor (FET) sensor-heaters and perform modeling and characterization of the underlying mechanisms. The FETs are first shown to operate as electrical sensors via sensitivity to changes in pH in ionic fluids. The same devices are then demonstrated as highly localized heaters via investigation of experimental heating profiles and comparison to simulation results. These results offer further insight into the heating mechanism and help determine the spatial resolution of the technique. Two important biosensor platform applications spanning different temperature ranges are then demonstrated: a localized heat-mediated DNA exchange reaction and a method for dense selective functionalization of probe molecules via the heat catalyzed complete desorption and reattachment of chemical functionalization to the transistor surfaces. Our results show that the use of silicon transistors can be extended beyond electrical switching and field-effect sensing to performing localized temperature controlled chemical reactions on the transistor itself. PMID:21214189
Tutty, O.
2015-01-01
With the goal of providing the first example of the application of a recently proposed method, and thus demonstrating its ability to give results in principle, the global stability of a version of the rotating Couette flow is examined. The flow depends on the Reynolds number and on a parameter characterizing the magnitude of the Coriolis force. By converting the original Navier–Stokes equations to a finite-dimensional uncertain dynamical system using a partial Galerkin expansion, high-degree polynomial Lyapunov functionals were found by sum-of-squares polynomial optimization. It is demonstrated that the proposed method allows obtaining the exact global stability limit for this flow in a range of values of the parameter characterizing the Coriolis force. Outside this range a lower bound for the global stability limit was obtained, which is still better than the energy stability limit. In the course of the study, several results meaningful in the context of the method used were also obtained. Overall, the results demonstrate the applicability of the recently proposed approach to the global stability of fluid flows. To the best of our knowledge, this is the first case in which the global stability of a fluid flow has been proved by a generic method for a value of the Reynolds number greater than could be achieved with the energy stability approach. PMID:26730219
On eco-efficient technologies to minimize industrial water consumption
NASA Astrophysics Data System (ADS)
Amiri, Mohammad C.; Mohammadifard, Hossein; Ghaffari, Ghasem
2016-07-01
Purpose - Water scarcity will further stress available water systems and decrease the security of water in many areas. Innovative methods to minimize industrial water usage and waste production are therefore of paramount importance for extending fresh water resources, which are the main life support systems in many arid regions of the world. This paper demonstrates that there are good opportunities for many industries to save water and reduce waste water in the softening process by substituting traditional methods with eco-friendly ones. The patented puffing method is an eco-efficient and viable technology for water saving and waste reduction in the lime softening process. Design/methodology/approach - The lime softening process (LSP) is very sensitive to chemical reactions. In addition, optimal monitoring not only minimizes the sludge that must be disposed of but also reduces the operating costs of water conditioning. The weakness of the current (routine) control of LSP based on chemical analysis has been demonstrated experimentally and compared with the eco-efficient puffing method. Findings - This paper demonstrates that there is a good opportunity for many industries to save water and reduce waste water in the softening process by substituting the traditional method with the puffing method, a patented eco-efficient technology. Originality/value - Details of the innovative works required to minimize industrial water usage and waste production are outlined in this paper. Employing the novel puffing method for monitoring of the lime softening process saves a considerable amount of water while reducing chemical sludge.
Current Options for the Treatment of Food Allergy
Lanser, Bruce J.; Wright, Benjamin L.; Orgel, Kelly A.; Vickery, Brian P.; Fleischer, David M.
2016-01-01
Food allergy is increasing in prevalence; as a result, there is intense focus on developing safe and effective therapies. Current methods of specific immunotherapy include oral, sublingual, and epicutaneous, while nonspecific methods that have been investigated include: Chinese herbal medicine, probiotics, and anti-IgE antibodies. Although some studies have demonstrated efficacy in inducing desensitization, questions regarding safety and the potential for achieving immune tolerance remain. Although some of these therapies demonstrate promise, further investigation is required before their incorporation into routine clinical practice. PMID:26456449
Field Evaluation of Advanced Methods of Subsurface Exploration for Transit Tunneling
DOT National Transportation Integrated Search
1980-06-01
This report presents the results of a field evaluation of advanced methods of subsurface exploration on an ongoing urban rapid transit tunneling project. The objective of this study is to evaluate, through a field demonstration project, the feasibili...
Free energy computations employing Jarzynski identity and Wang–Landau algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalyan, M. Suman, E-mail: maroju.sk@gmail.com; Murthy, K. P. N.; School of Physics, University of Hyderabad, Hyderabad, Telangana, India – 500046
We introduce a simple method to compute free energy differences employing the Jarzynski identity in conjunction with the Wang–Landau algorithm. We demonstrate this method on an Ising spin system by comparing the results with those obtained from canonical sampling.
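The Jarzynski identity itself, exp(-βΔF) = ⟨exp(-βW)⟩, can be illustrated with a minimal sketch. The Gaussian work distribution below is purely illustrative (not the paper's Ising/Wang–Landau setup), with units chosen so that β = 1; for Gaussian work the identity gives the exact answer ΔF = μ − σ²/2, which the estimator should reproduce.

```python
# Minimal sketch of a Jarzynski free-energy estimator (assumption: beta = 1,
# Gaussian work distribution for illustration only).
import math
import random

def jarzynski_free_energy(work_samples, beta=1.0):
    """Estimate dF = -(1/beta) * ln< exp(-beta*W) > from work samples."""
    n = len(work_samples)
    mean_exp = sum(math.exp(-beta * w) for w in work_samples) / n
    return -math.log(mean_exp) / beta

rng = random.Random(42)
mu, sigma = 2.0, 0.5                      # mean and spread of the work values
works = [rng.gauss(mu, sigma) for _ in range(200_000)]

dF = jarzynski_free_energy(works)
exact = mu - sigma**2 / 2                 # analytic result for Gaussian work
```

In practice the exponential average is dominated by rare low-work trajectories, which is why the paper combines the identity with Wang–Landau sampling rather than naive averaging.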
Project-Method Fit: Exploring Factors That Influence Agile Method Use
ERIC Educational Resources Information Center
Young, Diana K.
2013-01-01
While the productivity and quality implications of agile software development methods (SDMs) have been demonstrated, research concerning the project contexts where their use is most appropriate has yielded less definitive results. Most experts agree that agile SDMs are not suited for all project contexts. Several project and team factors have been…
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1990-01-01
A structural power flow approach for the analysis of structure-borne transmission of vibrations is used to analyze the influence of structural parameters on transmitted power. The parametric analysis is also performed using the Statistical Energy Analysis approach, and the results are compared with those obtained using the power flow approach. The advantages of structural power flow analysis are demonstrated by comparing the type of results that are obtained by the two analytical methods. Also, to demonstrate that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental study of structural power flow is presented. This experimental study presents results for an L-shaped beam for which a solution was already available. Various methods to measure vibrational power flow are compared to study their advantages and disadvantages.
The deconvolution of complex spectra by artificial immune system
NASA Astrophysics Data System (ADS)
Galiakhmetova, D. I.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.
2017-11-01
An application of the artificial immune system method to the decomposition of complex spectra is presented. The results of decomposing a model contour consisting of three Gaussian components are demonstrated. The artificial immune system method is an optimization technique based on the behaviour of the immune system and belongs to the family of modern heuristic search methods for optimization.
Petersson, N. Anders; Sjogreen, Bjorn
2015-07-20
We develop a fourth order accurate finite difference method for solving the three-dimensional elastic wave equation in general heterogeneous anisotropic materials on curvilinear grids. The proposed method is an extension of the method for isotropic materials, previously described in the paper by Sjögreen and Petersson (2012) [11]. The proposed method discretizes the anisotropic elastic wave equation in second order formulation, using a node centered finite difference method that satisfies the principle of summation by parts. The summation by parts technique results in a provably stable numerical method that is energy conserving. Also, we generalize and evaluate the super-grid far-field technique for truncating unbounded domains. Unlike the commonly used perfectly matched layers (PML), the super-grid technique is stable for general anisotropic material, because it is based on a coordinate stretching combined with an artificial dissipation. Moreover, the discretization satisfies an energy estimate, proving that the numerical approximation is stable. We demonstrate by numerical experiments that sufficiently wide super-grid layers result in very small artificial reflections. Applications of the proposed method are demonstrated by three-dimensional simulations of anisotropic wave propagation in crystals.
On-chip Brownian relaxation measurements of magnetic nanobeads in the time domain
NASA Astrophysics Data System (ADS)
Østerberg, Frederik Westergaard; Rizzi, Giovanni; Hansen, Mikkel Fougt
2013-06-01
We present and demonstrate a new method for on-chip Brownian relaxation measurements on magnetic nanobeads in the time domain using magnetoresistive sensors. The beads are magnetized by the sensor self-field arising from the bias current passed through the sensors, and thus no external magnetic fields are needed. First, the method is demonstrated on Brownian relaxation measurements of beads with nominal sizes of 40, 80, 130, and 250 nm. The results are found to compare well to those obtained by an already established measurement technique in the frequency domain. Next, we demonstrate the time and frequency domain methods on Brownian relaxation detection of clustering of streptavidin coated magnetic beads in the presence of different concentrations of biotin-conjugated bovine serum albumin and obtain comparable results. In the time domain, a measurement is carried out in less than 30 s, which is about six times faster than in the frequency domain. This substantial reduction of the measurement time allows for continuous monitoring of the bead dynamics vs. time and opens the way for time-resolved studies, e.g., of binding kinetics.
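The timescale these measurements probe is the Brownian relaxation time of a bead, τ_B = 3ηV_h/(k_B·T). A minimal sketch, assuming water viscosity and room temperature (illustrative values, not taken from the text):

```python
# Sketch: Brownian relaxation time of a magnetic nanobead.
# Assumptions (not from the abstract): water viscosity 1 mPa*s, T = 298 K.
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K

def brownian_relaxation_time(diameter_m, eta=1.0e-3, temperature=298.0):
    """tau_B = 3*eta*V_h/(k_B*T) for a sphere of hydrodynamic diameter d."""
    volume = math.pi / 6.0 * diameter_m**3     # hydrodynamic volume
    return 3.0 * eta * volume / (K_B * temperature)

# An 80 nm bead in water relaxes in a fraction of a millisecond,
# well within the <30 s time-domain measurement window quoted above.
tau = brownian_relaxation_time(80e-9)
f_char = 1.0 / (2.0 * math.pi * tau)           # characteristic frequency scale
```

Bead clustering increases the hydrodynamic volume and hence τ_B, which is what makes relaxation measurements a readout for binding assays like the biotin-streptavidin experiment described.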
Solenoid-free plasma startup in NSTX using transient CHI
NASA Astrophysics Data System (ADS)
Raman, R.; Jarboe, T. R.; Mueller, D.; Nelson, B. A.; Bell, M. G.; Bell, R.; Gates, D.; Gerhardt, S.; Hosea, J.; Kaita, R.; Kugel, H.; LeBlanc, B.; Maingi, R.; Maqueda, R.; Menard, J.; Nagata, M.; Ono, M.; Paul, S.; Roquemore, L.; Sabbagh, S.; Soukhanovskii, V.; Taylor, G.
2009-06-01
Experiments in NSTX have now demonstrated the coupling of toroidal plasmas produced by the technique of coaxial helicity injection (CHI) to inductive sustainment and ramp-up of the toroidal plasma current. In these discharges, the central Ohmic transformer was used to apply an inductive loop voltage to discharges with a toroidal current of about 100 kA created by CHI. The coupled discharges have ramped up to >700 kA and transitioned into an H-mode demonstrating compatibility of this startup method with conventional operation. The electron temperature in the coupled discharges reached over 800 eV and the resulting plasma had low inductance, which is preferred for long-pulse high-performance discharges. These results from NSTX in combination with the previously obtained record 160 kA non-inductively generated startup currents in an ST or tokamak in NSTX demonstrate that CHI is a viable solenoid-free plasma startup method for future STs and tokamaks.
NASA Astrophysics Data System (ADS)
Vainshtein, Igor; Baruch, Shlomi; Regev, Itai; Segal, Victor; Filis, Avishai; Riabzev, Sergey
2018-05-01
The growing demand for EO applications that operate around the clock, 24 hours a day, 7 days a week, such as border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and optimized Integrated Logistic Support (ILS). To meet this need, RICOR developed linear and rotary cryocoolers that successfully achieved this goal. Cryocooler MTTF was analyzed by theoretical reliability evaluation methods, demonstrated by normal and accelerated life tests at the cryocooler level, and finally verified by field data analysis derived from cryocoolers operating at the system level. The following paper reviews theoretical reliability analysis methods together with reliability test results derived from standard and accelerated life demonstration tests performed at RICOR's advanced reliability laboratory. To summarize the work process, reliability verification data will be presented as feedback from fielded systems.
Macarthur, Roy; Feinberg, Max; Bertheau, Yves
2010-01-01
A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
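The decision rule implied by the figures above can be sketched directly: with measurement results spanning roughly 50-200% of the reference concentration, a single result demonstrates compliance with the 0.9% labeling threshold only if even its upper bound stays below 0.9%, and noncompliance only if its lower bound exceeds it. The factor-of-two interval below is taken from the text; the function itself is an illustrative encoding, not the paper's software.

```python
# Sketch: classifying a single GMO measurement against the EU labeling
# threshold, given a 50-200% uncertainty range (from the text above).

THRESHOLD = 0.9   # EU labeling threshold, % GMO

def classify(result_pct, lower_factor=0.5, upper_factor=2.0):
    """Classify one measurement result against the labeling threshold."""
    if result_pct * upper_factor < THRESHOLD:
        return "compliant"          # requires result < 0.45%
    if result_pct * lower_factor > THRESHOLD:
        return "noncompliant"       # requires result > 1.8%
    return "inconclusive"
```

Note how the 0.45% and 1.8% decision points quoted in the abstract fall out of the factor-of-two bounds: 0.45% × 2 = 0.9% and 1.8% × 0.5 = 0.9%.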
Evaluating learning and teaching using the Force Concept Inventory
NASA Astrophysics Data System (ADS)
Zitzewitz, Paul
1997-04-01
Teaching methods used in the calculus-based mechanics course for engineers and scientists (P150) at the University of Michigan-Dearborn were markedly changed in September 1996. Lectures emphasize active learning with Mazur's ConcepTests, Sokoloff's Interactive Demonstrations, and Van Heuvelen's ALPS Kit worksheets. Students solve context-rich problems using Van Heuvelen's multiple-representation format in cooperative groups in discussion sections. Labs were changed to use MBL emphasizing concepts and Experiment Problems to teach lab-based problem solving. Pre- and post-testing of 400 students with the Force Concept Inventory has demonstrated considerable success. The average increase in score has been 35-45%, comparable to gains reported for interactive-engagement methods as defined by Hake. The methods and results will be discussed. Detailed analyses of the FCI results will look at success in teaching specific concepts and the effect of student preparation in mathematics and high school physics.
DeAmicis, P A
1997-01-01
A study was conducted to compare the effectiveness of interactive videodisc instruction (IVDI) with the traditional lecture/demonstration as an alternative method for learning and performing a critical nursing skill. Students were assigned randomly to a treatment group that worked in small groups to complete the IVDI on intravenous therapy skills and a control group receiving the same content in a classroom lecture/demonstration format. After the instruction, each subject performed a re-demonstration of the learned skills using specific guidelines. Results revealed that although the IVDI group scored higher on the overall re-demonstration, there was no significant difference in the ability of the two groups to effectively perform this critical nursing skill. These findings support the use of IVDI as an alternative self-paced, independent study method for learning psychomotor skills and are consistent with previous studies, which indicate that working in small groups on the computer has a positive effect on self-efficacy and achievement.
Electrical Resistivity Imaging for Long-Term Monitoring of Contaminant Degradation
The results from this experiment strongly suggest that the resistivity changes seen are the results of the biodegradation of the oil. This conclusion was further supported by the results of the microcosm experiment. These results demonstrate the utility of the resistivity method ...
High Precision Optical Observations of Space Debris in the Geo Ring from Venezuela
NASA Astrophysics Data System (ADS)
Lacruz, E.; Abad, C.; Downes, J. J.; Casanova, D.; Tresaco, E.
2018-01-01
We present preliminary results to demonstrate that our method for the detection and location of Space Debris (SD) in the geostationary Earth orbit (GEO) ring, based on observations at the OAN of Venezuela, is of high astrometric precision. A detailed explanation of the method, its validation, and first results is available in Lacruz et al. (2017).
Daneshkhah, Ali; Shrestha, Sudhir; Siegel, Amanda; Varahramyan, Kody; Agarwal, Mangilal
2017-03-15
Two methods for cross-selectivity enhancement of porous poly(vinylidene fluoride-hexafluoropropylene) (PVDF-HFP)/carbon black (CB) composite-based resistive sensors are provided. The sensors are tested with acetone and ethanol in the presence of humid air. Cross-selectivity is enhanced using two different methods to modify the basic response of the PVDF-HFP/CB sensing platform. In method I, the adsorption properties of PVDF-HFP/CB are altered by adding a polyethylene oxide (PEO) layer or by treating with infrared (IR) radiation. In method II, the effects of the interaction with acetone and ethanol are enhanced by adding diethylene carbonate (DEC) or PEO dispersed in DEC (PEO/DEC) to the film. The results suggest that the approaches used in method I alter the composite's ability to adsorb acetone and ethanol, while those in method II alter the transduction characteristics of the composite. Using these approaches, the sensor's relative response to acetone was increased by 89% compared with the untreated PVDF-HFP/CB film, whereas its relative response to ethanol could be decreased by 57% or increased by 197%. Not only do these results demonstrate facile methods for increasing the sensitivity of PVDF-HFP/CB film; used in parallel, they demonstrate a roadmap for enhancing system cross-selectivity that can be applied to separate units on an array. Fabrication methods, experimental procedures, and results are presented and discussed.
Camera-based micro interferometer for distance sensing
NASA Astrophysics Data System (ADS)
Will, Matthias; Schädel, Martin; Ortlepp, Thomas
2017-12-01
Interference of light provides a high-precision, non-contact, and fast method for measuring distances, and this technology therefore dominates in high-precision systems. In the field of compact sensors, however, capacitive, resistive, or inductive methods dominate, because an interferometric system has to be precisely adjusted and needs high mechanical stability. As a result, interferometers are usually high-priced, complex systems unsuitable for compact sensing. To overcome these limitations, we developed a new concept for a very small interferometric sensing setup. We combine a miniaturized laser unit, a low-cost pixel detector, and machine vision routines to realize a demonstrator for a Michelson-type micro interferometer. We demonstrate a low-cost sensor smaller than 1 cm³, including all electronics, and demonstrate distance sensing up to 30 cm with resolution in the nm range.
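The principle behind such distance sensing can be sketched simply: in a Michelson interferometer the detected intensity completes one full fringe each time the measurement mirror moves half a wavelength, so counting fringes gives the displacement. The 650 nm wavelength below is an illustrative assumption, not a value from the text.

```python
# Sketch: fringe-counting distance measurement in a Michelson interferometer.
# Assumption (illustrative): laser wavelength 650 nm.

def displacement_from_fringes(fringe_count, wavelength_m):
    """Mirror displacement corresponding to `fringe_count` full fringes:
    one fringe per half-wavelength of mirror travel."""
    return fringe_count * wavelength_m / 2.0

# 1000 fringes at 650 nm correspond to 325 micrometres of travel; nm-scale
# resolution comes from resolving fractions of a single fringe, e.g. with
# a pixel detector and machine vision as in the setup described above.
d = displacement_from_fringes(1000, 650e-9)
```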
Acoustic Parametric Array for Identifying Standoff Targets
NASA Astrophysics Data System (ADS)
Hinders, M. K.; Rudd, K. E.
2010-02-01
An integrated simulation method for investigating nonlinear sound beams and 3D acoustic scattering from any combination of complicated objects is presented. A standard finite-difference simulation method is used to model pulsed nonlinear sound propagation from a source to a scattering target via the KZK equation. Then, a parallel 3D acoustic simulation method based on the finite integration technique is used to model the acoustic wave interaction with the target. Any combination of objects and material layers can be placed into the 3D simulation space to study the resulting interaction. Several example simulations are presented to demonstrate the simulation method and 3D visualization techniques. The combined simulation method is validated by comparing experimental and simulation data, and a demonstration of how it assisted in the development of a nonlinear acoustic concealed-weapons detector is also presented.
Efficient level set methods for constructing wavefronts in three spatial dimensions
NASA Astrophysics Data System (ADS)
Cheng, Li-Tien
2007-10-01
Wavefront construction in geometrical optics has long faced the twin difficulties of dealing with multi-valued forms and resolution of wavefront surfaces. A recent change in viewpoint, however, has demonstrated that working in phase space on bicharacteristic strips using eulerian methods can bypass both difficulties. The level set method for interface dynamics makes a suitable choice for the eulerian method. Unfortunately, in three-dimensional space, the setting of interest for most practical applications, the advantages of this method are largely offset by a new problem: the high dimension of phase space. In this work, we present new types of level set algorithms that remove this obstacle and demonstrate their abilities to accurately construct wavefronts under high resolution. These results propel the level set method forward significantly as a competitive approach in geometrical optics under realistic conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yiming, E-mail: yangyiming1988@outlook.com
Minor phases make considerable contributions to the mechanical and physical properties of metals and alloys. Unfortunately, it is difficult to identify unknown minor phases in a bulk polycrystalline material using conventional metallographic methods. Here, a non-destructive method based on three-dimensional X-ray diffraction (3DXRD) is developed to solve this problem. Simulation results demonstrate that this method is simultaneously able to identify minor phase grains and reveal their positions, orientations, and sizes within bulk alloys. According to systematic simulations, the 3DXRD method is practicable for an extensive sample set, including polycrystalline alloys with hexagonal, orthorhombic, and cubic minor phases. Experiments were also conducted to confirm the simulation results. The results for a bulk sample of aluminum alloy AA6061 show that the crystal grains of an unexpected γ-Fe (austenite) phase can be identified, three-dimensionally and nondestructively. Therefore, we conclude that the 3DXRD method is a powerful tool for the identification of unknown minor phases in bulk alloys belonging to a variety of crystal systems. This method also has the potential to be used for in situ observations of the effects of minor phases on the crystallographic behaviors of alloys. - Highlights: •A method based on 3DXRD is developed for identification of unknown minor phases. •Grain position, orientation, and size are simultaneously acquired. •A systematic simulation demonstrated the applicability of the proposed method. •Experimental results on an AA6061 sample confirmed the practicability of the method.
QCL spectroscopy combined with the least squares method for substance analysis
NASA Astrophysics Data System (ADS)
Samsonov, D. A.; Tabalina, A. S.; Fufurin, I. L.
2017-11-01
The article briefly describes the distinctive features of quantum cascade lasers (QCLs). It also describes an experimental set-up for acquiring mid-infrared absorption spectra using a QCL. The paper presents experimental results in the form of normalized spectra. We tested the application of the least squares method for spectrum analysis, using it for substance identification and for extracting concentration data. We compare the results with those of more common methods of absorption spectroscopy and demonstrate the feasibility of using this simple method for quantitative and qualitative analysis of experimental data acquired with a QCL.
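The least-squares step can be sketched as follows: with the reference spectra of the candidate substances as columns of a matrix A and the measured spectrum s, the concentrations c minimize ||Ac − s||, i.e. solve the normal equations (AᵀA)c = Aᵀs. The two synthetic four-point "spectra" below are illustrative, not data from the paper.

```python
# Sketch: two-component least-squares spectral unmixing via the
# normal equations (illustrative synthetic spectra, not the paper's data).
def lstsq_2component(a1, a2, s):
    """Solve min || c1*a1 + c2*a2 - s || for two reference spectra."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    # 2x2 normal equations (A^T A) c = A^T s, solved by Cramer's rule.
    m11, m12, m22 = dot(a1, a1), dot(a1, a2), dot(a2, a2)
    b1, b2 = dot(a1, s), dot(a2, s)
    det = m11 * m22 - m12 * m12
    c1 = (b1 * m22 - b2 * m12) / det
    c2 = (m11 * b2 - m12 * b1) / det
    return c1, c2

a1 = [1.0, 0.5, 0.2, 0.0]        # reference spectrum of substance 1
a2 = [0.1, 0.4, 0.8, 1.0]        # reference spectrum of substance 2
s = [2.3, 2.2, 2.8, 3.0]         # measured mixture: 2*a1 + 3*a2
c1, c2 = lstsq_2component(a1, a2, s)
```

In a real analysis the fitted concentrations also serve identification: a substance absent from the mixture fits with a coefficient near zero.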
The effectiveness of humane teaching methods in veterinary education.
Knight, Andrew
2007-01-01
Animal use resulting in harm or death has historically played an integral role in veterinary education, in disciplines such as surgery, physiology, biochemistry, anatomy, pharmacology, and parasitology. However, many non-harmful alternatives now exist, including computer simulations, high-quality videos, "ethically-sourced cadavers" such as those from animals euthanased for medical reasons, preserved specimens, models and surgical simulators, non-invasive self-experimentation, and supervised clinical experiences. Veterinary students seeking to use such methods often face strong opposition from faculty members, who usually cite concerns about their teaching efficacy. Consequently, studies of veterinary students were reviewed comparing learning outcomes generated by non-harmful teaching methods with those achieved by harmful animal use. Of the eleven published from 1989 to 2006, nine assessed surgical training--historically the discipline involving the greatest harmful animal use. 45.5% (5/11) demonstrated superior learning outcomes using more humane alternatives, another 45.5% (5/11) demonstrated equivalent learning outcomes, and 9.1% (1/11) demonstrated inferior learning outcomes. Twenty-one studies of non-veterinary students in related academic disciplines were also published from 1968 to 2004: 38.1% (8/21) demonstrated superior, 52.4% (11/21) equivalent, and 9.5% (2/21) inferior learning outcomes using humane alternatives. Twenty-nine papers in which comparison with harmful animal use did not occur illustrated additional benefits of humane teaching methods in veterinary education, including: time and cost savings, enhanced potential for customisation and repeatability of the learning exercise, increased student confidence and satisfaction, increased compliance with animal use legislation, elimination of objections to the use of purpose-killed animals, and integration of clinical perspectives and ethics early in the curriculum.
The evidence demonstrates that veterinary educators can best serve their students and animals, while minimising financial and time burdens, by introducing well-designed teaching methods not reliant on harmful animal use.
2004-10-01
…lactonases failed to enhance beta-hemolytic activity. The results of this study demonstrate that heterologous expression of Bacillus sp. AiiA lactonases in B. thailandensis reduced AHL accumulation, affected both …hemolysis, and carbon utilization by the expression of Bacillus sp. AiiA lactonases in B. thailandensis. MATERIALS AND METHODS: Bacterial strains and…
The pedagogical toolbox: computer-generated visual displays, classroom demonstration, and lecture.
Bockoven, Jerry
2004-06-01
This analogue study compared the effectiveness of computer-generated visual displays, classroom demonstration, and traditional lecture as methods of instruction used to teach neuronal structure and processes. A total of 116 randomly assigned undergraduate students participated in 1 of 3 classrooms in which they experienced the same content but different teaching approaches presented by 3 different student-instructors. Participants then completed a survey of their subjective reactions and a measure of factual information designed to evaluate objective learning outcomes. Participants repeated this factual measure 5 weeks later. Results call into question the use of classroom demonstration methods as well as the trend towards devaluing traditional lecture in favor of computer-generated visual displays.
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
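The lowest-order (p = 0) special case of such a discontinuous Galerkin scheme reduces, for 1-D linear advection, to an upwind finite-volume method, and a two-stage Runge-Kutta method supplies the temporal discretization. The sketch below shows that space/time split for u_t + a·u_x = 0 on a periodic grid; grid size and CFL number are illustrative assumptions, and a telescoping-sum argument shows the scheme conserves the total mass exactly.

```python
# Sketch: p = 0 discontinuous Galerkin (upwind finite volume) for 1-D
# advection u_t + a*u_x = 0, advanced with SSP two-stage Runge-Kutta.
# Grid size and CFL number are illustrative, not from the paper.
def rhs(u, a, dx):
    """Upwind flux differences on a periodic grid (a > 0); u[-1] wraps."""
    return [-a * (u[i] - u[i - 1]) / dx for i in range(len(u))]

def rk2_step(u, a, dx, dt):
    """SSP second-order Runge-Kutta (Heun) step."""
    k1 = rhs(u, a, dx)
    u1 = [ui + dt * ki for ui, ki in zip(u, k1)]
    k2 = rhs(u1, a, dx)
    return [ui + dt * (k1i + k2i) / 2 for ui, k1i, k2i in zip(u, k1, k2)]

n, a = 100, 1.0
dx, dt = 1.0 / n, 0.4 / n                       # CFL number 0.4
u = [1.0 if 25 <= i < 50 else 0.0 for i in range(n)]   # square pulse
mass0 = sum(u) * dx
for _ in range(250):                            # advect to t = 1 (one period)
    u = rk2_step(u, a, dx, dt)
mass = sum(u) * dx                              # conserved exactly
```

The higher-order Legendre bases, limiters, and h/p-adaptivity of the actual method refine this skeleton; the conservation property checked here is what the dissipative fluxes of the full scheme must also preserve.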
Fractal analysis of GPS time series for early detection of disastrous seismic events
NASA Astrophysics Data System (ADS)
Filatov, Denis M.; Lyubushin, Alexey A.
2017-03-01
A new method of fractal analysis of time series for estimating the chaoticity of behaviour of open stochastic dynamical systems is developed. The method is a modification of the conventional detrended fluctuation analysis (DFA) technique. We start from analysing both methods from the physical point of view and demonstrate the difference between them which results in a higher accuracy of the new method compared to the conventional DFA. Then, applying the developed method to estimate the measure of chaoticity of a real dynamical system - the Earth's crust, we reveal that the latter exhibits two distinct mechanisms of transition to a critical state: while the first mechanism has already been known due to numerous studies of other dynamical systems, the second one is new and has not previously been described. Using GPS time series, we demonstrate efficiency of the developed method in identification of critical states of the Earth's crust. Finally we employ the method to solve a practically important task: we show how the developed measure of chaoticity can be used for early detection of disastrous seismic events and provide a detailed discussion of the numerical results, which are shown to be consistent with outcomes of other researches on the topic.
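The conventional DFA baseline that the new method modifies can be sketched briefly: integrate the mean-subtracted series into a profile, detrend the profile linearly in non-overlapping windows of size n, and read the scaling exponent α off the slope of log F(n) versus log n. For white noise α should come out near 0.5; window sizes and series length below are illustrative assumptions.

```python
# Sketch of conventional (first-order) detrended fluctuation analysis.
# Window sizes and series length are illustrative, not from the paper.
import math
import random

def _linfit_residual_rms(y):
    """RMS residual of a least-squares straight-line fit to y."""
    n = len(y)
    xbar, ybar = (n - 1) / 2.0, sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    sxy = sum((x - xbar) * (yv - ybar) for x, yv in enumerate(y))
    b = sxy / sxx
    a = ybar - b * xbar
    return math.sqrt(sum((yv - (a + b * x)) ** 2 for x, yv in enumerate(y)) / n)

def dfa_exponent(series, window_sizes):
    """DFA scaling exponent: slope of log F(n) vs log n."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for v in series:                      # integrate the mean-subtracted series
        acc += v - mean
        profile.append(acc)
    logs_n, logs_f = [], []
    for n in window_sizes:
        rms = [_linfit_residual_rms(profile[i:i + n])
               for i in range(0, len(profile) - n + 1, n)]
        f = math.sqrt(sum(r * r for r in rms) / len(rms))
        logs_n.append(math.log(n))
        logs_f.append(math.log(f))
    nbar, fbar = sum(logs_n) / len(logs_n), sum(logs_f) / len(logs_f)
    num = sum((x - nbar) * (y - fbar) for x, y in zip(logs_n, logs_f))
    den = sum((x - nbar) ** 2 for x in logs_n)
    return num / den

rng = random.Random(7)
noise = [rng.gauss(0.0, 1.0) for _ in range(4096)]
alpha = dfa_exponent(noise, [8, 16, 32, 64, 128])   # expect alpha near 0.5
```

Deviations of α from 0.5 signal correlations in the series; the measure of chaoticity developed in the paper builds on this kind of fluctuation statistic.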
40 CFR 63.4941 - How do I demonstrate initial compliance with the emission limitations?
Code of Federal Regulations, 2010 CFR
2010-07-01
... know whether the blend is aliphatic or aromatic. However, if the results of a Method 311 test indicate... (a)(1) through (5) of this section. (1) Method 311 (appendix A to 40 CFR part 63). You may use Method...)(i) and (ii) of this section when performing a Method 311 test. (i) Count each organic HAP that is...
NASA Astrophysics Data System (ADS)
Karcıoğlu Karakaş, Zeynep; Boncukçuoğlu, Recep; Karakaş, İbrahim H.
2016-04-01
In this study, the effects of the fuel used on the structural, morphological, and magnetic properties of nanoparticles were investigated for microwave-assisted combustion synthesis, an important method for quick, simple, and low-cost synthesis of nanoparticles. To this end, glycine, urea, and citric acid were used as fuels. The synthesized nanoparticles were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), Brunauer-Emmett-Teller surface area (BET), and vibrating sample magnetometry (VSM) techniques. We observed that the fuel type strongly affects the magnetic and surface properties of the nanoparticles. X-ray diffractograms of the obtained nanoparticles were compared with the standard powder diffraction card of NiFe2O4 (JCPDS Card Number 54-0964). The results demonstrated that the diffractograms are fully consistent with the standard reflection peaks. According to the XRD analysis, the highest crystallinity was observed in nanoparticles synthesized with glycine, while the nanoparticles prepared with urea had the highest surface area. SEM micrographs showed that all of the nanoparticles exhibit nanocrystalline behaviour and a cubic particle shape. VSM analysis demonstrated that the type of fuel used for synthesis is a highly effective parameter for the magnetic properties of the nanoparticles.
Non-linguistic learning in aphasia: Effects of training method and stimulus characteristics
Vallila-Rohter, Sofia; Kiran, Swathi
2013-01-01
Purpose: The purpose of the current study was to explore non-linguistic learning ability in patients with aphasia, examining the impact of stimulus typicality and feedback on success with learning. Method: Eighteen patients with aphasia and eight healthy controls participated in this study. All participants completed four computerized, non-linguistic category-learning tasks. We probed learning ability under two methods of instruction: feedback-based (FB) and paired-associate (PA). We also examined the impact of task complexity on learning ability, comparing two stimulus conditions: typical (Typ) and atypical (Atyp). Performance was compared between groups and across conditions. Results: Results demonstrated that healthy controls were able to successfully learn categories under all conditions. For our patients with aphasia, two patterns of performance arose. One subgroup of patients was able to maintain learning across task manipulations and conditions. The other subgroup demonstrated a sensitivity to task complexity, learning successfully only in the typical training conditions. Conclusions: Results support the hypothesis that impairments of general learning are present in aphasia. Some patients demonstrated the ability to extract category information under complex training conditions, while others learned only under conditions that were simplified and emphasized salient category features. Overall, the typical training condition facilitated learning for all participants. Findings have implications for therapy, which are discussed. PMID:23695914
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
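For a model that is linear in its parameters, the equation error method reduces to ordinary least squares on measured flight data. The pure-Python sketch below illustrates this on synthetic pitching-moment data; the coefficient names and numerical values are illustrative assumptions, not results from this flight-test report.

```python
import random

def lstsq_normal(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination."""
    m, n = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(m)) for i in range(n)]
    for i in range(n):                                # forward elimination with pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in reversed(range(n)):                      # back substitution
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

# Synthetic pitching-moment "flight data":
# Cm = Cm0 + Cm_alpha * alpha + Cm_de * delta_e + measurement noise
random.seed(1)
true = [0.05, -0.8, -1.2]        # illustrative values, not flight-test results
rows, ys = [], []
for _ in range(200):
    alpha = random.uniform(-0.1, 0.1)     # angle of attack, rad
    de = random.uniform(-0.2, 0.2)        # elevator deflection, rad
    rows.append([1.0, alpha, de])
    ys.append(true[0] + true[1] * alpha + true[2] * de + random.gauss(0, 0.001))
est = lstsq_normal(rows, ys)
```

With low measurement noise, the estimates recover the assumed coefficients closely, which mirrors the abstract's observation that accurate measurements make the two identification methods agree.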
The Activity of Antimicrobial Surfaces Varies by Testing Protocol Utilized
Campos, Matias D.; Zucchi, Paola C.; Phung, Ann; Leonard, Steven N.; Hirsch, Elizabeth B.
2016-01-01
Background: Contaminated hospital surfaces are an important source of nosocomial infections. A major obstacle in marketing antimicrobial surfaces is a lack of efficacy data based on standardized testing protocols. Aim: We compared the efficacy of multiple testing protocols against several “antimicrobial” film surfaces. Methods: Four clinical isolates were used: one Escherichia coli, one Klebsiella pneumoniae, and two Staphylococcus aureus strains. Two industry methods (modified ISO 22196 and ASTM E2149), a “dried droplet” method, and a “transfer” method were tested against two commercially available antimicrobial films, one film in development, an untreated control, and a positive (silver) control film. At 2 (ISO only) and 24 hours following inoculation, bacteria were collected from film surfaces and enumerated. Results: Compared to untreated films in all protocols, there were no significant differences in recovery on either commercial brand at 2 or 24 hours after inoculation. The silver surface demonstrated significant microbicidal activity (mean loss 4.9 Log10 CFU/ml) in all methods and at all time points, with the exception of 2 hours in the ISO protocol and the transfer method. Using our novel droplet method, no differences between placebo and active surfaces were detected. The surface in development demonstrated variable activity depending on method, organism, and time point. The ISO protocol demonstrated minimal activity at 2 hours but significant activity at 24 hours (mean 4.5 Log10 CFU/ml difference versus placebo). The ASTM protocol exhibited significant differences in recovery of staphylococci (mean 5 Log10 CFU/ml) but not Gram-negative isolates (10-fold decrease). Minimal activity was observed with this film in the transfer method. Conclusions: Varying results between protocols suggest that the efficacy of antimicrobial surfaces cannot be easily and reproducibly compared. Conditions of clinical use should be considered, and further development of representative methods is needed. PMID:27494336
Liquid-cooling technology for gas turbines review and status
NASA Technical Reports Server (NTRS)
Vanfossen, G. J., Jr.; Stepka, F. S.
1978-01-01
A review of research related to liquid cooling of gas turbines was conducted and an assessment of the state of the art was made. Various methods of liquid cooling turbines were reviewed. Examples and results from test and demonstrator turbines utilizing these methods are discussed, along with the advantages and disadvantages of each method.
Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M
2014-01-01
The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.
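The abstract does not specify the full classification pipeline, but the general idea of a PCA ensemble classifier can be sketched: project epochs onto the leading principal component (computed here by power iteration) and take a majority vote over simple threshold classifiers trained on bootstrap samples. All data below are synthetic stand-ins for EEG epochs, not the Emotiv recordings.

```python
import random

def first_pc(data, iters=200):
    """Leading principal component via power iteration on centered data."""
    d = len(data[0])
    mean = [sum(row[i] for row in data) / len(data) for i in range(d)]
    centered = [[row[i] - mean[i] for i in range(d)] for row in data]
    v = [random.gauss(0, 1) for _ in range(d)]
    for _ in range(iters):
        xv = [sum(r[i] * v[i] for i in range(d)) for r in centered]   # X v
        w = [sum(centered[k][i] * xv[k] for k in range(len(centered)))
             for i in range(d)]                                       # X^T (X v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def project(row, mean, v):
    return sum((row[i] - mean[i]) * v[i] for i in range(len(row)))

# Synthetic two-class "epochs": class 1 is shifted along all features
random.seed(7)
data, labels = [], []
for _ in range(100):
    y = random.randint(0, 1)
    data.append([random.gauss(2.0 * y, 0.3) for _ in range(8)])
    labels.append(y)

mean, v = first_pc(data)
scores = [project(r, mean, v) for r in data]

# Ensemble of threshold classifiers trained on bootstrap samples; majority vote
votes = [0] * len(data)
for _ in range(15):
    idx = [random.randrange(len(data)) for _ in range(len(data))]
    s1 = [scores[i] for i in idx if labels[i] == 1]
    s0 = [scores[i] for i in idx if labels[i] == 0]
    mu1, mu0 = sum(s1) / len(s1), sum(s0) / len(s0)
    thr = (mu1 + mu0) / 2
    for k, s in enumerate(scores):
        votes[k] += 1 if (s > thr) == (mu1 > mu0) else 0
preds = [1 if c > 7 else 0 for c in votes]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

On this easily separable toy data the vote is near-unanimous; the ensemble's value in practice is robustness when individual classifiers are noisy, as with real P300 epochs.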
44 CFR 67.6 - Basis of appeal.
Code of Federal Regulations, 2014 CFR
2014-10-01
... technically incorrect. Because scientific and technical correctness is often a matter of degree rather than...), appellants are required to demonstrate that alternative methods or applications result in more correct... due to error in application of hydrologic, hydraulic or other methods or use of inferior data in...
44 CFR 67.6 - Basis of appeal.
Code of Federal Regulations, 2012 CFR
2012-10-01
... technically incorrect. Because scientific and technical correctness is often a matter of degree rather than...), appellants are required to demonstrate that alternative methods or applications result in more correct... due to error in application of hydrologic, hydraulic or other methods or use of inferior data in...
44 CFR 67.6 - Basis of appeal.
Code of Federal Regulations, 2013 CFR
2013-10-01
... technically incorrect. Because scientific and technical correctness is often a matter of degree rather than...), appellants are required to demonstrate that alternative methods or applications result in more correct... due to error in application of hydrologic, hydraulic or other methods or use of inferior data in...
44 CFR 67.6 - Basis of appeal.
Code of Federal Regulations, 2011 CFR
2011-10-01
... technically incorrect. Because scientific and technical correctness is often a matter of degree rather than...), appellants are required to demonstrate that alternative methods or applications result in more correct... due to error in application of hydrologic, hydraulic or other methods or use of inferior data in...
Development of Gold Standard Ion-Selective Electrode-Based Methods for Fluoride Analysis
Martínez-Mier, E.A.; Cury, J.A.; Heilman, J.R.; Katz, B.P.; Levy, S.M.; Li, Y.; Maguire, A.; Margineda, J.; O’Mullane, D.; Phantumvanit, P.; Soto-Rojas, A.E.; Stookey, G.K.; Villa, A.; Wefel, J.S.; Whelton, H.; Whitford, G.M.; Zero, D.T.; Zhang, W.; Zohouri, V.
2011-01-01
Background/Aims: Currently available techniques for fluoride analysis are not standardized. Therefore, this study was designed to develop standardized methods for analyzing fluoride in biological and nonbiological samples used for dental research. Methods: A group of nine laboratories analyzed a set of standardized samples for fluoride concentration using their own methods. The group then reviewed existing analytical techniques for fluoride analysis, identified inconsistencies in the use of these techniques and conducted testing to resolve differences. Based on the results of the testing undertaken to define the best approaches for the analysis, the group developed recommendations for direct and microdiffusion methods using the fluoride ion-selective electrode. Results: Initial results demonstrated that there was no consensus regarding the choice of analytical techniques for different types of samples. Although for several types of samples the results of the fluoride analyses were similar among some laboratories, greater differences were observed for saliva, food and beverage samples. In spite of these initial differences, precise and true values of fluoride concentration, as well as smaller differences between laboratories, were obtained once the standardized methodologies were used. Intraclass correlation coefficients ranged from 0.90 to 0.93 for the analysis of a certified reference material using the standardized methodologies. Conclusion: The results of this study demonstrate that the development and use of standardized protocols for fluoride analysis significantly decreased differences among laboratories and resulted in more precise and true values. PMID:21160184
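The intraclass correlation coefficients reported above can be computed from a one-way random-effects ANOVA decomposition. The sketch below uses invented fluoride readings from hypothetical labs, for illustration only.

```python
def icc_oneway(groups):
    """One-way random-effects ICC(1,1) for equal-sized groups
    (here: repeated readings of the same sample by different labs)."""
    n, k = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)       # between-sample
    msw = sum((x - m) ** 2                                         # within-sample
              for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented readings (ppm F) for 5 reference samples by 4 hypothetical labs
readings = [
    [0.98, 1.01, 1.00, 0.99],
    [0.49, 0.51, 0.50, 0.52],
    [2.02, 1.99, 2.01, 2.00],
    [0.24, 0.26, 0.25, 0.25],
    [5.01, 4.98, 5.00, 5.02],
]
icc = icc_oneway(readings)
```

When labs agree closely relative to the spread between samples, as here, the ICC approaches 1; values of 0.90 to 0.93 as in the study indicate strong but imperfect inter-laboratory agreement.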
2014-01-01
Background: The use of radio frequency identification (RFID) systems in healthcare is increasing, and concerns for electromagnetic compatibility (EMC) pose one of the biggest obstacles for widespread adoption. Numerous studies have demonstrated that RFID systems can interfere with medical devices; however, the majority of past studies relied on time-consuming and burdensome test schemes based on ad hoc test methods applied to individual RFID systems. Methods: This paper presents the results of using an RFID simulator that allows for faster evaluation of RFID-medical device EMC against a library of RFID test signals at various field strengths. Results: The results of these tests demonstrate the feasibility and adequacy of simulator testing and can be used to support its incorporation into applicable consensus standards. Conclusions: This work can aid the medical device community in better assessing the risks associated with medical device exposure to RFID. PMID:25086451
Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora
2009-01-01
This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.
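The AUC used above to compare classifiers can be computed directly from the Mann-Whitney rank statistic, without tracing the ROC curve. A minimal sketch on synthetic scores (not the study's data):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the rank-sum identity:
    AUC = P(pos score > neg score) + 0.5 * P(tie)."""
    wins = ties = 0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1
            elif p == q:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Perfectly separated classifier scores give AUC = 1.0
perfect = auc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1])
# Overlapping scores give an intermediate value
partial = auc([0.9, 0.8], [0.1, 0.9])
```

This pairwise form is O(n*m) but easy to verify; for large score sets one would sort and use ranks instead.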
Agin, Patricia Poh; Edmonds, Susan H
2002-08-01
The goals of this study were (i) to demonstrate that existing and widely used sun protection factor (SPF) test methodologies can produce accurate and reproducible results for high SPF formulations and (ii) to provide data on the number of test subjects needed, the variability of the data, and the appropriate exposure increments for testing high SPF formulations. Three high SPF formulations were tested, according to the Food and Drug Administration's (FDA) 1993 tentative final monograph (TFM) 'very water resistant' test method and/or the 1978 proposed monograph 'waterproof' test method, within one laboratory. A fourth high SPF formulation was tested at four independent SPF testing laboratories, using the 1978 waterproof SPF test method. All laboratories utilized xenon arc solar simulators. The data illustrate that the testing conducted within one laboratory, following either the 1978 proposed or the 1993 TFM SPF test method, was able to reproducibly determine the SPFs of the formulations tested, using either the statistical analysis method in the proposed monograph or the statistical method described in the TFM. When one formulation was tested at four different laboratories, the anticipated variation in the data owing to equipment and other operational differences was minimized through the use of the statistical method described in the 1993 monograph. The data illustrate that either the 1978 proposed monograph SPF test method or the 1993 TFM SPF test method can provide accurate and reproducible results for high SPF formulations. Further, these results can be achieved with panels of 20-25 subjects with an acceptable level of variability. Utilization of the statistical controls from the 1993 sunscreen monograph can help to minimize lab-to-lab variability for well-formulated products.
Accurate motion parameter estimation for colonoscopy tracking using a regression method
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2010-03-01
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the context of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and from 410 to 1316 in the transverse colon.
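The Least Median of Squares estimator described above is commonly implemented by random subset sampling: fit candidate models to minimal point sets and keep the one with the smallest median squared residual. A hedged pure-Python sketch on synthetic line data with gross outliers (not the colonoscopy flow vectors themselves):

```python
import random

def lms_line(xs, ys, trials=500, seed=0):
    """Least Median of Squares line fit: fit lines through random point
    pairs and keep the one minimizing the median squared residual."""
    rng = random.Random(seed)
    n = len(xs)
    best, best_med = None, float("inf")
    for _ in range(trials):
        i, j = rng.sample(range(n), 2)
        if xs[i] == xs[j]:
            continue
        a = (ys[j] - ys[i]) / (xs[j] - xs[i])
        b = ys[i] - a * xs[i]
        med = sorted((ys[k] - (a * xs[k] + b)) ** 2 for k in range(n))[n // 2]
        if med < best_med:
            best, best_med = (a, b), med
    return best

# Synthetic data: y = 2x + 1 with noise, plus ~30% gross outliers that
# would drag an ordinary least-squares fit far off the true line
rng = random.Random(42)
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + rng.gauss(0, 0.05) for x in xs]
for k in range(0, 50, 3):
    ys[k] += rng.uniform(10, 30)
a, b = lms_line(xs, ys)
```

Because the median ignores up to half the residuals, the recovered slope and intercept stay close to the inlier line despite the corrupted points, which is exactly the behavior exploited for outlier-contaminated flow vectors.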
Automated Geometry assisted PEC for electron beam direct write nanolithography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ocola, Leonidas E.; Gosztola, David J.; Rosenmann, Daniel
Nanoscale geometry assisted proximity effect correction (NanoPEC) is demonstrated to improve on standard PEC for nanoscale structures, in terms of feature sharpness for sub-100 nm structures. The method was implemented in existing, commercially available PEC software. Plasmonic arrays of crosses were fabricated using regular PEC and NanoPEC, and optical absorbance was measured. Results confirm that the improved sharpness of the structures leads to increased sharpness in the optical absorbance spectrum features. We also demonstrated that this method of PEC is applicable to arbitrarily shaped structures beyond crosses.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
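The advantage of analytic derivatives over finite-difference approximations, which motivates the tool described above, can be seen on a toy function; this sketch is purely illustrative and does not use the actual engine-cycle code.

```python
import math

def f(x):
    return math.exp(x) * math.sin(x)      # stand-in for a cycle-analysis output

def df_analytic(x):
    return math.exp(x) * (math.sin(x) + math.cos(x))

def df_forward(x, h):
    return (f(x + h) - f(x)) / h          # first-order finite difference

x = 1.0
errors = {h: abs(df_forward(x, h) - df_analytic(x)) for h in (1e-2, 1e-5, 1e-12)}
# Truncation error dominates for large h, floating-point round-off for tiny h;
# the analytic derivative suffers from neither, so gradient-based optimizers
# built on it do not need to tune a step size.
```

There is a best-case step size in between the two failure modes, but it varies by function and input, which is why finite differencing is fragile inside an optimizer loop.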
DuPont Qualicon BAX system real-time PCR assay for Escherichia coli O157:H7.
Burns, Frank; Fleck, Lois; Andaloro, Bridget; Davis, Eugene; Rohrbeck, Jeff; Tice, George; Wallace, Morgan
2011-01-01
Evaluations were conducted to test the performance of the BAX System Real-Time PCR assay, which was certified as Performance Tested Method 031002 for screening E. coli O157:H7 in ground beef, beef trim, spinach, and lettuce. Method comparison studies performed on samples with low-level inocula showed that the BAX System demonstrates sensitivity equivalent or superior to the FDA-BAM and the USDA-FSIS culture methods, but with a significantly shorter time to result. Tests to evaluate inclusivity and exclusivity returned no false-negative and no false-positive results on a diverse panel of isolates, and tests for lot-to-lot variability and tablet stability demonstrated consistent performance. Ruggedness studies determined that none of the factors examined affect the performance of the assay. An accelerated shelf life study determined an initial 36 month shelf life for the test kit.
Wang, Rui; Zhang, Fang; Wang, Liu; Qian, Wenjuan; Qian, Cheng; Wu, Jian; Ying, Yibin
2017-04-18
On-site monitoring of the planting of genetically modified (GM) crops is of critical importance to the agricultural industry throughout the world. In this paper, a simple, visual and instrument-free method for instant on-site detection of GTS 40-3-2 soybean has been developed. It is based on body-heat recombinase polymerase amplification (RPA) followed by naked-eye detection via fluorescent DNA dye. Combined with extremely simplified sample preparation, the whole detection process can be accomplished within 10 min, and the fluorescent results can be photographed with an accompanying smartphone. Results demonstrated a 100% detection rate for screening of practical GTS 40-3-2 soybean samples by 20 volunteers under different ambient temperatures. This method is not only suitable for on-site detection of GM crops but also demonstrates great potential for application in other fields.
Phase Space Approach to Dynamics of Interacting Fermions
NASA Astrophysics Data System (ADS)
Davidson, Shainen; Sels, Dries; Kasper, Valentin; Polkovnikov, Anatoli
Understanding the behavior of interacting fermions is of fundamental interest in many fields ranging from condensed matter to high energy physics. Developing numerically efficient and accurate simulation methods is an indispensable part of this. Already in equilibrium, fermions are notoriously hard to handle due to the sign problem. Out of equilibrium, an important outstanding problem is the efficient numerical simulation of the dynamics of these systems. In this work we develop a new semiclassical phase-space approach (a.k.a. the truncated Wigner approximation) for simulating the dynamics of interacting lattice fermions in arbitrary dimensions. We demonstrate the strength of the method by comparing the results to exact diagonalization (ED) on small 1D and 2D systems. We furthermore present results on Many-Body Localized (MBL) systems in 1D and 2D, and demonstrate how the method can be used to determine the MBL transition.
Nano-based sensor for assessment of weaponry structural degradation
NASA Astrophysics Data System (ADS)
Brantley, Christina L.; Edwards, Eugene; Ruffin, Paul B.; Kranz, Michael
2016-04-01
Missiles and weaponry-based systems are composed of metal structures that can degrade after prolonged exposure to environmental elements. A particular concern is accumulation of corrosion that generally results from prolonged environmental exposure. Corrosion, defined as the unintended destruction or deterioration of a material due to its interaction with the environment, can negatively affect both equipment and infrastructure. System readiness and safety can be reduced if corrosion is not detected, prevented and managed. The current corrosion recognition methods (visual, radiography, ultrasonics, eddy current, and thermography) are expensive and potentially unreliable. Visual inspection is the most commonly used method for determining corrosion in metal. Utilization of an inductance-based sensor system is being proposed as part of the authors' research. Results from this research will provide a more efficient, economical, and non-destructive sensing approach. Preliminary results demonstrate a highly linear degradation within a corrosive environment due to the increased surface area available on the sensor coupon. The inductance of the devices, which represents a volume property of the coupon, demonstrated sensitivity to corrosion levels. The proposed approach allows a direct mass-loss measurement based on the change in the inductance of the coupon when placed in an alternating magnetic field. Prototype devices have demonstrated highly predictable corrosion rates that are easily measured using low-power small electronic circuits and energy harvesting methods to interrogate the sensor. Preliminary testing demonstrates that the device concept is acceptable and future opportunities for use in low power embedded applications are achievable. Key results in this paper include the assessment of typical Army corrosion cost, degradation patterns of varying metal materials, and application of wireless sensor elements.
A Lagrangian meshfree method applied to linear and nonlinear elasticity.
Walker, Wade A
2017-01-01
The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
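Monte Carlo simulation, one of the three methods named above, propagates input uncertainty by random sampling. A minimal sketch with a made-up wing-weight relation; the function and the 1% input uncertainties are assumptions for illustration, not the study's model.

```python
import random, statistics

def wing_weight(span, chord):
    """Toy weight relation (illustrative only, not the study's model)."""
    return 5.0 * span ** 1.5 * chord ** 0.5

rng = random.Random(0)
nominal = wing_weight(30.0, 4.0)

# Propagate 1% (1-sigma) input uncertainty on both design variables by sampling
samples = [wing_weight(rng.gauss(30.0, 0.3), rng.gauss(4.0, 0.04))
           for _ in range(20000)]
mean, sd = statistics.mean(samples), statistics.stdev(samples)

# First-order method-of-moments prediction for comparison:
# relative sd = sqrt((1.5 * 1%)^2 + (0.5 * 1%)^2), from the power-law exponents
pred_rel_sd = (1.5 ** 2 + 0.5 ** 2) ** 0.5 * 0.01
```

For a smooth model the two methods agree closely; the study's point is that on discontinuous design spaces the sampling and moment-based approaches need careful implementation to remain valid.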
Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.
Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki
2018-03-01
To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.
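A basic (unmodified) Lissajous trajectory is generated from two sinusoids of nearly equal frequency; its repeated self-crossings are what make software motion correction possible. The frequencies below are illustrative assumptions, not those of the OCT-A system described:

```python
import math

def lissajous(n_samples, fx=19, fy=18, amp=1.0):
    """Sample x = A sin(2*pi*fx*t), y = A sin(2*pi*fy*t) over one period.
    Near-equal frequencies make the beam revisit regions repeatedly,
    providing the overlap needed to register motion between passes."""
    pts = []
    for k in range(n_samples):
        t = k / n_samples
        pts.append((amp * math.sin(2 * math.pi * fx * t),
                    amp * math.sin(2 * math.pi * fy * t)))
    return pts

pts = lissajous(100000)
# The trajectory stays inside the scan square and covers all four quadrants
coverage = {(x > 0, y > 0) for x, y in pts}
```

In contrast to raster scanning, no region is visited only once, so a saccade or blink corrupts only some passes over a region and the remaining passes can be used for correction.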
An accelerated subspace iteration for eigenvector derivatives
NASA Technical Reports Server (NTRS)
Ting, Tienko
1991-01-01
An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.
Correlates of the Rosenberg Self-Esteem Scale Method Effects
ERIC Educational Resources Information Center
Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan
2006-01-01
Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…
Fully electronic urine dipstick probe for combinatorial detection of inflammatory biomarkers
Kamakoti, Vikramshankar; Kinnamon, David; Choi, Kang Hyeok; Jagannath, Badrinath; Prasad, Shalini
2018-01-01
Aim: An electrochemical urine dipstick probe biosensor has been demonstrated using molybdenum electrodes on nanoporous polyamide substrate for the quantitative detection of two inflammatory protein biomarkers, CRP and IL-6. Materials & methods: The electrode interface was characterized using ζ-potential and Fourier transform infrared spectroscopy. Detection of biomarkers was demonstrated by measuring impedance changes associated with the dose concentrations of the two biomarkers. A proof of feasibility of point-of-care implementation of the biosensor was demonstrated using a portable electronics platform. Results & conclusion: Limit of detection of 1 pg/ml was achieved for CRP and IL-6 in human urine and synthetic urine buffers. The developed portable hardware demonstrated close correlation with benchtop equipment results. PMID:29796304
NASA Astrophysics Data System (ADS)
Retheesh, R.; Ansari, Md. Zaheer; Radhakrishnan, P.; Mujeeb, A.
2018-03-01
This study demonstrates the feasibility of a view-based method, the motion history image (MHI), to map biospeckle activity around the scar region of a green orange fruit. Comparison of MHI with routine intensity-based methods validated the effectiveness of the proposed method. The results show that MHI can be implemented as an alternative online image-processing tool in biospeckle analysis.
[Fractal research of neurite growth in immunofluorescent images].
Tang, Min; Wang, Huinan
2008-12-01
Fractal dimension has been widely used in medical image processing and analysis. Neurite growth in cultured dorsal root ganglion (DRG) neurons treated with nerve regeneration factor (0.1, 0.5, 2.0 mg/L) was detected by fluorescent immunocytochemistry. A novel method based on triangular prism surface area (TPSA) was introduced and adopted to calculate the fractal dimension of the two-dimensional immunofluorescent images. Experimental results demonstrate that this method is easy to understand and convenient to operate, and the quantitative results are concordant with observations under the microscope. This method can serve as a guideline for analyzing and evaluating experimental results.
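The triangular prism surface area method estimates a surface fractal dimension by covering the image intensity surface with four triangles per grid cell at increasing scales and fitting log(area) against log(scale). A hedged sketch on a synthetic image; a flat image is a plane, so its estimated dimension is 2.

```python
import math

def tri_area(p, q, r):
    """Area of a triangle in 3-D via the cross-product magnitude."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    c = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    return 0.5 * (c[0]**2 + c[1]**2 + c[2]**2) ** 0.5

def tpsa(img, step):
    """Total prism surface area at one scale: four triangles per cell,
    meeting at the cell centre at the mean of the corner intensities."""
    n, total = len(img), 0.0
    for i in range(0, n - step, step):
        for j in range(0, n - step, step):
            a = (i, j, img[i][j])
            b = (i + step, j, img[i + step][j])
            c = (i + step, j + step, img[i + step][j + step])
            d = (i, j + step, img[i][j + step])
            e = (i + step / 2, j + step / 2, (a[2] + b[2] + c[2] + d[2]) / 4)
            total += sum(tri_area(p, q, e)
                         for p, q in ((a, b), (b, c), (c, d), (d, a)))
    return total

def fractal_dim(img, steps=(1, 2, 4, 8)):
    """Slope of log(area) vs log(step); surface dimension D = 2 - slope."""
    xs = [math.log(s) for s in steps]
    ys = [math.log(tpsa(img, s)) for s in steps]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 2 - slope

flat = [[10.0] * 65 for _ in range(65)]   # a featureless image is a plane
```

For the flat image the measured area is the same at every scale, so the slope is zero and D = 2; rougher intensity surfaces, such as dense neurite arbors, lose apparent area as the scale coarsens and yield D between 2 and 3.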
Sample preparation of metal alloys by electric discharge machining
NASA Technical Reports Server (NTRS)
Chapman, G. B., II; Gordon, W. A.
1976-01-01
Electric discharge machining was investigated as a noncontaminating method of comminuting alloys for subsequent chemical analysis. Particulate dispersions in water were produced from bulk alloys at a rate of about 5 mg/min by using a commercially available machining instrument. The utility of this approach was demonstrated by results obtained when acidified dispersions were substituted for true acid solutions in an established spectrochemical method. The analysis results were not significantly different for the two sample forms. Particle size measurements and preliminary results from other spectrochemical methods which require direct aspiration of liquid into flame or plasma sources are reported.
TECHNOLOGIES FOR MONITORING AND MEASUREMENT ...
A demonstration of technologies for determining the presence of dioxin and dioxin-like compounds in soil and sediment was conducted under EPA's Superfund Innovative Technology Evaluation Program in Saginaw, Michigan in April 2004. This report describes the performance evaluation of the Abraxis LLC Coplanar PCB Enzyme-Linked Immunosorbent Assay (ELISA) kit. The kit is an immunoassay technique that reports the total toxicity equivalents (TEQ) of polychlorinated biphenyls (PCBs). The technology results were compared to high resolution mass spectrometry TEQ results generated using EPA Method 1668A. Abraxis generally reported data that were higher than the reference laboratory TEQ-PCB values, with the exception of ultra-high level PCB samples [> 10,000 picogram/gram (pg/g) TEQ], where Abraxis reported values lower than the reference method. The technology's estimated MDL was 6 to 31 pg/g TEQ-PCB. Results from this demonstration suggest that the Abraxis kit could be an effective tool for screening sample concentrations above and below 50 pg/g TEQ-PCB, particularly considering that the cost ($22,668 vs. $184,449) and the time to analyze the 209 demonstration samples were significantly less than those of the reference laboratory. The objective of this program is to promote the acceptance and use of innovative field technologies by providing well-documented performance and cost data obtained from field demonstrations.
Development of deterministic transport methods for low energy neutrons for shielding in space
NASA Technical Reports Server (NTRS)
Ganapol, Barry
1993-01-01
Transport of low energy neutrons associated with the galactic cosmic ray cascade is analyzed in this dissertation. A benchmark quality analytical algorithm is demonstrated for use with BRYNTRN, a computer program written by the High Energy Physics Division of NASA Langley Research Center, which is used to design and analyze shielding against the radiation created by the cascade. BRYNTRN uses numerical methods to solve the integral transport equations for baryons with the straight-ahead approximation, and numerical and empirical methods to generate the interaction probabilities. The straight-ahead approximation is adequate for charged particles, but not for neutrons. As NASA Langley improves BRYNTRN to include low energy neutrons, a benchmark quality solution is needed for comparison. The neutron transport algorithm demonstrated in this dissertation uses the closed-form Green's function solution to the galactic cosmic ray cascade transport equations to generate a source of neutrons. A basis function expansion for finite heterogeneous and semi-infinite homogeneous slabs with multiple energy groups and isotropic scattering is used to generate neutron fluxes resulting from the cascade. This method, called the FN method, is used to solve the neutral particle linear Boltzmann transport equation. As a demonstration of the algorithm coded in the programs MGSLAB and MGSEMI, neutron and ion fluxes are shown for a beam of fluorine ions at 1000 MeV per nucleon incident on semi-infinite and finite aluminum slabs. Also, to demonstrate that the shielding effectiveness against the radiation from the galactic cosmic ray cascade is not directly proportional to shield thickness, a graph of transmitted total neutron scalar flux versus slab thickness is shown. A simple model based on the nuclear liquid drop assumption is used to generate cross sections for the galactic cosmic ray cascade. 
The ENDF/B V database is used to generate the total and scattering cross sections for neutrons in aluminum. As an external verification, the results from MGSLAB and MGSEMI were compared to ANISN/PC, a routinely used neutron transport code, showing excellent agreement. In an application to an aluminum shield, the FN method seems to generate reasonable results.
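The neutral-particle equation that the FN method solves in this setting is, in standard notation, the multigroup slab-geometry form of the linear Boltzmann equation with isotropic scattering (a generic statement of the problem class, not reproduced from the dissertation):

```latex
\mu \frac{\partial \psi_g}{\partial x}(x,\mu) + \sigma_g(x)\,\psi_g(x,\mu)
  = \sum_{g'} \frac{\sigma_{s,\,g' \to g}(x)}{2}
    \int_{-1}^{1} \psi_{g'}(x,\mu')\,d\mu' + Q_g(x,\mu),
```

where ψ_g is the angular flux in energy group g, μ the direction cosine, σ_g and σ_{s,g'→g} the total and group-transfer scattering cross sections, and Q_g the group source (here, neutrons generated by the galactic cosmic ray cascade).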
NASA Astrophysics Data System (ADS)
Taneja, Ankur; Higdon, Jonathan
2018-01-01
A high-order spectral element discontinuous Galerkin method is presented for simulating immiscible two-phase flow in petroleum reservoirs. The governing equations involve a coupled system of strongly nonlinear partial differential equations for the pressure and fluid saturation in the reservoir. A fully implicit method is used with a high-order accurate time integration using an implicit Rosenbrock method. Numerical tests give the first demonstration of high order hp spatial convergence results for multiphase flow in petroleum reservoirs with industry standard relative permeability models. High order convergence is shown formally for spectral elements with up to 8th order polynomials for both homogeneous and heterogeneous permeability fields. Numerical results are presented for multiphase fluid flow in heterogeneous reservoirs with complex geometric or geologic features using up to 11th order polynomials. Robust, stable simulations are presented for heterogeneous geologic features, including globally heterogeneous permeability fields, anisotropic permeability tensors, broad regions of low-permeability, high-permeability channels, thin shale barriers and thin high-permeability fractures. A major result of this paper is the demonstration that the resolution of the high order spectral element method may be exploited to achieve accurate results utilizing a simple cartesian mesh for non-conforming geological features. Eliminating the need to mesh to the boundaries of geological features greatly simplifies the workflow for petroleum engineers testing multiple scenarios in the face of uncertainty in the subsurface geology.
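The coupled pressure-saturation system referred to above is, in one standard fractional-flow form (assumed here for illustration; the paper's exact formulation may differ):

```latex
-\nabla \cdot \bigl(\lambda_t(S_w)\,\mathbf{K}\,\nabla p\bigr) = q_t,
\qquad
\phi \frac{\partial S_w}{\partial t}
  + \nabla \cdot \bigl(f_w(S_w)\,\mathbf{u}_t\bigr) = q_w,
```

with total mobility λ_t = k_{rw}/μ_w + k_{rn}/μ_n, fractional flow f_w = (k_{rw}/μ_w)/λ_t, total velocity **u**_t = −λ_t **K** ∇p, porosity φ, and permeability tensor **K**. The strong nonlinearity enters through the relative permeabilities k_{rw}, k_{rn} as functions of saturation S_w.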
Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology
NASA Technical Reports Server (NTRS)
Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus
2013-01-01
Changing trends in ecosystem productivity can be quantified using satellite observations of the Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on the analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annually aggregated time series, or on a seasonal-trend model, perform better than methods that remove the seasonal cycle from the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods to long-term NDVI time series. In particular, we apply and compare the different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be made more robust to inter-annual variability in order to quantify changing trends in ecosystem productivity with higher accuracy.
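The annual-aggregation approach the authors favor can be sketched in a few lines (a minimal illustration on a synthetic monthly series; the array layout and all numbers are invented, not from the paper's datasets):

```python
import numpy as np

def annual_trend(ndvi_monthly, start_year):
    """OLS trend slope (NDVI units per year) of an annually aggregated series.

    Averaging each calendar year removes the seasonal cycle before fitting,
    which is the strategy the comparison above found most robust."""
    n_years = len(ndvi_monthly) // 12
    annual = np.reshape(ndvi_monthly[:n_years * 12], (n_years, 12)).mean(axis=1)
    years = start_year + np.arange(n_years)
    slope, _intercept = np.polyfit(years, annual, 1)
    return slope, annual

# synthetic 30-year monthly NDVI: base + linear greening trend + seasonality + noise
rng = np.random.default_rng(0)
months = np.arange(30 * 12)
series = (0.4 + 0.005 * (months / 12)
          + 0.1 * np.sin(2 * np.pi * months / 12)
          + 0.01 * rng.standard_normal(months.size))
slope, annual = annual_trend(series, 1982)
```

On this synthetic series the recovered slope is close to the injected 0.005 per year because the full-year means cancel the seasonal sine exactly.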
Rapid detection of mecA and spa by the loop-mediated isothermal amplification (LAMP) method.
Koide, Y; Maeda, H; Yamabe, K; Naruishi, K; Yamamoto, T; Kokeguchi, S; Takashiba, S
2010-04-01
To develop a detection assay for staphylococcal mecA and spa using the loop-mediated isothermal amplification (LAMP) method, Staphylococcus aureus and other related species were subjected to detection of mecA and spa by both PCR and LAMP. The LAMP successfully amplified the genes under isothermal conditions at 64°C within 60 min and gave results identical to those of the conventional PCR methods. The detection limits of the LAMP for mecA and spa, by gel electrophoresis, were 10² and 10 cells per tube, respectively. Naked-eye inspection was possible with 10³ and 10 cells for detection of mecA and spa, respectively. The LAMP method was then applied to sputum and dental plaque samples. The LAMP and PCR gave identical results for the plaque samples, although the detection frequency of mecA and spa by the LAMP was somewhat lower than that of the PCR methods for the sputum samples. Application of the LAMP enabled a rapid detection assay for mecA and spa. The assay may be applicable to clinical plaque samples, and the LAMP offers an alternative detection assay for mecA and spa with the great advantage of rapidity.
NASA Technical Reports Server (NTRS)
Price, J. M.; Ortega, R.
1998-01-01
Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure-mode level.
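The kind of demonstration described, treating a material driver such as elastic modulus as random and propagating it to a failure probability, can be sketched as a Monte Carlo reliability estimate. Everything below (the Euler-buckling limit state, the lognormal modulus, all numerical values) is invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# elastic modulus as a lognormal random driver (illustrative aluminum-like values)
E_mean, E_cov = 70e9, 0.10          # Pa, coefficient of variation
sigma = np.sqrt(np.log(1.0 + E_cov**2))
mu = np.log(E_mean) - 0.5 * sigma**2     # so that the lognormal mean is E_mean
E = rng.lognormal(mu, sigma, n)

# hypothetical limit state: Euler column buckles if critical load < applied load
I_sec, L, P_applied = 1e-8, 1.0, 5.5e3   # m^4, m, N (invented)
P_cr = np.pi**2 * E * I_sec / L**2
pf = np.mean(P_cr < P_applied)           # estimated probability of failure
```

With these numbers the failure probability comes out near one percent; repeating the estimate while perturbing the modulus statistics is the simplest form of the sensitivity analysis mentioned in objective (4).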
Least Squares Moving-Window Spectral Analysis.
Lee, Young Jong
2017-08-01
Least squares regression is proposed as a moving-window method for the analysis of a series of spectra acquired as a function of an external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high-frequency noise.
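The moving-window least-squares idea for nonuniform spacing can be sketched as follows (a minimal illustration; window size and polynomial order are the tuning choices the paper characterizes, and the function name is ours):

```python
import numpy as np

def lsmw_derivative(x, y, half_window=3, order=2):
    """Estimate dy/dx at each point by a local least-squares polynomial fit.

    Unlike classic Savitzky-Golay filters, this works for nonuniformly
    spaced abscissa x, which is the point of the LSMW extension."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    dydx = np.full(n, np.nan)          # edges where the window does not fit
    for i in range(half_window, n - half_window):
        sl = slice(i - half_window, i + half_window + 1)
        # center the abscissa so the linear coefficient is the derivative at x[i]
        coeffs = np.polyfit(x[sl] - x[i], y[sl], order)
        dydx[i] = coeffs[-2]           # coefficient of the linear term
    return dydx

# nonuniformly spaced demo: y = x^2 has exact derivative 2x
x = np.sqrt(np.linspace(0.01, 1.0, 50))
d = lsmw_derivative(x, x**2)
```

Because a quadratic fit reproduces quadratic data exactly regardless of spacing, the interior estimates here match 2x to machine precision; on noisy spectra the window width trades resolution against noise suppression.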
Computational simulation of concurrent engineering for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Singhal, S. N.
1992-01-01
Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.
Computational simulation for concurrent engineering of aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Singhal, S. N.
1993-01-01
Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.
A radial basis function Galerkin method for inhomogeneous nonlocal diffusion
Lehoucq, Richard B.; Rowe, Stephen T.
2016-02-01
We introduce a discretization for a nonlocal diffusion problem using a localized basis of radial basis functions. The stiffness matrix entries are assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, sparse, symmetric positive definite stiffness matrix. We demonstrate that both the continuum and discrete problems are well-posed and present numerical results for the convergence behavior of the radial basis function method. As a result, we explore approximating the solution to anisotropic differential equations by solving anisotropic nonlocal integral equations using the radial basis function method.
Computational simulation for concurrent engineering of aerospace propulsion systems
NASA Astrophysics Data System (ADS)
Chamis, C. C.; Singhal, S. N.
1993-02-01
Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.
Hassett, Brenna R
2014-03-01
Linear enamel hypoplasia (LEH), the presence of linear defects of dental enamel formed during periods of growth disruption, is frequently analyzed in physical anthropology as evidence for childhood health in the past. However, a wide variety of methods for identifying and interpreting these defects in archaeological remains exists, preventing easy cross-comparison of results from disparate studies. This article compares a standard approach to identifying LEH using the naked eye to the evidence of growth disruption observed microscopically from the enamel surface. This comparison demonstrates that what is interpreted as evidence of growth disruption microscopically is not uniformly identified with the naked eye, and provides a reference for the level of consistency between the number and timing of defects identified using microscopic versus macroscopic approaches. This is done for different tooth types using a large sample of unworn permanent teeth drawn from several post-medieval London burial assemblages. The resulting schematic diagrams showing where macroscopic methods achieve more or less similar results to microscopic methods are presented here and clearly demonstrate that "naked-eye" methods of identifying growth disruptions do not identify LEH as often as microscopic methods in areas where perikymata are more densely packed. Copyright © 2013 Wiley Periodicals, Inc.
Effect of reconstructive vascular surgery on red cell deformability--preliminary results.
Irwin, S T; Rocks, M J; McGuigan, J A; Patterson, C C; Morris, T C; O'Reilly, M J
1983-01-01
Using a simple filtration method, red cell deformability was measured in healthy control subjects and in patients with peripheral vascular disease. Impaired red cell deformability was demonstrated in patients with rest pain or gangrene and in patients with intermittent claudication. An improvement in red cell deformability was demonstrated after successful reconstructive vascular surgery in both patient groups. An improvement in red cell deformability was demonstrated in patients undergoing major limb amputation. PMID:6619311
Content and strategy in syllogistic reasoning.
Marrero, Hipólito; Gámez, Elena
2004-09-01
Syllogistic reasoning has been investigated as a general deductive process (Johnson-Laird & Byrne, 1991; Revlis, 1975; Rips, 1994). However, several studies have demonstrated the role of cognitive strategies in this type of reasoning. These strategies focus on the method used by the participants (Ford, 1995; Gilhooly, Logie, Wetherick, & Wynn, 1993) and strategies related to different interpretations of the quantified premises (Roberts, Newstead, & Griggs, 2001). In this paper, we propose that content (as well as individual cognitive differences) is an important factor in inducing a certain strategy or method for syllogistic resolution. Specifically, we suggest that syllogisms with a causal conditional premise that can be extended by an agency premise induce the use of a conditional method. To demonstrate this, we carried out two experiments. Experiment 1 provided evidence that this type of syllogism leads participants to draw the predicted conditional conclusions, in contrast with control content syllogisms. In Experiment 2, we demonstrated that the drawing of conditional conclusions is based on a causal conditional to an agent representation of the syllogism premises. These results support the role of content as inducing a particular strategy for syllogistic resolution. The implications of these results are discussed.
Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.
Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L
2017-10-01
The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. By using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
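The data-based flavor of such algorithms, learning a Q-function from logged transitions with no system model, can be illustrated with a toy tabular analogue. This is plain fitted Q-iteration on an invented two-state MDP, not the authors' PGADP; the shared idea is that only (state, action, reward, next-state) data enter the update:

```python
import numpy as np

# Toy deterministic MDP: states {0,1}, actions {0,1}; the action chooses the
# next state, and reward 1 is earned only for taking action 1 in state 1.
gamma = 0.9
data = [(s, a, 1.0 if (s == 1 and a == 1) else 0.0, a)
        for s in (0, 1) for a in (0, 1)]          # "offline" transition log

Q = np.zeros((2, 2))
for _ in range(200):                               # sweeps over the dataset
    Q_new = Q.copy()
    for s, a, r, s2 in data:
        Q_new[s, a] = r + gamma * Q[s2].max()      # Bellman backup from data
    Q = Q_new

policy = Q.argmax(axis=1)                          # greedy policy per state
```

The iterates converge to the optimal Q-function (here Q(1,1) = 1/(1−γ) = 10, Q(0,1) = 9, Q(0,0) = 8.1), and the greedy policy heads to state 1 and stays, all without ever writing down the transition model.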
Direct Observation of Markovian Behavior of the Mechanical Unfolding of Individual Proteins
Cao, Yi; Kuske, Rachel; Li, Hongbin
2008-01-01
Single-molecule force-clamp spectroscopy is a valuable tool to analyze unfolding kinetics of proteins. Previous force-clamp spectroscopy experiments have demonstrated that the mechanical unfolding of ubiquitin deviates from the generally assumed Markovian behavior and involves the features of glassy dynamics. Here we use single-molecule force-clamp spectroscopy to study the unfolding kinetics of a computationally designed fast-folding mutant of the small protein GB1, which shares a β-grasp fold similar to that of ubiquitin. By treating the mechanical unfolding of polyproteins as the superposition of multiple identical Poisson processes, we developed a simple stochastic analysis approach to analyze the dwell time distribution of individual unfolding events in polyprotein unfolding trajectories. Our results unambiguously demonstrate that the mechanical unfolding of NuG2 fulfills all criteria of a memoryless Markovian process. This result, in contrast with the complex mechanical unfolding behaviors observed for ubiquitin, serves as a direct experimental demonstration of the Markovian behavior for the mechanical unfolding of a protein and reveals the complexity of the unfolding dynamics among structurally similar proteins. Furthermore, we extended our method into a robust and efficient pseudo-dwell-time analysis method, which allows one to make full use of all the unfolding events obtained in force-clamp experiments without categorizing the unfolding events. This method enabled us to measure the key parameters characterizing the mechanical unfolding energy landscape of NuG2 with improved precision. We anticipate that the methods demonstrated here will find broad applications in single-molecule force-clamp spectroscopy studies for a wide range of proteins. PMID:18375518
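The superposition picture can be sketched numerically (illustrative rates, not the paper's data): if a polyprotein has N folded domains, each unfolding independently at rate k, the dwell time to the first unfolding event is exponential with rate N·k, the next with (N−1)·k, and so on, which is exactly the memoryless signature being tested.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, trials = 8, 0.5, 20_000        # domains, unfolding rate (1/s), repeats

# each domain's unfolding time is Exp(k); sorting gives the event times,
# and successive differences are the dwell times between unfolding events
t = np.sort(rng.exponential(1.0 / k, size=(trials, N)), axis=1)
dwell_first = t[:, 0]                # ~ Exp(N * k) by superposition
dwell_second = t[:, 1] - t[:, 0]     # memorylessness => ~ Exp((N - 1) * k)
```

The sample means land near 1/(N k) and 1/((N−1) k) respectively; a systematic deviation from these exponential predictions is what glassy, non-Markovian unfolding (as reported for ubiquitin) would produce.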
A novel slice preparation to study medullary oromotor and autonomic circuits in vitro
Nasse, Jason S.
2014-01-01
Background: The medulla is capable of controlling and modulating ingestive behavior and gastrointestinal function. These two functions, which are critical to maintaining homeostasis, are governed by an interconnected group of nuclei dispersed throughout the medulla. As such, in vitro experiments to study the neurophysiologic details of these connections have been limited by the spatial constraints of conventional slice preparations. New method: This study demonstrates a novel method of sectioning the medulla so that sensory, integrative, and motor nuclei that innervate the gastrointestinal tract and the oral cavity remain intact. Results: Immunohistochemical staining against choline acetyltransferase and dopamine-β-hydroxylase demonstrated that within a 450 μm block of tissue we are able to capture sensory, integrative, and motor nuclei that are critical to oromotor and gastrointestinal function. Within-slice tracing shows that axonal projections from the NST to the reticular formation and from the reticular formation to the hypoglossal motor nucleus (mXII) persist. Live-cell calcium imaging of the slice demonstrates that stimulation of either the rostral or caudal NST activates neurons throughout the NST, as well as the reticular formation and mXII. Comparison with existing methods: This new method of sectioning captures a majority of the nuclei that are active when ingesting a meal. Traditional planes of section, i.e., coronal, horizontal, or sagittal, contain only a limited portion of this substrate. Conclusions: Our results demonstrate that both the anatomical and physiologic connections of oral and visceral sensory nuclei that project to integrative and motor nuclei remain intact in this new plane of section. PMID:25196216
The direct liquefaction proof of concept program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comolli, A.G.; Lee, L.K.; Pradhan, V.R.
1995-12-31
The goal of the Proof of Concept (POC) Program is to develop Direct Coal Liquefaction and associated transitional technologies towards commercial readiness for economically producing premium liquid fuels from coal in an environmentally acceptable manner. The program focuses on developing the two-stage liquefaction (TSL) process by utilizing geographically strategic feedstocks, commercially feasible catalysts, and new prototype equipment, and by testing co-processing or alternate feedstocks and improved process configurations. Other high-priority objectives include dispersed catalyst studies, demonstrating low-rank coal liquefaction without solids deposition, improving distillate yields on a unit reactor volume basis, demonstrating ebullated-bed operations while obtaining scale-up data, demonstrating optimum catalyst consumption using new concepts (e.g., regeneration, cascading), producing premium products through on-line hydrotreating, demonstrating improved hydrogen utilization for low-rank coals using novel heteroatom removal methods, defining and demonstrating two-stage product properties for upgrading, demonstrating efficient and economic solids separation methods, examining the merits of integrated coal cleaning, demonstrating co-processing, studying interactions between the preheater and the first- and second-stage reactors, improving process operability by testing and incorporating advanced equipment and instrumentation, and demonstrating operation with alternate coal feedstocks. During the past two years, two major PDU Proof of Concept runs were completed: POC-1 with Illinois No. 6 coal and POC-2 with Black Thunder sub-bituminous coal. Results from these operations remain under review, and the products are being further refined and upgraded. This paper will update the results from these operations and discuss future plans for the POC program.
Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Baleanu, D.; Hafez, R. M.
2013-09-01
In this paper, we propose the shifted Jacobi-Gauss collocation spectral method for solving initial value problems of Bratu type, which arise widely in fuel ignition models of combustion theory and in heat transfer. The spatial approximation is based on shifted Jacobi polynomials J_n^(α,β)(x), with α, β ∈ (−1, ∞), x ∈ [0, 1], and n the polynomial degree. The shifted Jacobi-Gauss points are used as collocation nodes. Illustrative examples are discussed to demonstrate the validity and applicability of the proposed technique. Comparisons of the numerical results of the proposed method with some well-known results show that the method is efficient and gives excellent numerical results.
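A minimal collocation sketch in the same spirit, using Legendre-Gauss nodes (the Jacobi case α = β = 0), a plain monomial trial basis, and SciPy's root finder in place of the paper's machinery: solve the Bratu-type IVP u″ = 2eᵘ, u(0) = u′(0) = 0, whose exact solution is u(x) = −2 ln cos x.

```python
import numpy as np
from scipy.optimize import fsolve

deg = 12                                  # degree of the polynomial trial solution
m = deg - 1                               # number of collocation conditions
nodes, _ = np.polynomial.legendre.leggauss(m)
xc = 0.5 * (nodes + 1.0)                  # Gauss nodes shifted to (0, 1)

P = np.polynomial.polynomial              # power-series helpers

def residual(c_free):
    # u(x) = sum_{k>=2} c_k x^k enforces u(0) = u'(0) = 0 exactly
    c = np.concatenate(([0.0, 0.0], c_free))
    u = P.polyval(xc, c)
    upp = P.polyval(xc, P.polyder(c, 2))
    return upp - 2.0 * np.exp(u)          # collocated ODE residual

c_free = fsolve(residual, np.zeros(m))
c = np.concatenate(([0.0, 0.0], c_free))

xs = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(P.polyval(xs, c) + 2.0 * np.log(np.cos(xs))))
```

With a degree-12 polynomial the maximum error against the closed-form solution is already far below 10⁻⁵, which is the spectral-accuracy behavior the paper demonstrates for its shifted Jacobi variant.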
Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille
2015-01-01
This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
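The flavor of the STAPLE combination can be shown with a minimal EM implementation for binary segmentations (a simplified 1-D sketch with invented data; the real STAPLE algorithm adds priors and further refinements):

```python
import numpy as np

def staple_binary(D, n_iter=50):
    """D: (raters, voxels) binary decisions. Returns the fused foreground
    probability per voxel plus each rater's estimated sensitivity p and
    specificity q, via the EM scheme underlying STAPLE."""
    R, N = D.shape
    p = np.full(R, 0.9)                    # initial sensitivity guesses
    q = np.full(R, 0.9)                    # initial specificity guesses
    f = D.mean()                           # prior prob. a voxel is foreground
    for _ in range(n_iter):
        # E-step: posterior that each voxel is truly foreground
        a = f * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - f) * np.prod(np.where(D == 1, 1 - q[:, None], q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate each rater's performance against the soft truth
        p = (D * W).sum(axis=1) / W.sum()
        q = ((1 - D) * (1 - W)).sum(axis=1) / (1 - W).sum()
    return W, p, q

# simulated "left ventricle" on a 1-D line, with three raters of varying quality
rng = np.random.default_rng(7)
truth = np.zeros(200, int); truth[60:140] = 1
D = np.array([np.where(rng.random(200) < e, 1 - truth, truth)
              for e in (0.05, 0.10, 0.25)])
W, p, q = staple_binary(D)
fused = (W > 0.5).astype(int)
```

The fused result is more accurate than any single noisy rater, and the estimated p, q rank the raters by quality, which is the mechanism that lets STAPLE outperform individual segmentation methods in the evaluations above.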
Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe
2015-01-01
This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691
Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F; Musen, Mark A
The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with base line approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks.
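The decentralized-search core of such a method can be sketched as greedy navigation on a network in which "background knowledge" is any estimate of distance between nodes. Here a toy grid network stands in for the information network and Manhattan distance stands in for the ontology-derived similarity; everything is illustrative:

```python
# toy information network: nodes are 2-D grid points, links to orthogonal neighbors
nodes = {(i, j) for i in range(6) for j in range(6)}

def neighbors(n):
    i, j = n
    cand = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return [c for c in cand if c in nodes]

def knowledge_distance(a, b):
    """Stand-in for ontology-derived similarity between two nodes."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def decentralized_search(start, target, max_steps=50):
    """Greedy navigation: step to the neighbor that the background knowledge
    ranks closest to the target. The navigator sees only local links,
    never the global network, as in decentralized search."""
    path = [start]
    current = start
    for _ in range(max_steps):
        if current == target:
            return path
        current = min(neighbors(current), key=lambda n: knowledge_distance(n, target))
        path.append(current)
    return path

path = decentralized_search((0, 0), (5, 4))
```

Swapping in a different `knowledge_distance` models a navigator with different background knowledge, which is how the OBDS simulations represent users equipped with different ontologies.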
NASA Astrophysics Data System (ADS)
Holmes, Timothy W.
2001-01-01
A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a `concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared error objective. The method was implemented using the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture dependent corrections, especially `head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
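The optimization core described above (steepest descent with constant step size and a least-squared-error objective) can be sketched independently of the delivery physics. Here D is a toy dose-deposition matrix and the leakage/head-scatter correction step is omitted; all names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.random((40, 15))          # dose matrix: 40 voxels x 15 beamlet intensities
d_target = D @ rng.random(15)     # a prescription known to be attainable

def optimize_intensity(D, d_target, n_iter=10_000):
    """Minimize ||D x - d||^2 over x >= 0 by projected steepest descent
    with a constant step size below 1/L (L = Lipschitz constant of the gradient)."""
    step = 0.9 / (2.0 * np.linalg.norm(D, 2) ** 2)
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ x - d_target)
        x = np.maximum(x - step * grad, 0.0)   # intensities cannot be negative
    return x

x = optimize_intensity(D, d_target)
err = np.linalg.norm(D @ x - d_target)
```

In the paper's scheme, a leaf-sequencing pass inside each iteration would convert x into leaf exposure times and fold the aperture-dependent leakage and head-scatter corrections back into the objective; the fixed-point structure of the descent loop is unchanged.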
Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F.; Musen, Mark A.
2015-01-01
The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with base line approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks. PMID:26568745
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Qili; Institute of Robotics and Automatic Information System, Nankai University, Tianjin 300071; Shirinzadeh, Bijan
2015-07-28
A novel weighing method for cells with spherical and other regular shapes is proposed in this paper. In this method, the relationship between the cell mass and the minimum aspiration pressure needed to immobilize the cell (referred to as the minimum immobilization pressure) is derived for the first time from static theory. Based on this relationship, a robotic cell weighing process is established using a traditional micro-injection system. Experimental results on porcine oocytes demonstrate that the proposed method is able to weigh cells at an average speed of 16.3 s/cell and with a success rate of more than 90%. The derived cell mass and density are in accordance with those reported in other published results. The experimental results also demonstrate that this method can quantitatively detect less than 1% variation in porcine oocyte mass. It can be conducted with a pair of traditional micropipettes and a commercial pneumatic micro-injection system, and is expected to support robotic operation on batches of cells. At present, the minimum resolution of the proposed method for measuring cell mass is 1.25 × 10⁻¹⁵ kg. These advantages make it well suited for quantifying the amount of material injected into or removed from cells in biological applications such as nuclear enucleation and embryo microinjection.
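A plausible form of the static balance behind such a relationship (our generic reconstruction, not taken from the paper) equates the pipette's holding force at the threshold pressure with the cell's buoyancy-corrected weight:

```latex
P_{\min}\,\pi r_p^{2} \;=\; \bigl(m - \rho_f V\bigr)\,g
\quad\Longrightarrow\quad
m \;=\; \rho_f V + \frac{P_{\min}\,\pi r_p^{2}}{g},
```

where r_p is the pipette inner radius, ρ_f the medium density, g gravitational acceleration, and V the cell volume (for a spherical cell, V = (4/3)π r_c³ with cell radius r_c, so measuring r_c and P_min yields both mass and density).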
Chen, Shuonan; Mar, Jessica C
2018-06-19
A fundamental fact in biology is that genes do not operate in isolation, and yet methods that infer regulatory networks for single cell gene expression data have been slow to emerge. With single cell sequencing methods now becoming accessible, general network inference algorithms that were initially developed for data collected from bulk samples may not be suitable for single cells. Meanwhile, although methods that are specific for single cell data are now emerging, whether they have improved performance over general methods is unknown. In this study, we evaluate the applicability of five general methods and three single cell methods for inferring gene regulatory networks from both experimental single cell gene expression data and in silico simulated data. Standard evaluation metrics using ROC curves and Precision-Recall curves against reference sets sourced from the literature demonstrated that most of the methods performed poorly when applied to either experimental or simulated single cell data. Using default settings, network methods were applied to the same datasets. Comparisons of the learned networks highlighted the uniqueness of some predicted edges for each method. The fact that different methods infer networks that vary substantially reflects the underlying mathematical rationale and assumptions that distinguish network methods from each other. This study provides a comprehensive evaluation of network modeling algorithms applied to experimental single cell gene expression data and in silico simulated datasets where the network structure is known. Comparisons demonstrate that most of these assessed network methods are not able to predict network structures from single cell expression data accurately, even those specifically developed for single cell data.
Also, single cell methods, which usually depend on more elaborate algorithms, in general have less similarity to each other in the sets of edges detected. The results from this study emphasize the importance of developing more accurate, optimized network modeling methods that are compatible with single cell data. Newly developed single cell methods may uniquely capture particular features of potential gene-gene relationships, and caution should be taken when interpreting these results.
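Evaluation against a literature reference set of the kind described above can be sketched as a rank-based AUROC over candidate edges; the scoring convention below is an assumption for illustration, not the study's exact pipeline:

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    scores: predicted confidence per candidate edge (higher = more likely);
    labels: 1 if the edge is in the reference network, else 0.
    Ties in scores are broken arbitrarily in this sketch."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # sum of ranks of true edges, shifted by the minimum possible rank sum
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUROC of 0.5 corresponds to random edge ranking, which is roughly what a poorly performing inference method produces against the reference set.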
Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System
NASA Astrophysics Data System (ADS)
Kwon, S.; Son, S.; Lee, K.
2018-04-01
In this study, an experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods: in-place calibration and two existing calibrations, namely array rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter. The received signals at the dual receivers contain similar unwanted signals, that is, the directly received signal and antenna coupling. In contrast to the simulations, the antennas are not perfectly matched and there may be unexpected environmental errors. Thus, we used the developed measurement system to demonstrate the proposed method experimentally. The possible problems of low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were rectified by averaging repeatedly measured signals. According to the experimental results, the tumor was successfully detected using all three calibration methods. The cross correlation was calculated using the reconstructed image of the ideal differential calibration for a quantitative comparison between the existing rotation calibration and the proposed in-place calibration. The mean value of the cross correlation between the in-place calibration and the ideal differential calibration was 0.80, whereas the mean value for the rotation calibration was 0.55. Furthermore, the simulation results were compared with the experimental results to verify the in-place calibration method. A quantitative analysis was also performed, and the experimental results show a tendency similar to the simulations.
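The two signal-processing ideas above, averaging repeated acquisitions to suppress noise and jitter, and subtracting the symmetric receiver's signal to cancel the direct path and antenna coupling, can be sketched as follows. All waveforms and amplitudes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_repeats = 512, 100
t = np.linspace(0.0, 1.0, n_samples)

# Common unwanted components seen by both receivers (direct signal plus
# coupling), and a weak tumor response present only at receiver 1.
direct = np.sin(2 * np.pi * 5 * t)
tumor = 0.2 * np.exp(-((t - 0.6) ** 2) / 0.001)

def measure(signal):
    """One averaged acquisition; averaging N repeats improves SNR ~ sqrt(N)."""
    shots = signal + 0.1 * rng.standard_normal((n_repeats, n_samples))
    return shots.mean(axis=0)

rx1 = measure(direct + tumor)   # receiver nearer the tumor response
rx2 = measure(direct)           # symmetric receiver, same unwanted signals
calibrated = rx1 - rx2          # in-place calibration: common terms cancel
```

After subtraction, the shared direct signal cancels and the tumor response stands out above the (averaged-down) noise floor.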
Geophysical exploration with audio frequency magnetic fields
NASA Astrophysics Data System (ADS)
Labson, V. F.
1985-12-01
Experience with the Audio Frequency Magnetic (AFMAG) method has demonstrated that an electromagnetic exploration system using the Earth's natural audiofrequency magnetic fields as an energy source is capable of mapping subsurface electrical structure in the upper kilometer of the Earth's crust. The limitations are resolved by adapting the tensor analysis and remote reference noise bias removal techniques from the geomagnetic induction and magnetotelluric methods to the computation of the tippers. After a thorough spectral study of the natural magnetic fields, lightweight magnetic field sensors capable of measuring the magnetic field throughout the year were designed. A digital acquisition and processing system, with the ability to provide audiofrequency tipper results in the field, was then built to complete the apparatus. The new instrumentation was used in a study of the Mariposa, California site previously mapped with AFMAG. The usefulness of natural magnetic field data in mapping an electrically conductive body was again demonstrated. Several field examples are used to demonstrate that the proposed procedure yields reasonable results.
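The tipper computation mentioned above amounts, in its simplest form, to a linear regression of the vertical magnetic field on the two horizontal components, Hz = Tx·Hx + Ty·Hy. The synthetic-data sketch below shows the least-squares version only; the paper's tensor analysis and remote-reference bias removal are not reproduced:

```python
import numpy as np

# Synthetic horizontal fields and a vertical field generated by a known
# tipper (Tx, Ty) plus measurement noise; values are illustrative.
rng = np.random.default_rng(1)
n = 1000
hx = rng.standard_normal(n)
hy = rng.standard_normal(n)
tx_true, ty_true = 0.3, -0.1
hz = tx_true * hx + ty_true * hy + 0.01 * rng.standard_normal(n)

# Recover the tipper by least squares over the time series.
A = np.column_stack([hx, hy])
(tx_est, ty_est), *_ = np.linalg.lstsq(A, hz, rcond=None)
```

In practice this regression is done per frequency band on Fourier coefficients, with a remote reference to avoid noise bias; the time-domain version here is only to show the linear-model structure.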
NASA Astrophysics Data System (ADS)
Zhou, Lin; Liu, Jihua; Wei, Shaohua; Ge, Xuefeng; Zhou, Jiahong; Yu, Boyang; Shen, Jian
2013-09-01
Many anticancer drugs can form stable complexes with metal ions. Based on this property, a simple method is proposed to combine these drugs with transferrin, through the interaction between the drug and the Fe ion of transferrin, to improve their anticancer activity. To demonstrate this technique, a complex of the photosensitive anticancer drug hypocrellin A and transferrin was prepared by this facile method. The results indicated that the complex of hypocrellin A and transferrin is stable in aqueous solution. In vitro studies demonstrated superior cancer cell uptake of the hypocrellin A-transferrin complex compared to free hypocrellin A. Significant damage to drug-impregnated tumor cells was observed upon irradiation, and the cancer cell killing ability of hypocrellin A-transferrin was stronger than that of free hypocrellin A within a certain range of concentrations. These results demonstrate the validity and potential of the proposed strategy for preparing drug delivery systems from this type of anticancer drug and transferrin.
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
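Steps (1) and (4) of the pipeline above can be sketched as follows. The AC-coefficient minimization, look-up table, and arithmetic coding stages are omitted, and the 8×8 block size is an assumption:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block (step 1), built from the
    separable DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

def delta_encode(dc):
    """Step 4: differential (delta) coding of the DC components."""
    dc = np.asarray(dc, dtype=float)
    return np.concatenate([dc[:1], np.diff(dc)])

def delta_decode(deltas):
    """Inverse of delta_encode, used at decompression."""
    return np.cumsum(deltas)
```

The DC coefficient of each block carries the block mean (times the block size for this normalization), so neighbouring blocks have similar DC values and the delta stream compresses well under entropy coding.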
Benefits of a one health approach: An example using Rift Valley fever.
Rostal, Melinda K; Ross, Noam; Machalaba, Catherine; Cordel, Claudia; Paweska, Janusz T; Karesh, William B
2018-06-01
One Health has been promoted by international institutions as a framework to improve public health outcomes. Despite strong overall interest in One Health, country-, local- and project-level implementation remains limited, likely due to the lack of pragmatic and tested operational methods for implementation and metrics for evaluation. Here we use Rift Valley fever virus as an example to demonstrate the value of using a One Health approach for both scientific and resource advantages. We demonstrate that coordinated, a priori investigations between One Health sectors can yield higher statistical power to elucidate important public health relationships as compared to siloed investigations and post-hoc analyses. Likewise, we demonstrate that across a project or multi-ministry health study a One Health approach can result in improved resource efficiency, with resulting cost savings (35% in the presented case). The results of these analyses demonstrate that One Health approaches can be directly and tangibly applied to health investigations.
Huang, Yunrui; Zhou, Qingxiang; Xie, Guohong
2013-01-01
Fungicides have been widely used throughout the world, and the resulting pollution has attracted considerable attention in recent years. The present study describes an effective measurement technique for fungicides including thiram, metalaxyl, diethofencarb, myclobutanil and tebuconazole in environmental water samples. A micro-solid phase extraction (μSPE) method was developed utilizing an ordered TiO(2) nanotube array for determination of the target fungicides prior to high performance liquid chromatography (HPLC). The experimental results indicated that TiO(2) nanotube arrays performed excellently for the preconcentration of fungicides, and an excellent linear relationship between peak area and fungicide concentration was obtained in the range of 0.1-50 μg L(-1). The detection limits for the targeted fungicides were in the range of 0.016-0.086 μg L(-1) (S/N=3). Four real environmental water samples were used to validate the applicability of the proposed method, and good spiked recoveries in the range of 73.9-114% were achieved. A comparison of the present method with conventional solid phase extraction showed that the proposed method achieved better recoveries. The results demonstrated that this μSPE technique is a viable alternative for the analysis of fungicides in complex samples. Copyright © 2012 Elsevier Ltd. All rights reserved.
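The linear calibration and S/N = 3 detection limit described above follow a standard pattern, sketched below with made-up peak areas and an assumed blank noise level:

```python
import numpy as np

# Hypothetical calibration data: spiked concentrations (ug/L) vs. HPLC peak
# areas; the numbers are illustrative, not from the study.
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
area = np.array([2.1, 10.3, 20.4, 101.0, 199.5, 1002.0])

# Ordinary least-squares calibration line and its linearity.
slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]

# Detection limit at S/N = 3, given an assumed blank signal noise.
sigma_blank = 0.1
lod = 3.0 * sigma_blank / slope
```

Unknown samples are then quantified as `(peak_area - intercept) / slope`, and spiked recoveries are the ratio of measured to spiked concentration.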
Estimation of phase derivatives using discrete chirp-Fourier-transform-based method.
Gorthi, Sai Siva; Rastogi, Pramod
2009-08-15
Estimation of phase derivatives is an important task in many interferometric measurements in optical metrology. This Letter introduces a method based on discrete chirp-Fourier transform for accurate and direct estimation of phase derivatives, even in the presence of noise. The method is introduced in the context of the analysis of reconstructed interference fields in digital holographic interferometry. We present simulation and experimental results demonstrating the utility of the proposed method.
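A minimal sketch of the discrete chirp-Fourier transform, X(k, l) = sum_n x[n] * exp(-2*pi*j*(k*n + l*n^2)/N), together with a brute-force scan over chirp rates. The grid and test signal are illustrative, not the Letter's estimator:

```python
import numpy as np

def dcft_column(x, l):
    """One column of the discrete chirp-Fourier transform: dechirp by rate l,
    then take an ordinary DFT over k."""
    n = np.arange(len(x))
    return np.fft.fft(x * np.exp(-2j * np.pi * l * n**2 / len(x)))

def estimate_chirp(x, l_grid):
    """Scan candidate chirp rates; return the (k, l) of the strongest peak.
    For a quadratic-phase signal, the matched (k, l) concentrates all the
    energy into a single bin, which is what makes direct estimation of the
    phase derivative possible."""
    best = max((np.abs(dcft_column(x, l)).max(), l) for l in l_grid)
    l_hat = best[1]
    k_hat = int(np.argmax(np.abs(dcft_column(x, l_hat))))
    return k_hat, l_hat
```

At the matched bin the magnitude equals N, so the peak is easy to locate even with additive noise, consistent with the robustness claim in the abstract.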
Improving condition severity classification with an efficient active learning based framework
Nissim, Nir; Boland, Mary Regina; Tatonetti, Nicholas P.; Elovici, Yuval; Hripcsak, George; Shahar, Yuval; Moskovitch, Robert
2017-01-01
Classification of condition severity can be useful for discriminating among sets of conditions or phenotypes, for example when prioritizing patient care or for other healthcare purposes. Electronic Health Records (EHRs) represent a rich source of labeled information that can be harnessed for severity classification. The labeling of EHRs is expensive and in many cases requires employing professionals with a high level of expertise. In this study, we demonstrate the use of Active Learning (AL) techniques to decrease expert labeling efforts. We employ three AL methods and demonstrate their ability to reduce labeling efforts while effectively discriminating condition severity. We incorporate three AL methods into a new framework based on the original CAESAR (Classification Approach for Extracting Severity Automatically from Electronic Health Records) framework to create the Active Learning Enhancement framework (CAESAR-ALE). We applied CAESAR-ALE to a dataset containing 516 conditions of varying severity levels that were manually labeled by seven experts. Our dataset, called the “CAESAR dataset,” was created from the medical records of 1.9 million patients treated at Columbia University Medical Center (CUMC). All three AL methods decreased labelers’ efforts compared to the learning methods applied by the original CAESAR framework, in which the classifier was trained on the entire set of conditions; depending on the AL strategy used in the current study, the reduction ranged from 48% to 64%, which can result in significant savings in both time and money. As for the PPV (precision) measure, CAESAR-ALE achieved more than 13% absolute improvement in the predictive capabilities of the framework when classifying conditions as severe. These results demonstrate the potential of AL methods to decrease the labeling efforts of medical experts, while increasing accuracy given the same (or even a smaller) number of acquired conditions. 
We also demonstrated that the methods included in the CAESAR-ALE framework (Exploitation and Combination_XA) are more robust to the use of human labelers with different levels of professional expertise. PMID:27016383
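The pool-based selection step of an active learning loop like CAESAR-ALE's can be sketched with a deliberately simple nearest-centroid uncertainty measure; the classifier and margin criterion here are stand-ins for illustration, not the framework's actual Exploitation or Combination_XA methods:

```python
import numpy as np

def centroid_margin(x, centroid_a, centroid_b):
    """Uncertainty of a simple nearest-centroid classifier: a small margin
    between the two class distances marks an ambiguous, informative item."""
    da = np.linalg.norm(x - centroid_a)
    db = np.linalg.norm(x - centroid_b)
    return abs(da - db)

def select_for_labeling(pool, centroid_a, centroid_b, batch=1):
    """Pick the unlabeled items the current model is least certain about;
    these are sent to the human expert instead of labeling everything."""
    margins = [centroid_margin(x, centroid_a, centroid_b) for x in pool]
    return list(np.argsort(margins)[:batch])
```

Each acquired label updates the model (here, the centroids), and the loop repeats; the labeling-effort reductions reported above come from stopping well before the whole pool is labeled.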
An evaluation of total starch and starch gelatinization methodologies in pelleted animal feed.
Zhu, L; Jones, C; Guo, Q; Lewis, L; Stark, C R; Alavi, S
2016-04-01
The quantification of total starch content (TS) or degree of starch gelatinization (DG) in animal feed is always challenging because of the potential interference from other ingredients. In this study, the differences in TS or DG measurement in pelleted swine feed due to variations in analytical methodology were quantified. Pelleted swine feed was used to create 6 different diets manufactured with various processing conditions in a 2 × 3 factorial design (2 conditioning temperatures, 77 or 88°C, and 3 conditioning retention times, 15, 30, or 60 s). Samples at each processing stage (cold mash, hot mash, hot pelletized feed, and final cooled pelletized feed) were collected for each of the 6 treatments and analyzed for TS and DG. Two different methodologies were evaluated for TS determination (the AOAC International method 996.11 vs. the modified glucoamylase method) and DG determination (the modified glucoamylase method vs. differential scanning calorimetry [DSC]). For TS determination, the AOAC International method 996.11 measured lower TS values in cold pellets compared with the modified glucoamylase method. The AOAC International method resulted in lower TS in cold mash than cooled pelletized feed, whereas the modified glucoamylase method showed no significant differences in TS content before or after pelleting. For DG, the modified glucoamylase method demonstrated increased DG with each processing step. Furthermore, increasing the conditioning temperature and time resulted in a greater DG when evaluated by the modified glucoamylase method. However, results demonstrated that DSC is not suitable as a quantitative tool for determining DG in multicomponent animal feeds due to interferences from nonstarch transformations, such as protein denaturation.
Evaluation of Techniques for Measuring Microbial Hazards in Bathing Waters: A Comparative Study
Schang, Christelle; Henry, Rebekah; Kolotelo, Peter A.; Prosser, Toby; Crosbie, Nick; Grant, Trish; Cottam, Darren; O’Brien, Peter; Coutts, Scott; Deletic, Ana; McCarthy, David T.
2016-01-01
Recreational water quality is commonly monitored by means of culture based faecal indicator organism (FIOs) assays. However, these methods are costly and time-consuming; a serious disadvantage when combined with issues such as non-specificity and user bias. New culture and molecular methods have been developed to counter these drawbacks. This study compared industry-standard IDEXX methods (Colilert and Enterolert) with three alternative approaches: 1) TECTA™ system for E. coli and enterococci; 2) US EPA’s 1611 method (qPCR based enterococci enumeration); and 3) Next Generation Sequencing (NGS). Water samples (233) were collected from riverine, estuarine and marine environments over the 2014–2015 summer period and analysed by the four methods. The results demonstrated that E. coli and coliform densities, inferred by the IDEXX system, correlated strongly with the TECTA™ system. The TECTA™ system had further advantages in faster turnaround times (~12 hrs from sample receipt to result compared to 24 hrs); no staff time required for interpretation and less user bias (results are automatically calculated, compared to subjective colorimetric decisions). The US EPA Method 1611 qPCR method also showed significant correlation with the IDEXX enterococci method; but had significant disadvantages such as highly technical analysis and higher operational costs (330% of IDEXX). The NGS method demonstrated statistically significant correlations between IDEXX and the proportions of sequences belonging to FIOs, Enterobacteriaceae, and Enterococcaceae. While costs (3,000% of IDEXX) and analysis time (300% of IDEXX) were found to be significant drawbacks of NGS, rapid technological advances in this field will soon see it widely adopted. PMID:27213772
Lu, Alex Xijie; Moses, Alan M
2016-01-01
Despite the importance of characterizing genes that exhibit subcellular localization changes between conditions in proteome-wide imaging experiments, many recent studies still rely upon manual evaluation to assess the results of high-throughput imaging experiments. We describe and demonstrate an unsupervised k-nearest neighbours method for the detection of localization changes. Compared to previous classification-based supervised change detection methods, our method is much simpler and faster, and operates directly on the feature space to overcome limitations in needing to manually curate training sets that may not generalize well between screens. In addition, the output of our method is flexible in its utility, generating both a quantitatively ranked list of localization changes that permit user-defined cut-offs, and a vector for each gene describing feature-wise direction and magnitude of localization changes. We demonstrate that our method is effective at the detection of localization changes using the Δrpd3 perturbation in Saccharomyces cerevisiae, where we capture 71.4% of previously known changes within the top 10% of ranked genes, and find at least four new localization changes within the top 1% of ranked genes. The results of our analysis indicate that simple unsupervised methods may be able to identify localization changes in images without laborious manual image labelling steps.
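A simplified, distance-based reading of the unsupervised idea above: rank genes by how far their image-feature vectors move between conditions, and keep the per-feature shift as the direction vector. The published method's k-nearest-neighbour machinery is not reproduced in this sketch:

```python
import numpy as np

def localization_changes(wt, mut):
    """Rank genes by the shift of their feature vectors between conditions.
    wt, mut: (n_genes, n_features) arrays with rows aligned by gene.
    Returns a ranking (largest change first), a per-gene change score, and
    the per-feature shift vectors described in the abstract."""
    shift = mut - wt                         # direction and magnitude per feature
    score = np.linalg.norm(shift, axis=1)    # overall change per gene
    ranking = np.argsort(score)[::-1]        # quantitatively ranked list
    return ranking, score, shift
```

Because the scores form a continuous ranking, the user can apply any cut-off (e.g. the top 1% or 10% of genes used in the evaluation above) without retraining anything.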
Detection of food intake from swallowing sequences by supervised and unsupervised methods.
Lopez-Meyer, Paulo; Makeyev, Oleksandr; Schuckers, Stephanie; Melanson, Edward L; Neuman, Michael R; Sazonov, Edward
2010-08-01
Studies of food intake and ingestive behavior in free-living conditions most often rely on self-reporting-based methods that can be highly inaccurate. Methods of Monitoring of Ingestive Behavior (MIB) rely on objective measures derived from chewing and swallowing sequences and thus can be used for unbiased study of food intake with free-living conditions. Our previous study demonstrated accurate detection of food intake in simple models relying on observation of both chewing and swallowing. This article investigates methods that achieve comparable accuracy of food intake detection using only the time series of swallows and thus eliminating the need for the chewing sensor. The classification is performed for each individual swallow rather than for previously used time slices and thus will lead to higher accuracy in mass prediction models relying on counts of swallows. Performance of a group model based on a supervised method (SVM) is compared to performance of individual models based on an unsupervised method (K-means) with results indicating better performance of the unsupervised, self-adapting method. Overall, the results demonstrate that highly accurate detection of intake of foods with substantially different physical properties is possible by an unsupervised system that relies on the information provided by the swallowing alone.
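The unsupervised branch of the comparison above can be sketched with a minimal K-means on a scalar swallow feature; the choice of feature (e.g. inter-swallow interval) and the data below are hypothetical:

```python
import numpy as np

def kmeans_1d(x, k=2, iters=50, seed=0):
    """Minimal K-means for a scalar feature. Food-intake vs. no-intake
    swallows would be expected to fall into different clusters, which is the
    self-adapting property exploited by the individual models."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        # assign each swallow to the nearest cluster center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned swallows
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers
```

Because the clusters adapt to each subject's own data, no labeled training set is needed, which matches the reported advantage of the unsupervised models over the group SVM.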
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied on hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
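Standard empirical quantile mapping (the QMα baseline being extended above) can be sketched as follows; the parametric and shape-parameter extensions QMβ0 and QMβ1 are not reproduced:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: build the correction function from the
    historical model and observed quantiles, then map (possibly future)
    model values through it. Values beyond the calibration range are clamped
    to the end quantiles, which is exactly the 'new extremes' weakness the
    abstract discusses."""
    q = np.linspace(0.0, 1.0, 101)
    mq = np.quantile(model_hist, q)   # model quantiles (calibration period)
    oq = np.quantile(obs_hist, q)     # observed quantiles (calibration period)
    return np.interp(model_fut, mq, oq)
```

Because `np.interp` is flat outside the calibration quantile range, simulated values larger than any historical value all map to the same corrected value, illustrating why the tail of the correction function needs special treatment for extremes.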
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.
2014-10-01
We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by a linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. By its definition, the method is named the Harmonic Polynomial Cell (HPC) method. The accuracy and efficiency of the HPC method are demonstrated by studying analytical cases. Comparisons are made with some other existing boundary element based methods, e.g. the Quadratic Boundary Element Method (QBEM), the Fast Multipole Accelerated QBEM (FMA-QBEM) and a fourth order Finite Difference Method (FDM). To demonstrate the applications of the method, it is applied to some studies relevant for marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with the experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.
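The cell-local idea can be illustrated in 2D, where the harmonic polynomials are the real and imaginary parts of (x + iy)^m: fit the coefficients to known values on the cell boundary, then evaluate the representation inside the cell. The 2D reduction and the test function are our simplification of the paper's 3D method:

```python
import numpy as np

def harmonic_basis(x, y, order=3):
    """Columns are 2D harmonic polynomials: 1, Re(z^m), Im(z^m) for z = x+iy.
    Each column satisfies Laplace's equation exactly, so any fitted
    combination is automatically harmonic inside the cell."""
    z = x + 1j * y
    cols = [np.ones_like(x)]
    for m in range(1, order + 1):
        cols.extend([np.real(z**m), np.imag(z**m)])
    return np.column_stack(cols)

# A small "cell": points on a circle of radius 0.1, with boundary values
# taken from a known harmonic function u = exp(x) * cos(y).
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
bx, by = 0.1 * np.cos(theta), 0.1 * np.sin(theta)
u_exact = lambda x, y: np.exp(x) * np.cos(y)

# Least-squares fit of the cell's polynomial coefficients to boundary data,
# then evaluation at the cell center.
coeffs, *_ = np.linalg.lstsq(harmonic_basis(bx, by), u_exact(bx, by), rcond=None)
u_center = harmonic_basis(np.array([0.0]), np.array([0.0])) @ coeffs
```

Because the basis functions solve the Laplace equation exactly, the interior error is governed only by how well the truncated polynomial set represents the local solution, which is the source of the method's accuracy.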
Applications of He's semi-inverse method, ITEM and GGM to the Davey-Stewartson equation
NASA Astrophysics Data System (ADS)
Zinati, Reza Farshbaf; Manafian, Jalil
2017-04-01
We investigate the Davey-Stewartson (DS) equation and find travelling wave solutions. In this paper, we demonstrate the effectiveness of three analytical methods, namely, He's semi-inverse variational principle method (SIVPM), the improved tan(φ/2)-expansion method (ITEM) and the generalized G'/G-expansion method (GGM), for seeking exact solutions of the DS equation. These methods are direct, concise and simple to implement compared to other existing methods. Exact solutions of four types have been obtained. The results demonstrate that the aforementioned methods are more efficient than the Ansatz method applied by Mirzazadeh (2015). Abundant exact travelling wave solutions including solitons, kink, periodic and rational solutions have been found by the improved tan(φ/2)-expansion and generalized G'/G-expansion methods. By He's semi-inverse variational principle we have obtained dark and bright soliton wave solutions, which have profound implications for physical understanding. These solutions might play an important role in engineering and physics. Moreover, using Matlab, graphical simulations were performed to visualize the behavior of these solutions.
Near Infrared Imaging as a Diagnostic Tool for Detecting Enamel Demineralization: An in vivo Study
NASA Astrophysics Data System (ADS)
Lucas, Seth Adam
Background and Objectives: For decades there has been an effort to develop alternative optical methods of imaging dental decay utilizing non-ionizing radiation methods. The purpose of this in vivo study was to demonstrate whether NIR can be used as a diagnostic tool to evaluate dental caries and to compare the sensitivity and specificity of this method with that of conventional methods, including bitewing x-rays and visual inspection. Materials and Methods: 31 test subjects (n=31) from the UCSF orthodontic clinic undergoing orthodontic treatment with planned premolar extractions were recruited. Calibrated examiners performed caries detection examinations using conventional methods: bitewing radiographs and visual inspection. These findings were compared with the results from NIR examinations: transillumination and reflectance. To confirm the results found with the two different detection methods, a gold standard was used: after teeth were extracted, polarized light microscopy and transverse microradiography were performed. Results: A total of 87 premolars were used in the study. NIR identified the occlusal lesions with a sensitivity of 71% and a specificity of 77%, whereas the visual examination had a sensitivity of only 40% and a specificity of 39%. For interproximal lesions halfway to the DEJ, specificity remained constant, but sensitivity improved to 100% for NIR and 75% for x-rays. Conclusions: The results of this preliminary study demonstrate that NIR is just as effective at detecting enamel interproximal lesions as standard dental x-rays. NIR was more effective at detecting occlusal lesions than visual examination alone. NIR shows promise as an alternative diagnostic tool to the conventional methods of x-rays and visual examination and provides a non-ionizing radiation technique.
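The sensitivity and specificity figures above follow the standard definitions against the gold standard; a small helper makes them explicit (the counts in the usage check are illustrative, not the study's raw data):

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Standard diagnostic-test metrics computed against a gold standard
    (here, polarized light microscopy and transverse microradiography)."""
    sensitivity = tp / (tp + fn)   # fraction of true lesions detected
    specificity = tn / (tn + fp)   # fraction of sound surfaces correctly cleared
    return sensitivity, specificity
```

A method can trade one metric against the other, which is why the abstract reports both per lesion type for NIR, x-rays, and visual inspection.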
A coupled ALE-AMR method for shock hydrodynamics
Waltz, J.; Bakosi, J.
2018-03-05
We present a numerical method combining adaptive mesh refinement (AMR) with arbitrary Lagrangian-Eulerian (ALE) mesh motion for the simulation of shock hydrodynamics on unstructured grids. The primary goal of the coupled method is to use AMR to reduce numerical error in ALE simulations at reduced computational expense relative to uniform fine mesh calculations, in the same manner that AMR has been used in Eulerian simulations. We also identify deficiencies with ALE methods that AMR is able to mitigate, and discuss the unique coupling challenges. The coupled method is demonstrated using three-dimensional unstructured meshes of up to O(10^7) tetrahedral cells. Convergence of ALE-AMR solutions towards both uniform fine mesh ALE results and analytic solutions is demonstrated. Speed-ups of 5-10× for a given level of error are observed relative to uniform fine mesh calculations.
An automatic rat brain extraction method based on a deformable surface model.
Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M
2013-08-15
The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.
Aeroacoustics Computation for Nearly Fully Expanded Supersonic Jets Using the CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Hultgren, Lennart S.; Wang, Xiao Y.; Chang, Sin-Chung; Jorgenson, Philip C. E.
2000-01-01
In this paper, the space-time conservation element solution element (CE/SE) method is tested in the classical axisymmetric jet instability problem, rendering good agreement with the linear theory. The CE/SE method is then applied to numerical simulations of several nearly fully expanded axisymmetric jet flows and their noise fields and qualitative agreement with available experimental and theoretical results is demonstrated.
NASA Astrophysics Data System (ADS)
Wu, Zhisheng; Tao, Ou; Cheng, Wei; Yu, Lu; Shi, Xinyuan; Qiao, Yanjiang
2012-02-01
This study demonstrated that near-infrared chemical imaging (NIR-CI) is a promising technology for visualizing the spatial distribution and homogeneity of Compound Liquorice Tablets. The starch distribution (and, indirectly, that of the plant extract) could be spatially determined using the basic analysis of correlation between analytes (BACRA) method; the correlation coefficients between the starch spectrum and the spectrum of each sample were greater than 0.95. Based on the accurate determination of the starch distribution, a histogram-based method was proposed to assess distribution homogeneity. The result demonstrated that the starch distribution in sample 3 was relatively heterogeneous according to four statistical parameters. Furthermore, agglomerate domains in each tablet were detected using score image layers from principal component analysis (PCA). Finally, a novel method named Standard Deviation of Macropixel Texture (SDMT) was introduced to detect agglomerates and heterogeneity from binary images: each binary image was divided into macropixels of different side lengths, the number of zero values in each macropixel was counted, and the standard deviation of these counts was calculated. A curve was then fitted to the relationship between the standard deviation and the macropixel side length. The result demonstrated inter-tablet heterogeneity of both the starch and total-compound distributions; at the same time, the similarity of the starch distribution and the inconsistency of the total-compound distribution within tablets were indicated by the slope and intercept parameters of the fitted curve.
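The macropixel counting step behind SDMT can be sketched as follows. This is a minimal illustration of the idea as described in the abstract, not the authors' implementation; the function name and tiling choices are assumptions.

```python
import numpy as np

def sdmt(binary_img, size):
    """Sketch of the Standard Deviation of Macropixel Texture idea:
    tile the binary image into size x size macropixels, count the
    zero-valued pixels in each tile, and return the standard deviation
    of those counts (larger values suggest a more heterogeneous image)."""
    h, w = binary_img.shape
    counts = [np.sum(binary_img[i:i + size, j:j + size] == 0)
              for i in range(0, h - size + 1, size)
              for j in range(0, w - size + 1, size)]
    return float(np.std(counts))
```

A perfectly uniform binary image yields a standard deviation of zero for any macropixel size, while clustered zeros (agglomerates) concentrate counts in a few tiles and drive the value up.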
Historical land cover changes in the Great Lakes Region
Cole, K.L.; Davis, M.B.; Stearns, F.; Guntenspergen, G.; Walker, K.; Sisk, Thomas D.
1999-01-01
Two different methods of reconstructing historical vegetation change, drawing on General Land Office (GLO) surveys and fossil pollen deposits, are demonstrated by using data from the Great Lakes region. Both types of data are incorporated into landscape-scale analyses and presented through geographic information systems. Results from the two methods reinforce each other and allow reconstructions of past landscapes at different time scales. Changes to forests of the Great Lakes region during the last 150 years were far greater than the changes recorded over the preceding 1,000 years. Over the last 150 years, the total amount of forested land in the Great Lakes region declined by over 40%, and much of the remaining forest was converted to early successional forest types as a result of extensive logging. These results demonstrate the utility of using GLO survey data in conjunction with other data sources to reconstruct a generalized 'presettlement' condition and assess changes in land cover.
A fourth-order box method for solving the boundary layer equations
NASA Technical Reports Server (NTRS)
Wornom, S. F.
1977-01-01
A fourth order box method for calculating high accuracy numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations is presented. The method is the natural extension of the second order Keller Box scheme to fourth order and is demonstrated with application to the incompressible, laminar and turbulent boundary layer equations. Numerical results for high accuracy test cases show the method to be significantly faster than other higher order and second order methods.
ERIC Educational Resources Information Center
Olsson, Ulf Henning; Foss, Tron; Troye, Sigurd V.; Howell, Roy D.
2000-01-01
Used simulation to demonstrate how the choice of estimation method affects indexes of fit and parameter bias for different sample sizes when nested models vary in terms of specification error and the data demonstrate different levels of kurtosis. Discusses results for maximum likelihood (ML), generalized least squares (GLS), and weighted least…
K, Anbarasi; K, Kasim Mohamed; Vijayaraghavan, Phagalvarthy; Kandaswamy, Deivanayagam
2016-12-01
To design and implement flipped clinical training for undergraduate dental students in removable complete denture treatment and to assess its effectiveness by comparing the assessment results of students trained by flipped and traditional methods. Flipped training was designed by shifting the learning from the clinic to the learning center (phase I) while preserving the practice in the clinic (phase II). In phase I, a student-faculty interactive session was arranged to recap prior knowledge. This was followed by an audio-synchronized video demonstration of the procedure, shown in a repeatable way, and a subsequent display of possible errors that may occur in treatment, with guidelines to overcome such errors. In phase II, a live demonstration of the procedure was given, and students were asked to treat three patients under an instructor's supervision. The summative assessment was conducted by applying the same checklist criteria and rubric scoring used for the traditional method. Assessment results of three batches of students trained by the flipped method (study group) and three traditionally trained previous batches (control group) were compared by chi-square test. Comparing the numbers of traditionally trained students who prepared acceptable dentures (scores 2 and 3) and unacceptable dentures (score 1) with those of the flipped-trained students revealed that the number of students who demonstrated competency by preparing acceptable dentures was higher for flipped training (χ² = 30.996, p < 0.001). The results reveal the superiority of flipped training in enhancing students' competency, and it is hence recommended for training in various clinical procedures.
Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David
2015-01-01
New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.
Lehotay, Steven J; Han, Lijun; Sapozhnikova, Yelena
2016-01-01
This study demonstrated the application of an automated high-throughput mini-cartridge solid-phase extraction (mini-SPE) cleanup for the rapid low-pressure gas chromatography-tandem mass spectrometry (LPGC-MS/MS) analysis of pesticides and environmental contaminants in QuEChERS extracts of foods. Cleanup efficiencies and breakthrough volumes using different mini-SPE sorbents were compared using avocado, salmon, pork loin, and kale as representative matrices. The optimum extract load volume was 300 µL for the 45 mg mini-cartridges containing 20/12/12/1 (w/w/w/w) anh. MgSO₄/PSA (primary secondary amine)/C₁₈/CarbonX sorbents used in the final method. In method validation to demonstrate high-throughput capabilities and performance, 230 spiked extracts of 10 different foods (apple, kiwi, carrot, kale, orange, black olive, wheat grain, dried basil, pork, and salmon) underwent automated mini-SPE cleanup and analysis over the course of 5 days. In all, 325 analyses for 54 pesticides and 43 environmental contaminants (3 analyzed together) were conducted using the 10 min LPGC-MS/MS method without changing the liner or retuning the instrument. Injection of merely 1 mg equivalent of sample achieved limits of quantification <5 ng g⁻¹. With the use of internal standards, method validation results showed that 91 of the 94 analytes (including pairs) achieved satisfactory results (70-120% recovery and RSD ≤ 25%) in the 10 tested food matrices (n = 160). Matrix effects were typically less than ±20%, mainly due to the use of analyte protectants, and minimal human review of software data processing was needed due to summation-function integration of analyte peaks. This study demonstrated that the automated mini-SPE + LPGC-MS/MS method yielded accurate results in rugged, high-throughput operations with minimal labor and data review.
Incorrect likelihood methods were used to infer scaling laws of marine predator search behaviour.
Edwards, Andrew M; Freeman, Mervyn P; Breed, Greg A; Jonsen, Ian D
2012-01-01
Ecologists are collecting extensive data concerning movements of animals in marine ecosystems. Such data need to be analysed with valid statistical methods to yield meaningful conclusions. We demonstrate methodological issues in two recent studies that reached similar conclusions concerning movements of marine animals (Nature 451:1098; Science 332:1551). The first study analysed vertical movement data to conclude that diverse marine predators (Atlantic cod, basking sharks, bigeye tuna, leatherback turtles and Magellanic penguins) exhibited "Lévy-walk-like behaviour", close to a hypothesised optimal foraging strategy. By reproducing the original results for the bigeye tuna data, we show that the likelihood of tested models was calculated from residuals of regression fits (an incorrect method), rather than from the likelihood equations of the actual probability distributions being tested. This resulted in erroneous Akaike Information Criteria, and the testing of models that do not correspond to valid probability distributions. We demonstrate how this led to overwhelming support for a model that has no biological justification and that is statistically spurious because its probability density function goes negative. Re-analysis of the bigeye tuna data, using standard likelihood methods, overturns the original result and conclusion for that data set. The second study observed Lévy walk movement patterns by mussels. We demonstrate several issues concerning the likelihood calculations (including the aforementioned residuals issue). Re-analysis of the data rejects the original Lévy walk conclusion. We consequently question the claimed existence of scaling laws of the search behaviour of marine predators and mussels, since such conclusions were reached using incorrect methods. 
We discourage the suggested potential use of "Lévy-like walks" when modelling consequences of fishing and climate change, and caution that any resulting advice to managers of marine ecosystems would be problematic. For reproducibility and future work we provide R source code for all calculations.
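The core statistical point above, that AIC must be computed from the log-likelihood of the actual probability distribution rather than from regression residuals, can be illustrated with a minimal sketch. The synthetic data and the exponential model here are assumptions for illustration, not the paper's data or full model set.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=1000)   # synthetic "move lengths"

# Correct method: log-likelihood from the model's probability density.
lam = 1.0 / data.mean()                        # MLE of the exponential rate
loglik = np.sum(np.log(lam) - lam * data)      # sum of log pdf values
aic = 2 * 1 - 2 * loglik                       # k = 1 fitted parameter
```

AIC values computed this way are comparable across candidate distributions fitted to the same data; residuals of a regression fit to binned or rank-ordered data are not a substitute for this likelihood, which is the error the re-analysis identifies.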
Preshaping command inputs to reduce telerobotic system oscillations
NASA Technical Reports Server (NTRS)
Singer, Neil C.; Seering, Warren P.
1989-01-01
The results of using a new technique for shaping inputs to a model of the space shuttle Remote Manipulator System (RMS) are presented. The shaped inputs move the system to the same location that was originally commanded, but the oscillations of the machine are considerably reduced. An overview of the new shaping method and a description of the RMS model are provided. The problem of slow joint servo rates on the RMS is accommodated with an extension of the shaping method. Results and sample data are presented for both joint and three-dimensional Cartesian motions. The results demonstrate that the new shaping method performs well on large telerobotic systems which exhibit significant structural vibration, and that it also yields considerable energy savings during operation of the RMS manipulator.
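The shaping idea, convolving the command with a short impulse sequence tuned to the structure's vibration mode, can be sketched with a generic two-impulse zero-vibration (ZV) shaper. The mode frequency and damping used here are assumed example values, not RMS parameters.

```python
import numpy as np

def zv_shaper(freq_hz, zeta):
    """Two-impulse zero-vibration (ZV) shaper for a mode with natural
    frequency freq_hz [Hz] and damping ratio zeta."""
    wn = 2.0 * np.pi * freq_hz
    wd = wn * np.sqrt(1.0 - zeta ** 2)        # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    amps = np.array([1.0, K]) / (1.0 + K)     # impulse amplitudes sum to 1
    times = np.array([0.0, np.pi / wd])       # half a damped period apart
    return times, amps

# The shaped command is the original command convolved with these
# impulses: same final setpoint, with the mode's residual vibration
# cancelled for a perfectly known mode.
times, amps = zv_shaper(0.5, 0.05)
```

Because the amplitudes sum to one, the shaped command still reaches the originally commanded location, which is the property the abstract emphasizes.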
TECHNOLOGIES FOR MONITORING AND ...
A demonstration of technologies for determining the presence of dioxin and dioxin-like compounds in soil and sediment was conducted under EPA's Superfund Innovative Technology Evaluation Program in Saginaw, Michigan in April 2004. This report describes the evaluation of Wako Pure Chemical Industries' Dioxin ELISA Kit. The kit is an immunoassay technique that reports toxicity equivalents (TEQ) of dioxins/furans. The sample units are pg/g 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) equivalents (EQ). The technology results were compared to high resolution mass spectrometry (HRMS) TEQ results generated using EPA Method 1613B. The Wako results were biased both positively and negatively relative to HRMS results. The technology's estimated method detection limit was 83-201 pg/g 2,3,7,8-TCDD EQ, but this should be considered a rough estimate. Results from this demonstration suggest that the Wako kit could be an effective screening tool for determining sample results above and below 20 pg/g TEQ, and even more effective as a screen for samples above and below 50 pg/g TEQ, particularly considering that the cost to analyze the 209 demonstration samples was significantly less than that of the reference laboratory ($150,294 vs. $213,580), and all samples were analyzed on-site in 9 days (compared to 8 months for the reference laboratory). The objective of this program is to promote the acceptance and use of innovative field technologies by providing well-documented per
A developed nearly analytic discrete method for forward modeling in the frequency domain
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai
2018-02-01
High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inverse results are obtained.
Overview of BioCreative II gene mention recognition.
Smith, Larry; Tanabe, Lorraine K; Ando, Rie Johnson nee; Kuo, Cheng-Ju; Chung, I-Fang; Hsu, Chun-Nan; Lin, Yu-Shi; Klinger, Roman; Friedrich, Christoph M; Ganchev, Kuzman; Torii, Manabu; Liu, Hongfang; Haddow, Barry; Struble, Craig A; Povinelli, Richard J; Vlachos, Andreas; Baumgartner, William A; Hunter, Lawrence; Carpenter, Bob; Tsai, Richard Tzong-Han; Dai, Hong-Jie; Liu, Feng; Chen, Yifei; Sun, Chengjie; Katrenko, Sophia; Adriaans, Pieter; Blaschke, Christian; Torres, Rafael; Neves, Mariana; Nakov, Preslav; Divoli, Anna; Maña-López, Manuel; Mata, Jacinto; Wilbur, W John
2008-01-01
Nineteen teams presented results for the Gene Mention Task at the BioCreative II Workshop. In this task participants designed systems to identify substrings in sentences corresponding to gene name mentions. A variety of different methods were used, and the results varied, with a highest achieved F1 score of 0.8721. Here we present brief descriptions of all the methods used and a statistical analysis of the results. We also demonstrate that, by combining the results from all submissions, an F score of 0.9066 is feasible, and furthermore that the best result makes use of the lowest scoring submissions.
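The F scores quoted above are the harmonic mean of precision and recall over mention-level matches; a minimal sketch of the computation (the counts below are illustrative, not the workshop data):

```python
def f1_score(tp, fp, fn):
    """F1 from true positives, false positives, and false negatives
    of mention-level matches."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)
```

Combining systems, for example by voting on candidate mention spans, can raise true positives while suppressing false positives, which is how pooling all submissions can exceed the best single system's F score.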
NASA Astrophysics Data System (ADS)
Nemati, Maedeh; Shateri Najaf Abady, Ali Reza; Toghraie, Davood; Karimipour, Arash
2018-01-01
The incorporation of different equations of state into a single-component multiphase lattice Boltzmann model is considered in this paper. The original pseudopotential model is first detailed, and several cubic equations of state, the Redlich-Kwong, Redlich-Kwong-Soave, and Peng-Robinson, are then incorporated into the lattice Boltzmann model. A comparison of the numerical simulation results on the basis of density ratios and spurious currents is used to present the details of phase separation in these non-ideal single-component systems. The paper demonstrates that the scheme for the inter-particle interaction force term, as well as the method of incorporating the force term, matters for achieving more accurate and stable results. Among the available force-incorporation methods, the velocity-shifting method is demonstrated to give accurate and stable results. The Kupershtokh scheme also makes it possible to achieve large density ratios (up to 10^4) and to reproduce the coexistence curve with high accuracy. Significant reduction of the spurious currents at the vapor-liquid interface is another observation. The Redlich-Kwong-Soave and Peng-Robinson EOSs yielded high density ratios and reduced spurious currents, in closer agreement with the Maxwell construction results.
ERIC Educational Resources Information Center
Boden, Andrea; Archwamety, Teara; McFarland, Max
This review used meta-analytic techniques to integrate findings from 30 independent studies that compared programmed instruction to conventional methods of instruction at the secondary level. The meta-analysis demonstrated that programmed instruction resulted in higher achievement when compared to conventional methods of instruction (average…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogdanov, Yu. I., E-mail: bogdanov-yurii@inbox.ru; Avosopyants, G. V.; Belinskii, L. V.
We describe a new method for reconstructing the quantum state of the electromagnetic field from the results of mutually complementary optical quadrature measurements. This method is based on the root approach, and displaced squeezed Fock states are used as the basis. Theoretical analysis and numerical experiments demonstrate the considerable advantage of the developed tools over those described in the literature.
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Video-based noncooperative iris image segmentation.
Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig
2011-02-01
In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
Quality measures in applications of image restoration.
Kriete, A; Naim, M; Schafer, L
2001-01-01
We describe a new method for the estimation of image quality in image restoration applications. We demonstrate this technique on a simulated data set of fluorescent beads, in comparison with restoration by three different deconvolution methods. Both the number of iterations and a regularisation factor are varied to enforce changes in the resulting image quality. First, the data sets are directly compared by an accuracy measure. These values serve to validate the image quality descriptor, which is developed on the basis of optical information theory. This most general measure takes into account the spectral energies and the noise, weighted in a logarithmic fashion. It is demonstrated that this method is particularly helpful as a user-oriented method to control the output of iterative image restorations and to eliminate the guesswork in choosing a suitable number of iterations.
Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne
2016-01-05
In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodology work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes including the multivariate analysis of variance (MANOVA), the principal component analysis (PCA), generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling for the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long-tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as the substance abuse disorders.
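The classical Fisher combination test referred to above is easily stated; a minimal sketch of the standard formulation, which assumes the p-values are independent (exactly the assumption the proposed method relaxes):

```python
import numpy as np
from scipy import stats

def fisher_combine(pvals):
    """Classical Fisher combination: X = -2 * sum(ln p_i) follows a
    chi-square distribution with 2k degrees of freedom when the k
    p-values are independent and their null hypotheses are true."""
    pvals = np.asarray(pvals, dtype=float)
    statistic = -2.0 * np.sum(np.log(pvals))
    combined_p = stats.chi2.sf(statistic, df=2 * len(pvals))
    return statistic, combined_p
```

When phenotypes (and hence their per-trait p-values) are correlated, the chi-square reference distribution is wrong and the type I error rate is not controlled, which motivates the dependence-adjusted method the study proposes.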
Data accuracy assessment using enterprise architecture
NASA Astrophysics Data System (ADS)
Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias
2011-02-01
Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.
How-To-Do-It: Plant Regeneration.
ERIC Educational Resources Information Center
Pietraface, William J.
1988-01-01
Describes a procedure for the growth of tobacco plants in flasks. Demonstrates plant tissue culture manipulation, totipotency, and plant regeneration in approximately 12 weeks. Discusses methods, materials, and expected results. (CW)
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
NASA Astrophysics Data System (ADS)
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphic processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field model.
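The first- and second-order extrapolations referred to above are Taylor expansions of a thermodynamic quantity about a reference state; the generic trade-off can be sketched with an illustrative scalar function (not the actual CALPHAD driving-force expressions):

```python
import math

def extrapolate(f, df, d2f, x0, dx, order=2):
    """Taylor extrapolation of f at x0 + dx from values stored at x0.
    order=1 uses f + f'*dx; order=2 adds 0.5*f''*dx^2, trading a little
    extra stored curvature information for much higher accuracy between
    expensive thermodynamic-database evaluations."""
    est = f(x0) + df(x0) * dx
    if order == 2:
        est += 0.5 * d2f(x0) * dx ** 2
    return est

# Example: extrapolating exp(x) from x0 = 0 to x0 + 0.1
# (exp is its own first and second derivative).
e1 = extrapolate(math.exp, math.exp, math.exp, 0.0, 0.1, order=1)
e2 = extrapolate(math.exp, math.exp, math.exp, 0.0, 0.1, order=2)
```

The second-order estimate is markedly closer to the true value for the same reference data, which mirrors the precision gain the DNAD-style second-order scheme reports over the first-order one.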
Simultaneous optimization method for absorption spectroscopy postprocessing.
Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T
2015-05-10
A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.
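The simultaneous approach fits all spectral parameters in one optimization rather than in sequential steps. A minimal sketch with two overlapping Gaussian features follows; the synthetic data and Gaussian line shapes are assumptions for illustration, whereas the actual method fits congested H2O absorption spectra with a full spectroscopic model.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lines(x, a1, c1, w1, a2, c2, w2):
    """Two overlapping Gaussian absorption features."""
    return (a1 * np.exp(-((x - c1) / w1) ** 2)
            + a2 * np.exp(-((x - c2) / w2) ** 2))

x = np.linspace(0.0, 10.0, 400)
true = (1.0, 4.0, 0.8, 0.6, 5.5, 0.5)
rng = np.random.default_rng(1)
y = two_lines(x, *true) + 0.01 * rng.standard_normal(x.size)

# All six parameters are optimized simultaneously, so the overlap
# between the congested features is resolved by the fit itself rather
# than by manual step-wise baseline and line separation.
popt, _ = curve_fit(two_lines, x, y, p0=(0.9, 3.8, 1.0, 0.5, 5.7, 0.6))
```

Because the overlap is handled inside one objective function, errors from an earlier step cannot propagate into later ones, which is one reason a simultaneous fit can be more accurate and less user-dependent than step-wise postprocessing.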
NASA Astrophysics Data System (ADS)
Fan, Tiantian; Yu, Hongbin
2018-03-01
A novel shape-from-focus method combining a 3D steerable filter, with improved performance on textureless regions, is proposed in this paper. Unlike conventional spatial methods, which estimate the depth map by searching for the maximum edge response, the proposed method takes both the edge response and the axial imaging blur into consideration. As a result, more robust and accurate identification of the focused location can be achieved, especially when treating textureless objects. Improved performance in depth measurement is demonstrated in both simulation and experiment.
Nkouawa, Agathe; Sako, Yasuhito; Li, Tiaoying; Chen, Xingwang; Nakao, Minoru; Yanagida, Tetsuya; Okamoto, Munehiro; Giraudoux, Patrick; Raoul, Francis; Nakaya, Kazuhiro; Xiao, Ning; Qiu, Jiamin; Qiu, Dongchuan; Craig, Philip S; Ito, Akira
2012-12-01
In this study, we applied a loop-mediated isothermal amplification (LAMP) method for identification of human Taenia tapeworms in Tibetan communities in Sichuan, China. Out of 51 proglottids recovered from 35 carriers, 9, 1, and 41 samples were identified as Taenia solium, Taenia asiatica, and Taenia saginata, respectively. The same results were obtained afterwards in the laboratory, except for one sample. These results demonstrated that the LAMP method enables rapid identification of parasites in field surveys, suggesting that it can contribute to the control of Taenia infections in endemic areas. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Shape design sensitivity analysis and optimal design of structural systems
NASA Technical Reports Server (NTRS)
Choi, Kyung K.
1987-01-01
The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomsen, K.O.; Richardson, C.B.; Valder, K.M.
1996-12-31
Millions of acres of US government property are contaminated with unexploded ordnance (UXO) as a result of weapons system testing and troop training activities conducted over the past century at Department of Defense (DoD) sites. Recent DoD downsizing has resulted in the closing of many military bases, many of which are contaminated with UXO. One unexpected result of DoD's downsizing is the attention focused on the unique problems associated with UXO remediation at these closed military bases. The U.S. Army Environmental Center (USAEC) is the lead DoD agency for UXO clearance technology demonstrations, evaluation, and technology transfer. USAEC directed the Naval Explosive Ordnance Disposal Technology Division (NAVEODTECHDIV) to serve as the technical lead for the advanced technology demonstration (ATD) program. In 1994, USAEC and NAVEODTECHDIV created controlled test facilities at the U.S. Army Jefferson Proving Ground (JPG) in Madison, Indiana, to demonstrate and evaluate commercial UXO clearance systems and technologies. Phase I controlled site demonstrations were conducted during the summer of 1994. These demonstrations were followed by the Phase II controlled site demonstrations at JPG. This paper presents the results of the Phase II controlled site demonstrations. The overall performance of the demonstrators is presented along with the operational characteristics and limitations of the various systems and technologies evaluated. Individual demonstrator performance statistics are evaluated by sensor type and sensor transport method.
Ceol, M; Forino, M; Gambaro, G; Sauer, U; Schleicher, E D; D'Angelo, A; Anglani, F
2001-01-01
Gene expression can be examined with different techniques including ribonuclease protection assay (RPA), in situ hybridisation (ISH), and quantitative reverse transcription-polymerase chain reaction (RT/PCR). These methods differ considerably in their sensitivity and precision in detecting and quantifying low-abundance mRNA. Although there is evidence that RT/PCR can be performed in a quantitative manner, the quantitative capacity of this method is generally underestimated. To demonstrate that the comparative kinetic RT/PCR strategy, which uses a housekeeping gene as an internal standard, is a quantitative method for detecting significant differences in mRNA levels between samples, the inhibitory effect of heparin on phorbol 12-myristate 13-acetate (PMA)-induced TGF-beta1 mRNA expression was evaluated by RT/PCR and by RPA, the standard method of mRNA quantification, and the results were compared. The reproducibility of RT/PCR amplification was calculated by comparing the quantities of G3PDH and TGF-beta1 PCR products, generated during the exponential phases, estimated from two different RT/PCR runs (G3PDH, r = 0.968, P = 0.0000; TGF-beta1, r = 0.966, P = 0.0000). The quantitative capacity of comparative kinetic RT/PCR was demonstrated by comparing the results obtained from RPA and RT/PCR using linear regression analysis. Starting from the same RNA extraction, but using only 1% of the RNA for RT/PCR compared to RPA, a significant correlation was observed (r = 0.984, P = 0.0004). Moreover, morphometric analysis of the ISH signal was applied for semi-quantitative evaluation of the expression and localisation of TGF-beta1 mRNA in the entire cell population. Our results demonstrate the close similarity of the RT/PCR and RPA methods in giving quantitative information on mRNA expression and indicate that comparative kinetic RT/PCR can be adopted as a reliable quantitative method of mRNA analysis. Copyright 2001 Wiley-Liss, Inc.
Abramyan, Tigran M.; Hyde-Volpe, David L.; Stuart, Steven J.; Latour, Robert A.
2017-01-01
The use of standard molecular dynamics simulation methods to predict the interactions of a protein with a material surface has the inherent limitations of being unable to determine the most likely conformations and orientations of the adsorbed protein on the surface, or the level of convergence attained by the simulation. In addition, standard mixing rules are typically applied to combine the nonbonded force field parameters of the solution and solid phases of the system to represent interfacial behavior, without validation. As a means to circumvent these problems, the authors demonstrate the application of an efficient advanced sampling method (TIGER2A) for simulating the adsorption of hen egg-white lysozyme on a crystalline (110) high-density polyethylene surface plane. Simulations are conducted to generate a Boltzmann-weighted ensemble of sampled states using force field parameters that were validated to represent interfacial behavior for this system. The resulting ensembles of sampled states were then analyzed using an in-house-developed cluster analysis method to predict the most probable orientations and conformations of the protein on the surface based on the amount of sampling performed, from which free energy differences between the adsorbed states could be calculated. In addition, by conducting two independent sets of TIGER2A simulations combined with cluster analyses, the authors demonstrate a method to estimate the degree of convergence achieved for a given amount of sampling. The results demonstrate that these methods enable the most probable orientations and conformations of an adsorbed protein to be predicted, and that the validated interfacial force field parameter set provides closer agreement with available experimental results than standard CHARMM force field parameterization for representing molecular behavior at the interface. PMID:28514864
Unuvar, Umit; Yilmaz, Deniz; Ozyildirim, Ilker; Dokudan, Erenc Y; Korkmaz, Canan; Doğanoğlu, Senem; Kutlu, Levent; Fincanci, Sebnem Korur
2017-01-01
Turkey experienced a wave of demonstrations in the summer of 2013, known as the Gezi Park demonstrations. Between 31 May and 30 August, 297 people who had been subjected to trauma by several methods of demonstration control and by Riot Control Agents applied to the Human Rights Foundation of Turkey Rehabilitation Centers to receive treatment/rehabilitation and/or documentation. Of these, 296 patients were included in the study; one 5-year-old child was excluded. Of the 296 patients, 175 were male, 120 were female, and one was a transgender individual. The Istanbul center received the highest number of applications, with 216 patients. The mean age of applicants was 33.85 years, with a range of 15-71 years. While 268 applicants (91%) stated that they had been exposed to Riot Control Agents, 62 patients suffered chemical exposure alone with no other traumatic injuries, whereas 234 patients suffered at least one blunt trauma injury. Blunt trauma injuries were due to being shot by gas canisters in 127 patients (43%) and by plastic bullets in 31 patients (10%); 59 patients (20%) were severely beaten, and 30 patients (10%) were injured by pressurized cold water ejected by water cannons. Thirteen patients (4.4%) suffered injuries that caused loss of vision or of an eye. Psychiatric evaluations were carried out for 117 patients; 43% of them were diagnosed with Acute Stress Disorder, followed by Post-Traumatic Stress Disorder and Major Depressive Disorder. This study comprises the medical evaluation of injuries allegedly sustained during the Gezi Park demonstrations in 2013 as a result of several methods of demonstration control and/or exposure to Riot Control Agents. The aim is to discuss the different types of injuries due to those methods and the health consequences of Riot Control Agents. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
A demonstration of the antimicrobial effectiveness of various copper surfaces
2013-01-01
Background Bacterial contamination on touch surfaces results in an increased risk of infection. In the last few decades, work has been done on the antimicrobial properties of copper and its alloys against a range of micro-organisms threatening public health in food processing, healthcare and air conditioning applications; however, an optimum method of copper surface deposition and mass structure has not been identified. Results A proof-of-concept study of the disinfection effectiveness of three copper surfaces was performed. The surfaces were produced by the deposition of copper using three methods of thermal spray, namely, plasma spray, wire arc spray and cold spray. The surfaces were then inoculated with meticillin-resistant Staphylococcus aureus (MRSA). After a two-hour exposure to the surfaces, the surviving MRSA were assayed and the results compared. The differences in the copper depositions produced by the three thermal spray methods were examined in order to explain the mechanism behind the observed differences in MRSA killing efficiencies. The cold spray deposition method was significantly more effective than the other methods. It was determined that work hardening caused by the high-velocity particle impacts of the cold spray technique produces a copper microstructure that enhances ionic diffusion, and copper ions are principally responsible for antimicrobial activity. Conclusions This test showed significant microbiologic differences between coatings produced by different spray techniques and demonstrates the importance of the copper application technique. The cold spray technique shows superior antimicrobial effectiveness owing to the high impact velocity imparted to the sprayed particles, which results in high dislocation density and high ionic diffusivity. PMID:23537176
Quantitative phase microscopy via optimized inversion of the phase optical transfer function.
Jenkins, Micah H; Gaylord, Thomas K
2015-10-01
Although the field of quantitative phase imaging (QPI) has wide-ranging biomedical applicability, many QPI methods are not well-suited for such applications due to their reliance on coherent illumination and specialized hardware. By contrast, methods utilizing partially coherent illumination have the potential to promote the widespread adoption of QPI due to their compatibility with microscopy, which is ubiquitous in the biomedical community. Described herein is a new defocus-based reconstruction method that utilizes a small number of efficiently sampled micrographs to optimally invert the partially coherent phase optical transfer function under assumptions of weak absorption and slowly varying phase. Simulation results are provided that compare the performance of this method with similar algorithms and demonstrate compatibility with large phase objects. The accuracy of the method is validated experimentally using a microlens array as a test phase object. Lastly, time-lapse images of live adherent cells are obtained with an off-the-shelf microscope, thus demonstrating the new method's potential for extending QPI capability widely in the biomedical community.
Neděla, Vilém; Tihlaříková, Eva; Hřib, Jiří
2015-01-01
The use of non-standard low-temperature conditions in environmental scanning electron microscopy may be promising for observing coniferous tissues in their native state. This study aims to analyse and evaluate a method based on the principle of low-temperature sample stabilization. We demonstrate that the upper mucous layer is sublimed and the microstructure of the sample surface can be observed at higher resolution under lower gas pressure conditions, thanks to the low-temperature method. The influence of the low-temperature method on sample stability was also studied. The results indicate that high-moisture conditions are not suitable for this method and often cause samples to collapse. The potential improvement in stability against beam damage has been demonstrated by long-term observation at different operating parameters. We finally show the high applicability of the low-temperature method to different types of conifers and to Oxalis acetosella. © 2014 Wiley Periodicals, Inc.
Study on Privacy Protection Algorithm Based on K-Anonymity
NASA Astrophysics Data System (ADS)
FeiFei, Zhao; LiFeng, Dong; Kun, Wang; Yang, Li
Based on a study of the K-Anonymity algorithm for privacy protection, this paper proposes a "Degree Priority" method of visiting lattice nodes on the generalization tree to improve the performance of the K-Anonymity algorithm. This paper also proposes a "Two Times K-Anonymity" method to reduce the information loss in the K-Anonymity process. Finally, experimental results demonstrate the effectiveness of these methods.
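As a rough illustration of the generalization-lattice idea underlying K-Anonymity (not the paper's "Degree Priority" or "Two Times" algorithms), the sketch below climbs hypothetical generalization levels for a single quasi-identifier until every equivalence class contains at least k records:

```python
from collections import Counter

def generalize_age(age, level):
    """Hypothetical 3-level generalization hierarchy for an age attribute."""
    if level == 0:
        return str(age)            # exact value
    if level == 1:
        lo = (age // 10) * 10
        return f"{lo}-{lo + 9}"    # 10-year band
    return "*"                     # fully suppressed

def is_k_anonymous(values, k):
    """True if every equivalence class (identical value) has >= k rows."""
    return all(count >= k for count in Counter(values).values())

def anonymize(ages, k):
    """Climb the lattice bottom-up; return the first level that satisfies k."""
    for level in range(3):
        table = tuple(generalize_age(a, level) for a in ages)
        if is_k_anonymous(table, k):
            return level, table
    return 2, tuple("*" for _ in ages)
```

Each step up the lattice trades information loss for anonymity, which is why node-visiting order (the focus of the paper's "Degree Priority" method) matters for performance.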
Rotational control of computer generated holograms.
Preece, Daryl; Rubinsztein-Dunlop, Halina
2017-11-15
We develop a basis for three-dimensional rotation of arbitrary light fields created by computer generated holograms. By adding an extra phase function into the kinoform, any light field or holographic image can be tilted in the focal plane with minimized distortion. We present two different approaches to rotate an arbitrary hologram: the Scheimpflug method and a novel coordinate transformation method. Experimental results are presented to demonstrate the validity of both proposed methods.
NASA Astrophysics Data System (ADS)
Lao, Zhiqiang; Zheng, Xin
2011-03-01
This paper proposes a multiscale method to quantify tissue spiculation and distortion in mammography CAD systems, aiming to improve sensitivity in detecting architectural distortion and spiculated masses. The approach addresses the difficulty of predetermining the neighborhood size for feature extraction when characterizing lesions demonstrating spiculated mass/architectural distortion, which may appear at different sizes. The quantification is based on recognition of tissue spiculation and distortion patterns using a multiscale first-order phase portrait model in a texture orientation field generated by a Gabor filter bank. A feature map is generated from the multiscale quantification for each mammogram, and two features are then extracted from the feature map. These two features are combined with other mass features to provide enhanced discriminative ability in detecting lesions demonstrating spiculated mass and architectural distortion. The efficiency and efficacy of the proposed method are demonstrated with results obtained by applying the method to over 500 cancer cases and over 1000 normal cases.
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominately as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.
Experimental demonstration of tri-aperture Differential Synthetic Aperture Ladar
NASA Astrophysics Data System (ADS)
Zhao, Zhilong; Huang, Jianyu; Wu, Shudong; Wang, Kunpeng; Bai, Tao; Dai, Ze; Kong, Xinyi; Wu, Jin
2017-04-01
A tri-aperture Differential Synthetic Aperture Ladar (DSAL) is demonstrated in the laboratory, configured with one common aperture to transmit the illuminating laser and two along-track receiving apertures to collect the back-scattered laser signal for optical heterodyne detection. The image formation theory for this tri-aperture DSAL shows that there are two possible methods of reconstructing the azimuth Phase History Data (PHD) for aperture synthesis following the standard DSAL principle, each method resulting in a different matched filter and azimuth image resolution. The experimental setup adopts a frequency-chirped laser of about 40 mW in the 1550 nm wavelength range as the illuminating source, and an optical isolator composed of a polarizing beam-splitter and a quarter-wave plate to virtually align the three apertures in the along-track direction. Various DSAL images up to a target distance of 12.9 m are demonstrated using both PHD reconstruction methods.
Scenario for Hollow Cathode End-Of-Life
NASA Technical Reports Server (NTRS)
Sarver-Verhey, Timothy R.
2000-01-01
Recent successful hollow cathode life tests have demonstrated that lifetimes can meet the requirements of several space applications. However, there are no methods for assessing cathode lifetime short of demonstrating the requirement. Previous attempts to estimate or predict cathode lifetime were based on relatively simple chemical depletion models derived from the dispenser cathode community. To address this lack of predictive capability, a scenario for hollow cathode lifetime under steady-state operating conditions is proposed. This scenario has been derived primarily from the operating behavior and post-test condition of a hollow cathode that was operated for 28,000 hours. In this scenario, the insert chemistry evolves through three relatively distinct phases over the course of the cathode lifetime. These phases are believed to correspond to demonstrable changes in cathode operation. The implications for cathode lifetime limits resulting from this scenario are examined, including methods to assess cathode lifetime without operating to End-of-Life and methods to extend cathode lifetime.
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
Garelnabi, Mahdi; Litvinov, Dmitry; Parthasarathy, Sampath
2010-01-01
Background: Azelaic acid (AzA) is the best-known dicarboxylic acid to have pharmaceutical benefits and clinical applications, and also to be associated with the pathophysiology of some diseases. Materials and Methods: We extracted and methyl-esterified AzA and determined its concentration in human plasma obtained from healthy individuals, and also in mice fed an AzA-containing diet for three months. Results: AzA was detected by gas chromatography (GC) and confirmed by liquid chromatography-mass spectrometry (LC-MS) and gas chromatography-mass spectrometry (GC-MS). Our results show that AzA can be determined efficiently in selected biological samples by the GC method, with a 1 nM limit of detection (LoD); the limit of quantification (LoQ) was established at 50 nM. Analytical sensitivity, assayed in hexane, was 0.050 nM. The method demonstrated 8-10% CV batch repeatability across the sample types and 13-18.9% CV for within-lab precision. The method showed that AzA can be efficiently recovered from various sample preparations, including liver tissue homogenate (95%) and human plasma (97%). Conclusions: Because of its simplicity and low limit of quantification, the present method provides a useful tool for determining AzA in various biological sample preparations. PMID:22558586
Increasing the computational efficiency of digital cross correlation by a vectorization method
NASA Astrophysics Data System (ADS)
Chang, Ching-Yuan; Ma, Chien-Ching
2017-08-01
This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in speedups of 6.387 and 36.044 times compared with looped implementations. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. Numerical simulation is used to verify the speedup of the proposed vectorization method, and experiments measure the quantitative transient displacement response under dynamic impact loading. The experiment used a high-speed camera and a fiber optic system to measure the transient displacement of a cantilever beam under impact from a steel ball. Measurement data obtained from the two methods are in excellent agreement in both the time and frequency domains, with discrepancies of only 0.68%. Numerical and experimental results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
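The paper's implementation is in MATLAB; a comparable sketch in Python/NumPy (function names are illustrative) shows the same loop-to-matrix-operation rewrite for one-dimensional cross correlation, where the nested loop collapses into a single windowed matrix-vector product:

```python
import numpy as np

def xcorr_loop(f, g):
    """Naive looped cross correlation over all valid lags."""
    n = len(f) - len(g) + 1
    out = np.empty(n)
    for k in range(n):
        s = 0.0
        for j in range(len(g)):
            s += f[k + j] * g[j]
        out[k] = s
    return out

def xcorr_vectorized(f, g):
    """Vectorized equivalent: stack all windows of f, then one matrix-vector product."""
    windows = np.lib.stride_tricks.sliding_window_view(f, len(g))
    return windows @ g
```

Both functions return identical results; the vectorized form delegates the inner loops to optimized BLAS routines, which is the source of the speedups the abstract reports for the MATLAB case.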
NASA Astrophysics Data System (ADS)
Bullock, J. G.; Ross, D. A.
The fibre optic Doppler anemometer (FODA) has been used to develop an accurate quantitative method of routinely assessing bull fertility. This method is of importance to the artificial insemination industry because the present qualitative estimation, performed by viewing semen using a microscope, can only set broad limits of quality. Laser light from the FODA was directed into diluted semen samples and the back scattered light was measured. A digital correlator was used to calculate the signal correlation of the back scattered light. The resultant data curves were interpreted in terms of the collective motility and swimming speed of the spermatozoa using a microcomputer. These two parameters are accepted as being indicative of fertility. The accuracy of this method is demonstrated by examination of results obtained in an experiment where enzymes, thought to alter fertility, were added to semen. The effect of the enzymes on the swimming speed and motility was clearly demonstrated.
Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops
NASA Technical Reports Server (NTRS)
Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram
2017-01-01
The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate are a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis, with a connection between estimated discretization error and (resolved or under-resolved) flow features.
Zhang, Y N
2017-01-01
Parkinson's disease (PD) is primarily diagnosed by clinical examinations, such as walking tests, handwriting tests, and MRI. In this paper, we propose a machine learning based PD telediagnosis method for smartphones. Classification of PD from speech recordings is a challenging task because the classification accuracy is still below doctor level. Here we demonstrate automatic classification of PD using time-frequency features, stacked autoencoders (SAE), and a K-nearest-neighbor (KNN) classifier. The KNN classifier produces promising classification results from the useful representations learned by the SAE. Empirical results show that the proposed method achieves better performance in all tested cases across classification tasks, demonstrating that machine learning can classify PD with a level of competence comparable to that of a doctor. A smartphone can therefore potentially provide low-cost PD diagnostic care. This paper also gives an implementation on a browser/server system and reports the running time cost. Both advantages and disadvantages of the proposed telediagnosis system are discussed.
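The classification stage pairs SAE-learned features with a KNN vote. A minimal NumPy sketch of the KNN step alone (toy two-class data standing in for the paper's speech features; the SAE encoder is omitted) looks like:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its k nearest
    training points under Euclidean distance."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training row
        nearest_labels = y_train[np.argsort(dists)[:k]]
        labels, counts = np.unique(nearest_labels, return_counts=True)
        preds.append(labels[np.argmax(counts)])        # majority label wins
    return np.array(preds)
```

In the paper's pipeline, `X_train` and `X_test` would hold SAE representations of time-frequency speech features rather than raw coordinates; the voting logic is unchanged.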
Signal Processing Studies of a Simulated Laser Doppler Velocimetry-Based Acoustic Sensor
1990-10-17
investigated using spectral correlation methods. Results indicate that it may be possible to extend demonstrated LDV-based acoustic sensor sensitivities using higher order processing techniques. (Author)
Olstein, Alan; Griffith, Leena; Feirtag, Joellen; Pearson, Nicole
2013-01-01
The Paradigm Diagnostics Salmonella Indicator Broth (PDX-SIB) is intended as a single-step selective enrichment indicator broth to be used as a simple screening test for the presence of Salmonella spp. in environmental samples. This method permits the end user to avoid multistep sample processing to identify presumptively positive samples, as exemplified by standard U.S. reference methods. PDX-SIB permits the outgrowth of Salmonella while inhibiting the growth of competitive Gram-negative and -positive microflora. Growth of Salmonella-positive cultures results in a visual color change of the medium from purple to yellow when the sample is grown at 37 +/- 1 degree C. Performance of PDX-SIB has been evaluated in five different categories: inclusivity-exclusivity, methods comparison, ruggedness, lot-to-lot variability, and shelf stability. The inclusivity panel included 100 different Salmonella serovars, 98 of which were SIB-positive during the 30 to 48 h incubation period. The exclusivity panel included 33 different non-Salmonella microorganisms, 31 of which were SIB-negative during the incubation period. Methods comparison studies included four different surfaces: S. Newport on plastic, S. Anatum on sealed concrete, S. Abaetetuba on ceramic tile, and S. Typhimurium in the presence of 1 log excess of Citrobacter freundii. Results of the methods comparison studies demonstrated no statistical difference between the SIB method and the U.S. Food and Drug Administration-Bacteriological Analytical Manual reference method, as measured by the Mantel-Haenszel Chi-square test. Ruggedness studies demonstrated little variation in test results when SIB incubation temperatures were varied over a 34-40 degrees C range. Lot-to-lot consistency results suggest no detectable differences in manufactured goods using two reference Salmonella serovars and one non-Salmonella microorganism.
Semi-automated scoring of triple-probe FISH in human sperm using confocal microscopy.
Branch, Francesca; Nguyen, GiaLinh; Porter, Nicholas; Young, Heather A; Martenies, Sheena E; McCray, Nathan; Deloid, Glen; Popratiloff, Anastas; Perry, Melissa J
2017-09-01
Structural and numerical sperm chromosomal aberrations result from abnormal meiosis and are directly linked to infertility. Any live births that arise from aneuploid conceptuses can result in syndromes such as Klinefelter, Turner, XYY, and Edwards. Multi-probe fluorescence in situ hybridization (FISH) is commonly used to study sperm aneuploidy; however, manual FISH scoring in sperm samples is labor-intensive and introduces errors. Automated scoring methods are continuously evolving. One challenging aspect for optimizing automated sperm FISH scoring has been the overlap in excitation and emission of the fluorescent probes used to enumerate the chromosomes of interest. Our objective was to demonstrate the feasibility of combining confocal microscopy and spectral imaging with high-throughput methods for accurately measuring sperm aneuploidy. Our approach used confocal microscopy to analyze numerical chromosomal abnormalities in human sperm using enhanced slide preparation and rigorous semi-automated scoring methods. FISH for chromosomes X, Y, and 18 was conducted to determine sex chromosome disomy in sperm nuclei. Application of online spectral linear unmixing was used for effective separation of four fluorochromes while decreasing data acquisition time. Semi-automated image processing, segmentation, classification, and scoring were performed on 10 slides using custom image processing and analysis software, and results were compared with manual methods. No significant differences in disomy frequencies were seen between the semi-automated and manual methods. Samples treated with pepsin were observed to have reduced background autofluorescence and more uniform distribution of cells. These results demonstrate that semi-automated methods using spectral imaging on a confocal platform are a feasible approach for analyzing numerical chromosomal aberrations in sperm, and are comparable to manual methods. © 2017 International Society for Advancement of Cytometry.
Probabilistic Methods for Structural Reliability and Risk
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2007-01-01
A formal method is described to quantify structural reliability and risk in the presence of a multitude of uncertainties. The method is based on the materials behavior level where primitive variables with their respective scatters are used to describe that behavior. Computational simulation is then used to propagate those uncertainties to the structural scale where reliability and risk are usually specified. A sample case is described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate that the method is mature and that it can be used for future strategic projections and planning to assure better, cheaper, faster products for competitive advantages in world markets. The results also indicate that the methods are suitable for predicting remaining life in aging or deteriorating structures.
Fan, Chunlin; Deng, Jiewei; Yang, Yunyun; Liu, Junshan; Wang, Ying; Zhang, Xiaoqi; Fai, Kuokchiu; Zhang, Qingwen; Ye, Wencai
2013-10-01
An ultra-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC-QTOF-MS) method integrating multi-ingredient determination and fingerprint analysis has been established for quality assessment and control of leaves from Ilex latifolia. The method possesses the advantages of speed, efficiency, and accuracy, and allows multi-ingredient determination and fingerprint analysis in one chromatographic run within 13 min. Multi-ingredient determination was performed based on the extracted ion chromatograms of the exact pseudo-molecular ions (with a 0.01 Da window), and fingerprint analysis was performed based on the base peak chromatograms, obtained by negative-ion electrospray ionization QTOF-MS. Method validation demonstrated that the developed method possesses desirable specificity, linearity, precision, and accuracy. The method was utilized to analyze 22 I. latifolia samples from different origins. The quality assessment was achieved by using both similarity analysis (SA) and principal component analysis (PCA), and the results from SA were consistent with those from PCA. Our experimental results demonstrate that the strategy of integrating multi-ingredient determination and fingerprint analysis using the UPLC-QTOF-MS technique is a useful approach for rapid pharmaceutical analysis, with promising prospects for the differentiation of origin, the determination of authenticity, and the overall quality assessment of herbal medicines. Copyright © 2013 Elsevier B.V. All rights reserved.
Perfluoro(Methylcyclohexane) Tracer Tagging Test and Demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigman, M.E.
On February 14 and 15, 2000, a demonstration of current perfluorocarbon tagging technology and the future potential of these methods was held at Oak Ridge National Laboratory (ORNL). The demonstration consisted of a brief technical discussion followed by a laboratory demonstration. The laboratory demonstrations included the detection of letters, parcels, briefcases and lockers containing perfluorocarbon-tagged papers. Discrimination between tagged and non-tagged items and between three perfluorocarbon tags was demonstrated along with the detection of perfluorocarbon in a background of non-fluorinated volatile organic solvent. All demonstrations involved real-time detection using a direct sampling ion trap mass spectrometer. The technical results obtained at ORNL during and in preparation for the demonstration are presented in Appendix 1 to assist Tracer Detection Technology Corp. in further evaluating their position on development and marketing of perfluorocarbon tracer technology.
Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X
2010-05-01
Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains, by optimizing only two degrees of freedom--the hyperplane's intercept and its squared 2-norm--with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
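The RFE baseline that the paper compares against is available in scikit-learn. The sketch below (synthetic data and parameter choices are illustrative assumptions) shows linear-SVM RFE, which repeatedly drops the feature with the smallest squared weight — the criterion whose inconsistency with margin maximization the paper targets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic data: 20 features, of which 5 are informative
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           n_redundant=0, random_state=0)

# RFE with a linear SVM: each round eliminates the feature with the
# smallest squared component of the SVM weight vector
selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(X, y)

print(np.flatnonzero(selector.support_))  # indices of the 5 surviving features
```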
The Efficacy of Thoracic Ultrasonography in Postoperative Newborn Patients after Cardiac Surgery
Ozturk, Erkut; Tanidir, Ibrahim Cansaran; Yildiz, Okan; Ergul, Yakup; Guzeltas, Alper
2017-01-01
Objective In this study, the efficacy of thoracic ultrasonography during echocardiography was evaluated in newborns. Methods Sixty newborns who had undergone pediatric cardiac surgery were successively evaluated between March 1, 2015, and September 1, 2015. Patients were evaluated for effusion, pulmonary atelectasis, and pneumothorax by ultrasonography, and results were compared with X-ray findings. Results Sixty percent (n=42) of the cases were male, the median age was 14 days (2-30 days), and the median body weight was 3.3 kg (2.8-4.5 kg). The median RACHS-1 score was 4 (2-6). Atelectasis was demonstrated in 66% (n=40) of the cases. Five of them were determined solely by X-ray, 10 of them only by ultrasonography, and 25 of them by both ultrasonography and X-ray. Pneumothorax was determined in 20% (n=12) of the cases. Excluding one case determined by both methods, all of the remaining 11 cases were diagnosed by X-ray. Pleural effusion was diagnosed in 26% (n=16) of the cases. Four of the cases were demonstrated solely by ultrasonography, three of them solely by X-ray, and nine of the cases by both methods. Pericardial effusion was demonstrated in 10% (n=6) of the cases. Except for one of the cases determined by both methods, five of the cases were diagnosed by ultrasonography. There was a moderate correlation when all pathologies were evaluated together (k=0.51). Conclusion Thoracic ultrasonography might be a beneficial non-invasive method for evaluating postoperative respiratory problems in newborns who have undergone congenital cardiac surgery. PMID:28977200
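The "moderate correlation (k=0.51)" above refers to Cohen's kappa between the two diagnostic methods. A minimal sketch of how such inter-method agreement is computed (the per-patient findings below are hypothetical, not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-patient findings (1 = pathology present, 0 = absent)
ultrasound = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
xray       = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0]

# Kappa corrects raw agreement for the agreement expected by chance
kappa = cohen_kappa_score(ultrasound, xray)
print(round(kappa, 2))  # → 0.67
```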
Preparing and Presenting Effective Research Posters
Miller, Jane E
2007-01-01
Objectives Posters are a common way to present results of a statistical analysis, program evaluation, or other project at professional conferences. Often, researchers fail to recognize the unique nature of the format, which is a hybrid of a published paper and an oral presentation. This methods note demonstrates how to design research posters to convey study objectives, methods, findings, and implications effectively to varied professional audiences. Methods A review of existing literature on research communication and poster design is used to identify and demonstrate important considerations for poster content and layout. Guidelines on how to write about statistical methods, results, and statistical significance are illustrated with samples of ineffective writing annotated to point out weaknesses, accompanied by concrete examples and explanations of improved presentation. A comparison of the content and format of papers, speeches, and posters is also provided. Findings Each component of a research poster about a quantitative analysis should be adapted to the audience and format, with complex statistical results translated into simplified charts, tables, and bulleted text to convey findings as part of a clear, focused story line. Conclusions Effective research posters should be designed around two or three key findings with accompanying handouts and narrative description to supply additional technical detail and encourage dialog with poster viewers. PMID:17355594
Tracking colliding cells in vivo microscopy.
Nguyen, Nhat H; Keller, Steven; Norris, Eric; Huynh, Toan T; Clemens, Mark G; Shin, Min C
2011-08-01
Leukocyte motion represents an important component in the innate immune response to infection. Intravital microscopy is a powerful tool as it enables in vivo imaging of leukocyte motion. Under inflammatory conditions, leukocytes may exhibit various motion behaviors, such as flowing, rolling, and adhering. With many leukocytes moving at a wide range of speeds, collisions occur. These collisions result in abrupt changes in the motion and appearance of leukocytes. Manual analysis is tedious, error prone, time consuming, and could introduce technician-related bias. Automatic tracking is also challenging due to the noise inherent in in vivo images and abrupt changes in motion and appearance due to collision. This paper presents a method to automatically track multiple cells undergoing collisions by modeling the appearance and motion for each collision state and testing collision hypotheses of possible transitions between states. The tracking results are demonstrated using in vivo intravital microscopy image sequences. We demonstrate that (1) 71% of colliding cells are correctly tracked; (2) the improvement of the proposed method is enhanced when the duration of collision increases; and (3) given good detection results, the proposed method can correctly track 88% of colliding cells. The method minimizes the tracking failures under collisions and, therefore, allows more robust analysis in the study of leukocyte behaviors responding to inflammatory conditions.
NASA Astrophysics Data System (ADS)
Chu, Jiangtao; Yang, Yue
2018-06-01
Bayesian networks (BN) have many advantages over other methods in ecological modelling and have become an increasingly popular modelling tool. However, BN are flawed in regard to building models based on inadequate existing knowledge. To overcome this limitation, we propose a new method that links BN with structural equation modelling (SEM). In this method, SEM is used to improve the model structure for BN. This method was used to simulate coastal phytoplankton dynamics in Bohai Bay. We demonstrate that this hybrid approach minimizes the need for expert elicitation, generates more reasonable structures for BN models and increases the BN model's accuracy and reliability. These results suggest that the inclusion of SEM for testing and verifying the theoretical structure during the initial construction stage improves the effectiveness of BN models, especially for complex eco-environment systems. The results also demonstrate that in Bohai Bay, while phytoplankton biomass has the greatest influence on phytoplankton dynamics, the impact of nutrients on phytoplankton dynamics is larger than the influence of the physical environment in summer. Furthermore, despite the Redfield ratio indicating that phosphorus should be the primary nutrient limiting factor, our results indicate that silicate plays the most important role in regulating phytoplankton dynamics in Bohai Bay.
Small UAV Automatic Ground Collision Avoidance System Design Considerations and Flight Test Results
NASA Technical Reports Server (NTRS)
Sorokowski, Paul; Skoog, Mark; Burrows, Scott; Thomas, SaraKatie
2015-01-01
The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center Small Unmanned Aerial Vehicle (SUAV) Automatic Ground Collision Avoidance System (Auto GCAS) project demonstrated several important collision avoidance technologies. First, the SUAV Auto GCAS design included capabilities to take advantage of terrain avoidance maneuvers flying turns to either side as well as straight over terrain. Second, the design also included innovative digital elevation model (DEM) scanning methods. The combination of multi-trajectory options and new scanning methods demonstrated the ability to reduce the nuisance potential of the SUAV while maintaining robust terrain avoidance. Third, the Auto GCAS algorithms were hosted on the processor inside a smartphone, providing a lightweight hardware configuration for use in either the ground control station or on board the test aircraft. Finally, compression of DEM data for the entire Earth and successful hosting of that data on the smartphone was demonstrated. The SUAV Auto GCAS project demonstrated that together these methods and technologies have the potential to dramatically reduce the number of controlled flight into terrain mishaps across a wide range of aviation platforms with similar capabilities including UAVs, general aviation aircraft, helicopters, and model aircraft.
Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun
2017-01-01
The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device’s built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction. PMID:28837096
Application of software technology to automatic test data analysis
NASA Technical Reports Server (NTRS)
Stagner, J. R.
1991-01-01
The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.
NASA Astrophysics Data System (ADS)
Mege, D.
1999-03-01
Field data and length/displacement scaling laws applied to the Yakima fold belt on the Columbia Plateau are used to demonstrate a method for estimating surface shortening of wrinkle ridge areas. Application to martian wrinkle ridges is given in another abstract.
Mixed-Methods Resistance Training Increases Power and Strength of Young and Older Men.
ERIC Educational Resources Information Center
Newton, Robert U.; Hakkinen, Keijo; Hakkinen, Arja; McCormick, Matt; Volek, Jeff; Kraemer, William J.
2002-01-01
Examined the effects of a 10-week, mixed-methods resistance training program on young and older men. Although results confirmed some age-related reductions in muscle strength and power, the older men demonstrated similar capacity to the younger men for increases in muscle strength and power via an appropriate, periodized resistance training…
Application of a Method of Estimating DIF for Polytomous Test Items.
ERIC Educational Resources Information Center
Camilli, Gregory; Congdon, Peter
1999-01-01
Demonstrates a method for studying differential item functioning (DIF) that can be used with dichotomous or polytomous items and that is valid for data that follow a partial credit Item Response Theory model. A simulation study shows that positively biased Type I error rates are in accord with results from previous studies. (SLD)
Systematic Convergence in Applying Variational Method to Double-Well Potential
ERIC Educational Resources Information Center
Mei, Wai-Ning
2016-01-01
In this work, we demonstrate the application of the variational method by computing the ground- and first-excited state energies of a double-well potential. We start with the proper choice of the trial wave functions using optimized parameters, and notice that accurate expectation values in excellent agreement with the numerical results can be…
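A minimal numerical sketch of the variational approach (the quartic double well V(x) = x⁴ − 2x² and the symmetric two-Gaussian trial function below are illustrative assumptions; the paper's actual potential and trial functions are not reproduced here). The expectation value of the Hamiltonian is minimized over the trial parameters, giving an upper bound on the ground-state energy; an antisymmetric (difference-of-Gaussians) trial function would analogously bound the first excited state.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]
V = x**4 - 2.0 * x**2          # double well, minima at x = ±1 with V(±1) = -1

def energy(params):
    a, b = params
    a = abs(a) + 1e-6           # keep the Gaussian width parameter positive
    # Even trial function: Gaussians centered on both wells
    psi = np.exp(-a * (x - b) ** 2) + np.exp(-a * (x + b) ** 2)
    dpsi = np.gradient(psi, dx)
    norm = np.sum(psi**2) * dx
    kinetic = 0.5 * np.sum(dpsi**2) * dx       # <T> via integration by parts
    potential = np.sum(V * psi**2) * dx
    return (kinetic + potential) / norm

res = minimize(energy, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.fun)   # variational upper bound on the ground-state energy
```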
USDA-ARS?s Scientific Manuscript database
In a study of comparability of total water contents (%) of conditioned cottons by Karl Fischer Titration (KFT) and Low Temperature Distillation (LTD) reference methods, we demonstrated a match of averaged results based on a large number of replications and weighing the test specimens at the same tim...
Solar-Assisted Oxidation of Toxic Cyanide
NASA Technical Reports Server (NTRS)
Byvik, C. E.; Miles, A.
1985-01-01
In the solar-assisted oxidation technique, oxygen-bearing air is bubbled through a cyanide solution in which platinized powdered TiO2 is suspended, and light from either an artificial source or natural sunlight irradiates the suspension. Experiments demonstrated that this technique is effective in reducing the concentration of cyanide to levels well below those achieved by other methods. Results suggest an effective and inexpensive method for oxidizing cyanide in industrial wastewaters.
Probabilistic composite micromechanics
NASA Technical Reports Server (NTRS)
Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.
1988-01-01
Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.
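The Monte Carlo idea can be sketched with a rule-of-mixtures longitudinal ply modulus; the distributions and property values below are illustrative assumptions for a graphite/epoxy ply, not the report's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Illustrative primitive-variable scatter
Ef = rng.normal(230.0, 10.0, n)   # fiber modulus, GPa
Em = rng.normal(3.5, 0.2, n)      # matrix modulus, GPa
Vf = rng.normal(0.60, 0.02, n)    # fiber volume fraction

# Rule of mixtures: longitudinal ply modulus for each sampled ply
E11 = Vf * Ef + (1.0 - Vf) * Em

# Regressing the response on a primitive variable shows its sensitivity
slope, intercept = np.polyfit(Vf, E11, 1)
print(E11.mean(), E11.std(), slope)
```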
Measuring Student Scholastic Effort: An Economic Theory of Learning Approach.
ERIC Educational Resources Information Center
Wetzel, James N.
Many research studies which deal with the teaching of economics at the college level conclude that different teaching methods do not lead to very different results in terms of student achievement. This paper suggests that one reason student achievement may fail to demonstrate the superiority of one teaching method over another is that achievement…
Laskin, Julia [Richland, WA; Futrell, Jean H [Richland, WA
2008-04-29
The invention relates to a method and apparatus for enhanced sequencing of complex molecules using surface-induced dissociation (SID) in conjunction with mass spectrometric analysis. Results demonstrate formation of a wide distribution of structure-specific fragments having wide sequence coverage useful for sequencing and identifying the complex molecules.
Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image
NASA Astrophysics Data System (ADS)
He, Xingwu; You, Junchen
2018-03-01
Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to examine the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that, compared with traditional image restoration methods, blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory restoration without precise prior knowledge. With an inaccurate small initial PSF, the results show that blind deconvolution improves the overall image quality of ultrasound images, yielding better SNR and image resolution. The time consumption of these methods is also reported; it shows no significant increase on a GPU platform.
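A minimal 1-D Richardson-Lucy sketch illustrates the deconvolution step; blind variants alternate this same multiplicative update between the image estimate and the PSF estimate. The signal, Gaussian PSF (assumed known here for brevity), noise level, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth 1-D "image": a broad plateau and a narrow bright feature
x_true = np.zeros(200)
x_true[60:80] = 1.0
x_true[120:125] = 2.0

# Gaussian PSF; a blind method would also estimate this kernel
k = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
k /= k.sum()

y = np.convolve(x_true, k, mode="same") + 1e-3 * rng.standard_normal(200)
y = np.clip(y, 1e-9, None)          # RL assumes non-negative data

est = np.full_like(y, y.mean())     # flat initial estimate
for _ in range(60):
    blurred = np.convolve(est, k, mode="same") + 1e-12
    # Multiplicative RL update: correlate the ratio image with the PSF
    est *= np.convolve(y / blurred, k[::-1], mode="same")

print(np.abs(y - x_true).mean(), np.abs(est - x_true).mean())
```

The restored estimate recovers the sharp edges that blurring spread out, which is why its error against the ground truth drops below that of the blurred observation.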
An AIS-Based E-mail Classification Method
NASA Astrophysics Data System (ADS)
Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi
This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability by using immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined together, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false rate.
Ghanbari, Behzad
2014-01-01
We aim to study the convergence of the homotopy analysis method (HAM in short) for solving special nonlinear Volterra-Fredholm integrodifferential equations. The sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the results obtained by HAM with the exact solution shows that the method is reliable and capable of providing analytic treatment for solving such equations.
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
Numerical solution of second order ODE directly by two point block backward differentiation formula
NASA Astrophysics Data System (ADS)
Zainuddin, Nooraini; Ibrahim, Zarina Bibi; Othman, Khairil Iskandar; Suleiman, Mohamed; Jamaludin, Noraini
2015-12-01
Direct Two Point Block Backward Differentiation Formula (BBDF2) for solving second order ordinary differential equations (ODEs) will be presented throughout this paper. The method is derived by differentiating the interpolating polynomial using three back values. In BBDF2, two approximate solutions are produced simultaneously at each step of integration. The derived method is implemented using a fixed step size, and the numerical results that follow demonstrate the advantage of the direct method as compared to the reduction method.
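The derivation-by-interpolation principle behind the block method can be illustrated with classic BDF2 on the scalar test problem y' = −y (a simplification: the paper's BBDF2 produces two solution values per step and treats second-order ODEs directly, which this sketch does not reproduce).

```python
import math

def bdf2_solve(h):
    """BDF2 for y' = -y, y(0) = 1, integrated to t = 1.

    Differentiating the quadratic interpolant through the back values
    (t_{n-1}, y_{n-1}), (t_n, y_n) and the new point (t_{n+1}, y_{n+1}) gives
        (3/2) y_{n+1} - 2 y_n + (1/2) y_{n-1} = h f(t_{n+1}, y_{n+1}),
    which for f = -y solves in closed form for y_{n+1}.
    """
    n = round(1.0 / h)
    ys = [1.0, math.exp(-h)]        # exact value used for the second start point
    for _ in range(n - 1):
        ys.append((4.0 * ys[-1] - ys[-2]) / (3.0 + 2.0 * h))
    return ys[n]

exact = math.exp(-1.0)
e1 = abs(bdf2_solve(0.01) - exact)
e2 = abs(bdf2_solve(0.005) - exact)
print(e1 / e2)   # error ratio near 4 confirms second-order convergence
```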
Three optical methods for remotely measuring aerosol size distributions.
NASA Technical Reports Server (NTRS)
Reagan, J. A.; Herman, B. M.
1971-01-01
Three optical probing methods for remotely measuring atmospheric aerosol size distributions are discussed and contrasted. The particular detection methods which are considered make use of monostatic lidar (laser radar), bistatic lidar, and solar radiometer sensing techniques. The theory of each of these measurement techniques is discussed briefly, and the necessary constraints which must be applied to obtain aerosol size distribution information from such measurements are pointed out. Theoretical and/or experimental results are also presented which demonstrate the utility of the three proposed probing methods.
Infrared target recognition based on improved joint local ternary pattern
NASA Astrophysics Data System (ADS)
Sun, Junding; Wu, Xiaosheng
2016-05-01
This paper presents a simple, efficient, yet robust approach, named joint orthogonal combination of local ternary pattern, for automatic forward-looking infrared target recognition. By fusing a variety of scales, it describes macroscopic and microscopic textures better than traditional LBP-based methods. In addition, it can effectively reduce the feature dimensionality. Further, the rotation invariant and uniform scheme, the robust LTP, and soft concave-convex partition are introduced to enhance its discriminative power. Experimental results demonstrate that the proposed method achieves competitive results compared with the state-of-the-art methods.
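The local ternary pattern core can be sketched in a few lines; the threshold and toy image below are illustrative, and the paper's joint orthogonal combination, multi-scale fusion, and soft concave-convex partition are not reproduced.

```python
import numpy as np

def ltp(img, t=5):
    """Basic 8-neighbor LTP: the ternary code (-1/0/+1 per neighbor) is
    split into the conventional upper and lower binary pattern maps."""
    img = img.astype(np.int32)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        diff = nb - center
        upper |= (diff > t).astype(np.int32) << bit    # neighbor clearly brighter
        lower |= (diff < -t).astype(np.int32) << bit   # neighbor clearly darker
    return upper, lower

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 100                     # single bright pixel on a dark background
u, l = ltp(img, t=5)
print(u, l, sep="\n")
```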
NASA Technical Reports Server (NTRS)
Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.
1990-01-01
An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.
Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B
2018-06-01
To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
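Pharmacokinetic fitting against a VIF can be sketched with the standard Tofts model (an assumption for illustration; the paper's specific model and corrected VIFs are not reproduced, and the bolus shape and parameter values below are hypothetical).

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 5.0, 101)       # minutes
dt = t[1] - t[0]

# Illustrative vascular input function: bi-exponential bolus shape
cp = 5.0 * (np.exp(-0.3 * t) - np.exp(-3.0 * t))

def tofts(t, ktrans, kep):
    """Tofts model: Ct(t) = Ktrans * (Cp convolved with exp(-kep*t))."""
    irf = np.exp(-kep * t)
    return ktrans * np.convolve(cp, irf)[: len(t)] * dt

# Synthetic tissue curve with known parameters plus a little noise
rng = np.random.default_rng(1)
ct = tofts(t, 0.25, 0.5) + 0.002 * rng.standard_normal(t.size)

popt, _ = curve_fit(tofts, t, ct, p0=[0.1, 1.0])
print(popt)   # recovered [Ktrans, kep]
```

Fitting the same tissue curve with a scaled (inflow-corrupted) cp would bias the recovered Ktrans, which is exactly the error the VIF correction addresses.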
An improved parallel fuzzy connected image segmentation method based on CUDA.
Wang, Liansheng; Li, Dong; Huang, Shaohui
2016-05-12
Fuzzy connectedness (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very expensive. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed that adds a correction step on the edge points, which greatly enhances the calculation accuracy. The improved method proceeds iteratively. In the first iteration, the affinity computation strategy is changed and a look-up table is employed for memory reduction. In the second iteration, the voxels miscalculated because of asynchronism are updated again. Three different CT sequences of hepatic vasculature with different sizes were used in the experiments, with three different seeds. An NVIDIA Tesla C2075 was used to evaluate the improved method over these three data sets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that the improved algorithm corrects the edge-point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and fewer errors compared to the original CUDA-kFOE, as demonstrated in the experimental results. In the future, we will focus on automatic acquisition methods and automatic processing.
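The core fuzzy connectedness computation (without the CUDA parallelization or the edge-correction step) can be sketched as a max-min variant of Dijkstra's algorithm; the Gaussian affinity function and toy two-region image below are illustrative assumptions.

```python
import heapq
import numpy as np

def affinity(a, b, sigma=10.0):
    """Fuzzy affinity between adjacent pixels: high when intensities match."""
    return float(np.exp(-((float(a) - float(b)) ** 2) / (2.0 * sigma**2)))

def fuzzy_connectedness(img, seed):
    """Connectedness map: the strength of the best path from the seed to
    each pixel, where a path's strength is the minimum affinity along it."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]               # max-heap via negated strengths
    while heap:
        neg, (y, x) = heapq.heappop(heap)
        s = -neg
        if s < conn[y, x]:              # stale queue entry, skip
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                cand = min(s, affinity(img[y, x], img[ny, nx]))
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn

img = np.zeros((8, 8), dtype=float)
img[:, :4] = 100.0                      # two homogeneous regions
conn = fuzzy_connectedness(img, (4, 1))
print(conn[4, 2], conn[4, 6])           # high inside the seed region, low outside
```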
Demonstrating Hemostasis with a Student-Designed Prothrombin Time Test
ERIC Educational Resources Information Center
Fardy, Richard Wiley
1978-01-01
Describes a blood coagulation test developed by two high school biology students. Although the test lacks some precision, results indicate that the technique is comparable to standard methods used in laboratories. (MA)
Analysis and optimization of the active rigidity joint
NASA Astrophysics Data System (ADS)
Manzo, Justin; Garcia, Ephrahim
2009-12-01
The active rigidity joint is a composite mechanism using shape memory alloy and shape memory polymer to create a passively rigid joint with thermally activated deflection. A new model for the active rigidity joint relaxes constraints of earlier methods and allows for more accurate deflection predictions compared to finite element results. Using an iterative process to determine the strain distribution and deflection, the method demonstrates accurate results for both surface bonded and embedded actuators with and without external loading. Deflection capabilities are explored through simulated annealing heuristic optimization using a variety of cost functions to explore actuator performance. A family of responses presents actuator characteristics in terms of load bearing and deflection capabilities given material and thermal constraints. Optimization greatly expands the available workspace of the active rigidity joint from the initial configuration, demonstrating specific work capabilities comparable to those of muscle tissue.
Stege, Patricia W; Sombra, Lorena L; Messina, Germán; Martinez, Luis D; Silva, María F
2010-07-01
The finding of melatonin, often called the "hormone of darkness," in plants opens an interesting perspective associated with the plethora of health benefits related to moderate consumption of red wine. In this study, a new method for the determination of melatonin in complex food matrices by CEC, with immobilized carboxylic multi-walled carbon nanotubes as the stationary phase, is demonstrated. The results indicated high electrochromatographic resolution, good capillary efficiencies, and improved sensitivity with respect to those obtained with conventional capillaries. In addition, the method demonstrated highly reproducible results between runs, days, and columns. The LOD for melatonin was 0.01 ng/mL. The method was successfully applied to the determination of melatonin in red and white wine, grape skin, and plant extracts of Salvia officinalis L.
NASA Astrophysics Data System (ADS)
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full-wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. In particular, we demonstrate the need for higher order geometry and field approximation in modeling NDE measurements. We also illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
Bell, Melanie L; Horton, Nicholas J; Dhillon, Haryana M; Bray, Victoria J; Vardy, Janette
2018-05-26
Patient reported outcomes (PROs) are important in oncology research; however, missing data can pose a threat to the validity of results. Psycho-oncology researchers should be aware of the statistical options for handling missing data robustly. One rarely used set of methods, which includes extensions for handling missing data, is generalized estimating equations (GEEs). Our objective was to demonstrate the use of GEEs to analyze PROs with missing data in randomized trials with assessments at fixed time points. We introduce GEEs and show, with a worked example, how to use GEEs that account for missing data: inverse probability weighted GEEs and multiple imputation with GEE. We use data from an RCT evaluating a web-based brain-training program for cancer survivors reporting cognitive symptoms after chemotherapy treatment. The primary outcome for this demonstration is the binary outcome of cognitive impairment. Several methods are used, and results are compared. We demonstrate that estimates can vary depending on the choice of analytical approach, with odds ratios for no cognitive impairment ranging from 2.04 to 5.74. While most of these estimates were statistically significant (P < 0.05), a few were not. Researchers using PROs should use statistical methods that handle missing data in a way that results in unbiased estimates. GEE extensions are analytic options for handling dropouts in longitudinal RCTs, particularly if the outcome is not continuous. Copyright © 2018 John Wiley & Sons, Ltd.
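As a rough illustration of the inverse probability weighting idea used with GEEs, the sketch below fits a logistic dropout model by iteratively reweighted least squares and weights completers by the inverse of their estimated observation probability. All data, covariates, and model details are invented for illustration; this is not the trial's analysis.

```python
import numpy as np

# Hypothetical sketch of inverse-probability-of-observation weights for
# dropout: model P(observed | baseline covariate), then weight each
# completer by 1 / P(observed). Data are simulated.
rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                                # baseline covariate
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))
observed = (rng.random(n) < p_true).astype(float)

# Fit logistic regression by iteratively reweighted least squares (IRLS).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (observed - p))

p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
ipw = 1.0 / p_hat[observed == 1.0]                    # weights for completers
```

In a weighted GEE, these weights would then multiply each completer's contribution to the estimating equations, so that observed subjects stand in for similar subjects who dropped out.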
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehoff, Ryan R.; List, III, Frederick Alyious; Carver, Keith
ORNL Manufacturing Demonstration Facility worked with ECM Technologies LLC to investigate the use of precision electro-chemical machining technology to polish the surface of parts created by Arcam electron beam melting. The goals for phase one of this project have been met. The project goal was to determine whether electro-chemical machining is a viable method to improve the surface finish of Inconel 718 parts fabricated using the Arcam EBM method. The project partner (ECM) demonstrated viability for parts of both simple and complex geometry. During the course of the project, detailed process knowledge was generated. This project has resulted in the expansion of United States operations for ECM Technologies.
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
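The convolution SOR algorithm above works on waveforms with a frequency-dependent relaxation parameter; as a point of reference, the classical pointwise SOR iteration it generalizes can be sketched as follows (a minimal illustration on a small linear system, not the paper's algorithm):

```python
import numpy as np

def sor(A, b, omega=1.2, tol=1e-10, max_iter=500):
    """Classical successive over-relaxation for Ax = b.

    Illustrative helper only: the paper's convolution SOR replaces the
    scalar omega with a frequency-dependent convolution kernel.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel sweep using already-updated entries x[:i]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sor(A, b)
```

Choosing omega optimally is what accelerates convergence; the paper's contribution is a theorem for the optimal choice in the convolution setting.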
Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method
NASA Astrophysics Data System (ADS)
Asavaskulkiet, Krissada
2018-04-01
In this paper, we propose a new face hallucination technique, face images reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis method. This novel hallucination technique can perform directly from tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from FERET database to test our hallucination approach which is demonstrated by extensive experiments with high-quality hallucinated color faces. The experimental results assure clearly demonstrated that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Hongwei; High Magnetic Field Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031; Kong Xi
The method of quantum annealing (QA) is a promising way for solving many optimization problems in both classical and quantum information theory. The main advantage of this approach, compared with the gate model, is the robustness of the operations against errors originated from both external controls and the environment. In this work, we succeed in demonstrating experimentally an application of the method of QA to a simplified version of the traveling salesman problem by simulating the corresponding Schroedinger evolution with a NMR quantum simulator. The experimental results unambiguously yielded the optimal traveling route, in good agreement with the theoretical prediction.
Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong
2006-07-01
To compare between rapid detection method of enzyme substrate technique and multiple-tube fermentation technique in water coliform bacteria detection. Using inoculated and real water samples to compare the equivalence and false positive rate between two methods. Results demonstrate that enzyme substrate technique shows equivalence with multiple-tube fermentation technique (P = 0.059), false positive rate between the two methods has no statistical difference. It is suggested that enzyme substrate technique can be used as a standard method for water microbiological safety evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bučko, Tomáš, E-mail: bucko@fns.uniba.sk; Department of Computational Materials Physics, Fakultät für Physik and Center for Computational Materials Science, Universität Wien, Sensengasse, Wien 1090; Lebègue, Sébastien, E-mail: sebastien.lebegue@univ-lorraine.fr
2014-07-21
Recently we have demonstrated that the applicability of the Tkatchenko-Scheffler (TS) method for calculating dispersion corrections to density-functional theory can be extended to ionic systems if the Hirshfeld method for estimating effective volumes and charges of atoms in molecules or solids (AIM’s) is replaced by its iterative variant [T. Bučko, S. Lebègue, J. Hafner, and J. Ángyán, J. Chem. Theory Comput. 9, 4293 (2013)]. The standard Hirshfeld method uses neutral atoms as a reference, whereas in the iterative Hirshfeld (HI) scheme the fractionally charged atomic reference states are determined self-consistently. We show that the HI method predicts more realistic AIMmore » charges and that the TS/HI approach leads to polarizabilities and C{sub 6} dispersion coefficients in ionic or partially ionic systems which are, as expected, larger for anions than for cations (in contrast to the conventional TS method). For crystalline materials, the new algorithm predicts polarizabilities per unit cell in better agreement with the values derived from the Clausius-Mosotti equation. The applicability of the TS/HI method has been tested for a wide variety of molecular and solid-state systems. It is demonstrated that for systems dominated by covalent interactions and/or dispersion forces the TS/HI method leads to the same results as the conventional TS approach. The difference between the TS/HI and TS approaches increases with increasing ionicity. A detailed comparison is presented for isoelectronic series of octet compounds, layered crystals, complex intermetallic compounds, and hydrides, and for crystals built of molecules or containing molecular anions. It is demonstrated that only the TS/HI method leads to accurate results for systems where both electrostatic and dispersion interactions are important, as illustrated for Li-intercalated graphite and for molecular adsorption on the surfaces in ionic solids and in the cavities of zeolites.« less
NASA Technical Reports Server (NTRS)
Hill, Geoffrey A.; Olson, Erik D.
2004-01-01
Due to the growing problem of noise in today's air transportation system, a need has arisen to incorporate noise considerations in the conceptual design of revolutionary aircraft. Through the use of response surfaces, complex noise models may be converted into polynomial equations for rapid and simplified evaluation. This conversion allows many of the commonly used response surface-based trade space exploration methods to be applied to noise analysis. The methodology is demonstrated using a noise model of a notional 300-passenger Blended-Wing-Body (BWB) transport. Response surfaces are created relating source noise levels of the BWB vehicle to its corresponding FAR-36 certification noise levels, and the resulting trade space is explored. Methods demonstrated include single-point analysis, parametric study, an optimization technique for inverse analysis, sensitivity studies, and probabilistic analysis. Extended applications of response surface-based methods in noise analysis are also discussed.
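A response surface of the kind described can be built by least-squares fitting a low-order polynomial to samples of the expensive model. The sketch below uses a made-up quadratic stand-in function, not any actual noise model:

```python
import numpy as np

# Hypothetical illustration: fit a quadratic response surface
#   y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
# to samples of an expensive model (here a cheap stand-in function).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))          # 50 sampled design points

def expensive_model(x1, x2):                  # stand-in for the noise code
    return 80 + 5 * x1 - 3 * x2 + 2 * x1**2 + x1 * x2

y = expensive_model(X[:, 0], X[:, 1])
design = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                          X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
# The polynomial surrogate can now be evaluated cheaply in trade studies.
```

Once fitted, the surrogate supports the listed trade-space methods (parametric sweeps, optimization, probabilistic analysis) at negligible cost per evaluation.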
Managing for resilience: an information theory-based ...
Ecosystems are complex and multivariate; hence, methods to assess the dynamics of ecosystems should have the capacity to evaluate multiple indicators simultaneously. Most research on identifying leading indicators of regime shifts has focused on univariate methods and simple models, which have limited utility when evaluating real ecosystems, particularly because drivers are often unknown. We discuss some common univariate and multivariate approaches for detecting critical transitions in ecosystems and demonstrate their capabilities via case studies. Synthesis and applications: we illustrate the utility of an information theory-based index for assessing ecosystem dynamics; trends in this index also provide a sentinel of both abrupt and gradual transitions in ecosystems. In response to the need to identify leading indicators of regime shifts in ecosystems, our research compares traditional indicators and Fisher information, an information theory-based method, by examining four case study systems. The results demonstrate the utility of these methods and offer great promise for quantifying and managing for resilience.
Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek
2018-03-01
Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
A new web-based framework development for fuzzy multi-criteria group decision-making.
Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik
2016-01-01
The fuzzy multi-criteria group decision-making (FMCGDM) process is usually used when a group of decision-makers faces imprecise data or linguistic variables in solving a problem. However, this process includes many methods that require time-consuming calculations, which grow with the number of criteria, alternatives, and decision-makers, in order to reach the optimal solution. In this study, a web-based FMCGDM framework that offers decision-makers a fast and reliable response service is proposed. The proposed framework includes commonly used tools for multi-criteria decision-making problems such as the fuzzy Delphi, fuzzy AHP, and fuzzy TOPSIS methods. Integrating these methods makes it possible to exploit each method's strengths and compensate for its weaknesses. Finally, a case study of landfill site selection in Morocco is performed to demonstrate how this framework can facilitate the decision-making process. The results demonstrate that the proposed framework can successfully accomplish the goal of this study.
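For orientation, the crisp TOPSIS procedure underlying the fuzzy variant ranks alternatives by their closeness to an ideal solution. The decision matrix and weights below are invented for illustration and do not come from the Morocco case study:

```python
import numpy as np

# Minimal crisp TOPSIS sketch (the framework uses a fuzzy variant;
# all data here are hypothetical).
M = np.array([[7.0, 9.0, 9.0],    # rows: candidate sites
              [8.0, 7.0, 8.0],    # cols: criteria (all benefit-type here)
              [9.0, 6.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])     # criterion weights, summing to 1

R = M / np.linalg.norm(M, axis=0)          # vector normalization
V = R * w                                  # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)        # rank by descending closeness
ranking = np.argsort(-closeness)
```

In the fuzzy version, the matrix entries and weights become fuzzy numbers (e.g. triangular), and the distances are computed between fuzzy quantities before defuzzification.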
SLDAssay: A software package and web tool for analyzing limiting dilution assays.
Trumble, Ilana M; Allmon, Andrew G; Archin, Nancie M; Rigdon, Joseph; Francis, Owen; Baldoni, Pedro L; Hudgens, Michael G
2017-11-01
Serial limiting dilution (SLD) assays are used in many areas of infectious disease related research. This paper presents SLDAssay, a free and publicly available R software package and web tool for analyzing data from SLD assays. SLDAssay computes the maximum likelihood estimate (MLE) for the concentration of target cells, with corresponding exact and asymptotic confidence intervals. Exact and asymptotic goodness of fit p-values, and a bias-corrected (BC) MLE are also provided. No other publicly available software currently implements the BC MLE or the exact methods. For validation of SLDAssay, results from Myers et al. (1994) are replicated. Simulations demonstrate the BC MLE is less biased than the MLE. Additionally, simulations demonstrate that exact methods tend to give better confidence interval coverage and goodness-of-fit tests with lower type I error than the asymptotic methods. Additional advantages of using exact methods are also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
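The MLE in question comes from the single-hit Poisson model for limiting dilution data: a well is positive with probability 1 - exp(-c * cells), where c is the target-cell concentration. A minimal grid-search sketch, with invented assay counts and not SLDAssay's implementation, is:

```python
import numpy as np

# Hypothetical sketch of the single-hit Poisson model behind serial
# limiting dilution assays (counts below are made up).
cells = np.array([1e6, 2e5, 4e4, 8e3])   # input cells per well per dilution
n_wells = np.array([12, 12, 12, 12])     # replicate wells per dilution
n_pos = np.array([12, 10, 4, 1])         # wells scored positive

def neg_log_lik(log_c):
    c = np.exp(log_c)                    # target units per input cell
    p = 1.0 - np.exp(-c * cells)         # P(well positive), single-hit model
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(n_pos * np.log(p) + (n_wells - n_pos) * np.log(1 - p))

grid = np.linspace(np.log(1e-7), np.log(1e-2), 20001)
mle = np.exp(grid[np.argmin([neg_log_lik(g) for g in grid])])
```

The package goes further than this sketch: exact confidence intervals, goodness-of-fit tests, and the bias-corrected estimator all build on this same binomial likelihood.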
A real-time interferometer technique for compressible flow research
NASA Technical Reports Server (NTRS)
Bachalo, W. D.; Houser, M. J.
1984-01-01
Strengths and shortcomings in the application of interferometric techniques to transonic flow fields are examined and an improved method is elaborated. Such applications have demonstrated the value of interferometry in obtaining data for compressible flow research. With holographic techniques, interferometry may be applied in large-scale facilities without the use of expensive optics or elaborate vibration isolation equipment. Results obtained using holographic interferometry and other methods demonstrate that reliable qualitative and quantitative data can be acquired. Nevertheless, the conventional method can be difficult to set up and apply, and it cannot produce real-time data. A new interferometry technique is investigated that promises to be easier to apply and can provide real-time information. This single-beam technique has the necessary insensitivity to vibration for large-scale wind tunnel operations. Capabilities of the method and preliminary tests on some laboratory-scale flow fields are described.
Quantitative bioanalysis of strontium in human serum by inductively coupled plasma-mass spectrometry
Somarouthu, Srikanth; Ohh, Jayoung; Shaked, Jonathan; Cunico, Robert L; Yakatan, Gerald; Corritori, Suzana; Tami, Joe; Foehr, Erik D
2015-01-01
Aim: A bioanalytical method using inductively-coupled plasma-mass spectrometry to measure endogenous levels of strontium in human serum was developed and validated. Results & methodology: This article details the experimental procedures used for the method development and validation thus demonstrating the application of the inductively-coupled plasma-mass spectrometry method for quantification of strontium in human serum samples. The assay was validated for specificity, linearity, accuracy, precision, recovery and stability. Significant endogenous levels of strontium are present in human serum samples ranging from 19 to 96 ng/ml with a mean of 34.6 ± 15.2 ng/ml (SD). Discussion & conclusion: Calibration procedures and sample pretreatment were simplified for high throughput analysis. The validation demonstrates that the method was sensitive, selective for quantification of strontium (88Sr) and is suitable for routine clinical testing of strontium in human serum samples. PMID:28031925
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first-order sensitivity of the system eigenvalues with respect to the design parameters, in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity of the linear optimization approach to initial conditions is also demonstrated.
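The linearized subproblem solved at each continuation step can be illustrated with an off-the-shelf LP solver. The sensitivities, move limits, and resource bound below are invented numbers, not values from the truss model:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical one-step illustration of a linearized design subproblem:
# maximize predicted damping improvement g @ dx subject to linear move
# limits and a resource bound on the design changes dx.
g = np.array([0.8, 0.5, 0.1])        # eigenvalue sensitivities (made up)
A_ub = np.array([[1.0, 1.0, 1.0]])   # total added member size <= 1.0
b_ub = np.array([1.0])
bounds = [(-0.2, 0.2)] * 3           # per-member move limits

# linprog minimizes, so negate g to maximize the linear objective.
res = linprog(-g, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
dx = res.x
```

In the sequential scheme described above, the design is updated by dx, the eigenvalue sensitivities are recomputed, and the LP is solved again until convergence.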
Effects of Individual Development Accounts (IDAs) on Household Wealth and Saving Taste
ERIC Educational Resources Information Center
Huang, Jin
2010-01-01
This study examines effects of individual development accounts (IDAs) on household wealth of low-income participants. Methods: This study uses longitudinal survey data from the American Dream Demonstration (ADD) involving experimental design (treatment group = 537, control group = 566). Results: Results from quantile regression analysis indicate…
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use is provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
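A prediction interval of the sort described pairs a regression estimate with a t-based half-width accounting for both residual scatter and the leverage of the new point. The sketch below uses made-up straight-line calibration data, not the balance methodology itself:

```python
import numpy as np
from scipy import stats

# Hypothetical sketch: a 95% prediction interval for a new observation
# from a straight-line least-squares fit (data are made up).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.9, 4.2, 5.8, 8.1, 9.9])
n = len(x)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
s2 = np.sum((y - X @ beta) ** 2) / (n - 2)     # residual variance estimate

x0 = 2.5                                       # new check-load point
x0v = np.array([1.0, x0])
lev = x0v @ np.linalg.inv(X.T @ X) @ x0v       # leverage of the new point
half = stats.t.ppf(0.975, n - 2) * np.sqrt(s2 * (1.0 + lev))
pi = (x0v @ beta - half, x0v @ beta + half)    # (lower, upper) bounds
```

A check-load confirmation then amounts to testing whether the measured value falls inside the interval at the chosen confidence level.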
Dynamic Modeling from Flight Data with Unknown Time Skews
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2016-01-01
A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
NASA Astrophysics Data System (ADS)
Gabai, Haniel; Baranes-Zeevi, Maya; Zilberman, Meital; Shaked, Natan T.
2013-04-01
We propose an off-axis interferometric imaging system as a simple and unique modality for continuous, non-contact, and non-invasive wide-field imaging and characterization of drug release from polymeric devices used in biomedicine. In contrast to the current gold-standard methods in this field, usually based on chromatographic and spectroscopic techniques, our method requires no user intervention during the experiment, and only one test tube is prepared. We experimentally demonstrate imaging and characterization of drug release from a soy-based protein matrix, used as a skin equivalent for wound dressing with controlled release of the anesthetic drug bupivacaine. Our preliminary results demonstrate the high potential of our method as a simple and low-cost modality for wide-field imaging and characterization of drug release from drug delivery devices.
Functionalization of carbon nanotubes: Characterization, modeling and composite applications
NASA Astrophysics Data System (ADS)
Wang, Shiren
Carbon nanotubes have demonstrated exceptional mechanical, thermal, and electrical properties, and are regarded as one of the most promising reinforcement materials for the next generation of high-performance structural and multifunctional composites. To date, however, most application attempts have been hindered by several technical roadblocks, such as poor dispersion and weak interfacial bonding. In this dissertation, several innovative functionalization methods were proposed and studied to overcome these technical issues and realize the full potential of nanotubes as reinforcement. These methods included precision sectioning of nanotubes using an ultra-microtome, electron-beam irradiation, and amino and epoxide group grafting. Characterization by atomic force microscopy, transmission electron microscopy, and Raman spectroscopy suggested that aligned carbon nanotubes can be precisely sectioned with controlled length and minimal sidewall damage. This study also designed and demonstrated new covalent functionalization approaches through unique epoxy grafting and one-step amino grafting, which have the potential to be scaled up for composite applications. In addition, the dissertation successfully tailored the structure and properties of thin nanotube films through electron-beam irradiation, achieving significant improvement in both the mechanical and electrical conducting properties of the irradiated nanotube films, or buckypapers. All of these methods demonstrated effectiveness in improving dispersion and interfacial bonding in the epoxy resin, resulting in considerable improvements in composite mechanical properties. Modeling of the functionalization methods provided further understanding and offered reasonable explanations of the SWNT length distribution as well as the carbon nanostructure transformation upon electron-beam irradiation.
Both the experimental and modeling results provide an important foundation for further comprehensive investigation of nanotube functionalization, and hence facilitate realization of the full potential of nanotube-reinforced nanocomposites.
Numerical solution of the electron transport equation
NASA Astrophysics Data System (ADS)
Woods, Mark
The electron transport equation has been solved many times for a variety of reasons. The main difficulty in its numerical solution is that it is a very stiff boundary value problem. The most common numerical methods for solving boundary value problems are symmetric collocation methods and shooting methods. Both types of methods can be applied to the electron transport equation only if the boundary conditions are altered with unrealistic assumptions, because they otherwise require too many points to be practical. Further, they produce oscillating and negative solutions, which are physically meaningless for the problem at hand. For these reasons, all numerical methods for this problem to date are somewhat unusual, because they were designed to try to avoid the problem of extreme stiffness. This dissertation shows that there is no need to introduce spurious boundary conditions or invent other numerical methods for the electron transport equation. Rather, methods for very stiff boundary value problems already exist in the numerical analysis literature. We demonstrate one such method, in which the fast and slow modes of the boundary value problem are essentially decoupled, allowing an upwind finite difference method to be applied to each mode as appropriate. This greatly reduces the number of points needed in the mesh, and we demonstrate how it eliminates the need to define new boundary conditions. The method is verified by showing that, under certain restrictive assumptions, the electron transport equation has an exact solution that can be written as an integral; the solution from the upwind method agrees with the quadrature evaluation of this exact solution, confirming that the upwind method properly solves the electron transport equation. Further, it is demonstrated that the output of the upwind method can be used to compute auroral light emissions.
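The stability advantage of upwind differencing on a stiff mode can be seen in a one-line model problem. This is a generic illustration of upwinding on a fast decaying mode, not the dissertation's electron transport solver:

```python
import numpy as np

# Minimal sketch of upwind (backward) differencing on a stiff decaying
# mode u' = -lam * u, u(0) = 1, with lam large. The implicit backward
# update stays stable and positive even when lam * h is large, where a
# centered scheme would oscillate and go negative.
lam, n = 200.0, 400
h = 1.0 / n
u = np.empty(n + 1)
u[0] = 1.0
for i in range(n):
    # backward difference: (u[i+1] - u[i]) / h = -lam * u[i+1]
    u[i + 1] = u[i] / (1.0 + lam * h)

exact = np.exp(-lam * np.linspace(0.0, 1.0, n + 1))  # for comparison
```

Decoupling fast and slow modes, as described above, lets the upwind direction be chosen per mode, so each mode is differenced in its own stable direction.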
Analysis of biomolecular solvation sites by 3D-RISM theory.
Sindhikara, Daniel J; Hirata, Fumio
2013-06-06
We derive, implement, and apply equilibrium solvation site analysis for biomolecules. Our method uses 3D-RISM calculations to quickly obtain equilibrium solvent distributions without the need for simulation or the limitations of solvent sampling. Our analysis of these distributions extracts the highest-likelihood poses of solvent molecules as well as localized entropies, enthalpies, and solvation free energies. We demonstrate the method on a structure of HIV-1 protease for which excellent structural and thermodynamic data are available for comparison. Our results, obtained within minutes, show systematic agreement with available experimental data and are in good agreement with established simulation-based solvent analysis methods. This method can be used not only for visual analysis of active-site solvation but also for virtual screening methods and experimental refinement.
The WOMBAT Attack Attribution Method: Some Results
NASA Astrophysics Data System (ADS)
Dacier, Marc; Pham, Van-Hau; Thonnard, Olivier
In this paper, we present a new attack attribution method that has been developed within the WOMBAT project. We illustrate the method with some real-world results obtained when applying it to almost two years of attack traces collected by low interaction honeypots. This analytical method aims at identifying large scale attack phenomena composed of IP sources that are linked to the same root cause. All malicious sources involved in a same phenomenon constitute what we call a Misbehaving Cloud (MC). The paper offers an overview of the various steps the method goes through to identify these clouds, providing pointers to external references for more detailed information. Four instances of misbehaving clouds are then described in some more depth to demonstrate the meaningfulness of the concept.
Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.
Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick
2009-08-17
In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce the ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods, including other LMS and constant-statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts but has a slightly longer convergence time. (c) 2009 Optical Society of America
Methods for Synthesizing Findings on Moderation Effects Across Multiple Randomized Trials
Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana
2011-01-01
This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design. PMID:21360061
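One simple route to the extra power described above is fixed-effect inverse-variance pooling of the per-trial moderation (treatment-by-subgroup interaction) estimates. This sketch is only a stand-in: the paper's integrative data analysis and parallel analysis models are considerably richer than this pooling.

```python
# Hedged sketch: pool per-trial interaction estimates by inverse-variance
# weighting; the pooled variance is smaller than any single trial's, which is
# the mechanism behind the gain in power to detect moderation.

def pooled_interaction(estimates, variances):
    weights = [1.0 / v for v in variances]
    est = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    var = 1.0 / sum(weights)  # pooled sampling variance
    return est, var
```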
Methods for synthesizing findings on moderation effects across multiple randomized trials.
Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana
2013-04-01
This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design.
Grow, Laura L; Kodak, Tiffany; Carr, James E
2014-01-01
Previous research has demonstrated that the conditional-only method (starting with a multiple-stimulus array) is more efficient than the simple-conditional method (progressive incorporation of more stimuli into the array) for teaching receptive labeling to children with autism spectrum disorders (Grow, Carr, Kodak, Jostad, & Kisamore,). The current study systematically replicated the earlier study by comparing the 2 approaches using progressive prompting with 2 boys with autism. The results showed that the conditional-only method was a more efficient and reliable teaching procedure than the simple-conditional method. The results further call into question the practice of teaching simple discriminations to facilitate acquisition of conditional discriminations. © Society for the Experimental Analysis of Behavior.
Cheng's method for reconstruction of a functionally sensitive penis.
Cheng, K X; Zhang, R H; Zhou, S; Jiang, K C; Eid, A E; Huang, W Y
1997-01-01
This article introduces a new surgical method for one-stage reconstruction of the penis. It is applied to the reconstruction of the microphallus as well as to traumatic cases with the residual stump of the amputated penis not less than 3 cm long. By transferring the original glans or the residual penile stump to the anterior portion of the newly reconstructed penile body with microsurgical techniques, we have thus rebuilt a penis with more satisfactory results in both appearance and erotic sensation. Seven patients are reported here who were operated on by this method and who have been followed up for 18 months to 10 years. The good results achieved and the method's advantages over other methods are demonstrated and discussed.
Probing the extensive nature of entropy
NASA Astrophysics Data System (ADS)
Salagaram, T.; Chetty, N.
2013-08-01
We have devised a general numerical scheme, applied to a system of independent, distinguishable, non-interacting particles, to demonstrate in a direct manner the extensive nature of statistical entropy. Working within the microcanonical ensemble, our methods enable one to directly monitor the approach to the thermodynamic limit (N → ∞) in a manner that has not been known before. We show that (s_N - s_∞) → N^(-α), where s_N is the entropy per particle for N particles and s_∞ is the entropy per particle in the thermodynamic limit. We demonstrate universal behaviour by considering a number of different systems, each defined by its unique single-particle spectrum. Various thermodynamic quantities as a function of N may be computed using our methods; in this paper, we focus on the entropy, the chemical potential and the temperature. Our results are applicable to systems of finite size, e.g. nano-particle systems. Furthermore, we demonstrate a new phenomenon, referred to as entropic interference, which manifests as a cancellation of terms in the thermodynamic limit and which results in the additive nature of entropy.
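The approach to the thermodynamic limit can be illustrated with a textbook stand-in (not the paper's scheme): N distinguishable two-level particles at fixed energy per particle, where the microcanonical multiplicity is a binomial coefficient and the entropy per particle s_N = ln(Ω)/N approaches s_∞ = ln 2.

```python
import math

# Hedged illustration: s_N = ln(C(N, N/2))/N for N two-level particles with
# half of them excited; the gap s_inf - s_N shrinks as N grows, directly
# exhibiting the extensivity of the microcanonical entropy.

def entropy_per_particle(n):
    m = n // 2                    # fixed energy per particle: half excited
    omega = math.comb(n, m)       # microcanonical multiplicity
    return math.log(omega) / n

s_inf = math.log(2.0)
gaps = [s_inf - entropy_per_particle(n) for n in (10, 100, 1000)]
```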
Tahan, Gabriella Padovani; Santos, Nayara de Kássia Souza; Albuquerque, Ana Carolina; Martins, Isarita
2016-08-01
Parabens are the most widely used preservative and are considered to be relatively safe compounds. However, studies have demonstrated that they may have estrogenic activity, and there is ongoing debate regarding the safety and potential cancer risk of using products containing these compounds. In the present work, liquid chromatography-tandem mass spectrometry was applied to determine methylparaben and propylparaben concentrations in serum, and the results were correlated with lipstick application. Samples were analyzed using liquid-liquid extraction, followed by liquid chromatography-tandem mass spectrometry. The validation results demonstrated the linearity of the method over a range of 1-20 ng/mL, in addition to the method's precision and accuracy. A statistically significant difference was demonstrated between serum parabens in women who used lipstick containing these substances compared with those not using this cosmetic (p = 0.0005 and 0.0016, respectively), and a strong association was observed between serum parabens and lipstick use (Spearman correlation = 0.7202). Copyright © 2016 Elsevier Inc. All rights reserved.
Two- and three-photon ionization of hydrogen and lithium
NASA Technical Reports Server (NTRS)
Chang, T. N.; Poe, R. T.
1977-01-01
We present the detailed result of a calculation on two- and three-photon ionization of hydrogen and lithium based on a recently proposed calculational method. Our calculation has demonstrated that this method is capable of retaining the numerical advantages enjoyed by most of the existing calculational methods and, at the same time, circumventing their limitations. In particular, we have concentrated our discussion on the relative contribution from the resonant and nonresonant intermediate states.
Speckle: tool for diagnosis assistance
NASA Astrophysics Data System (ADS)
Carvalho, O.; Guyot, S.; Roy, L.; Benderitter, M.; Clairac, B.
2006-09-01
In this paper, we present a new approach to the speckle phenomenon. This method is based on fractal Brownian motion theory and allows the extraction of three stochastic parameters to characterize the speckle pattern. For the first time, we present the results of this method applied to the discrimination of healthy vs. pathologic skin. We also demonstrate, in the case of scleroderma, that this method is more accurate than the classical frequency-based approach.
Real-Time Aerodynamic Parameter Estimation without Air Flow Angle Measurements
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2010-01-01
A technique for estimating aerodynamic parameters in real time from flight data without air flow angle measurements is described and demonstrated. The method is applied to simulated F-16 data, and to flight data from a subscale jet transport aircraft. Modeling results obtained with the new approach using flight data without air flow angle measurements were compared to modeling results computed conventionally using flight data that included air flow angle measurements. Comparisons demonstrated that the new technique can provide accurate aerodynamic modeling results without air flow angle measurements, which are often difficult and expensive to obtain. Implications for efficient flight testing and flight safety are discussed.
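As a generic stand-in for real-time parameter estimation from flight data, the sketch below uses scalar recursive least squares; the abstract's actual method (Morelli's frequency-domain formulation, which avoids air flow angle measurements) is not reproduced here, and the regression model is an illustrative assumption.

```python
# Hedged sketch: recursive least squares estimates theta in y = theta * x one
# sample at a time, the generic pattern for updating an aerodynamic parameter
# estimate as each new flight-data sample arrives.

def rls(xs, ys, lam=1.0):
    theta, p = 0.0, 1e6          # initial estimate and (large) covariance
    for x, y in zip(xs, ys):
        k = p * x / (lam + x * p * x)   # gain
        theta += k * (y - theta * x)    # innovation update
        p = (p - k * x * p) / lam       # covariance update (lam = forgetting)
    return theta
```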
Using 3D computer simulations to enhance ophthalmic training.
Glittenberg, C; Binder, S
2006-01-01
To develop more effective methods of demonstrating and teaching complex topics in ophthalmology with the use of computer aided three-dimensional (3D) animation and interactive multimedia technologies. We created 3D animations and interactive computer programmes demonstrating the neuro-ophthalmological nature of the oculomotor system, including the anatomy, physiology and pathophysiology of the extra-ocular eye muscles and the oculomotor cranial nerves, as well as pupillary symptoms of neurological diseases. At the University of Vienna we compared their teaching effectiveness to conventional teaching methods in a comparative study involving 100 medical students, a multiple choice exam and a survey. The comparative study showed that our students achieved significantly better test results (80%) than the control group (63%) (diff. = 17 +/- 5%, p = 0.004). The survey showed a positive reaction to the software and a strong preference to have more subjects and techniques demonstrated in this fashion. Three-dimensional computer animation technology can significantly increase the quality and efficiency of the education and demonstration of complex topics in ophthalmology.
INNOVATIVE TECHNOLOGY VERIFICATION REPORT ...
The EnSys Petro Test System developed by Strategic Diagnostics Inc. (SDI), was demonstrated under the U.S. Environmental Protection Agency Superfund Innovative Technology Evaluation Program in June 2000 at the Navy Base Ventura County site in Port Hueneme, California. The purpose of the demonstration was to collect reliable performance and cost data for the EnSys Petro Test System and six other field measurement devices for total petroleum hydrocarbons (TPH) in soil. In addition to assessing ease of device operation, the key objectives of the demonstration included determining the (1) method detection limit, (2) accuracy and precision, (3) effects of interferents and soil moisture content on TPH measurement, (4) sample throughput, and (5) TPH measurement costs for each device. The demonstration involved analysis of both performance evaluation samples and environmental samples collected in four areas contaminated with gasoline, diesel, or other petroleum products. The performance and cost results for a given field measurement device were compared to those for an off-site laboratory reference method,
INNOVATIVE TECHNOLOGY VERIFICATION REPORT ...
The Synchronous Scanning Luminoscope (Luminoscope) developed by the Oak Ridge National Laboratory in collaboration with Environmental Systems Corporation (ESC) was demonstrated under the U.S. Environmental Protection Agency Superfund Innovative Technology Evaluation Program in June 2000 at the Navy Base Ventura County site in Port Hueneme, California. The purpose of the demonstration was to collect reliable performance and cost data for the Luminoscope and six other field measurement devices for total petroleum hydrocarbons (TPH) in soil. In addition to assessing ease of device operation, the key objectives of the demonstration included determining the (1) method detection limit, (2) accuracy and precision, (3) effects of interferents and soil moisture content on TPH measurement, (4) sample throughput, and (5) TPH measurement costs for each device. The demonstration involved analysis of both performance evaluation samples and environmental samples collected in five areas contaminated with gasoline, diesel, lubricating oil, or other petroleum products. The performance and cost results for a given field measurement device were compared to those for an off-site laboratory reference method,
Park, Seung-Min; Huh, Yun Suk; Szeto, Kylan; Joe, Daniel J; Kameoka, Jun; Coates, Geoffrey W; Edel, Joshua B; Erickson, David; Craighead, Harold G
2010-11-05
Biomolecular transport in nanofluidic confinement offers various means to investigate the behavior of biomolecules in their native aqueous environments, and to develop tools for diverse single-molecule manipulations. Recently, a number of simple nanofluidic fabrication techniques have been demonstrated that utilize electrospun nanofibers as a backbone structure. These techniques are limited by the arbitrary dimension of the resulting nanochannels due to the random nature of electrospinning. Here, a new method for fabricating nanofluidic systems from size-reduced electrospun nanofibers is reported and demonstrated. This method uses the scanned electrospinning technique to generate oriented sacrificial nanofibers and exposes these nanofibers to harsh, but isotropic, etching/heating environments to reduce their cross-sectional dimension. The creation of various nanofluidic systems as small as 20 nm is demonstrated, and practical examples of single biomolecular handling, such as DNA elongation in nanochannels and fluorescence correlation spectroscopic analysis of biomolecules passing through nanochannels, are provided.
Damage Tolerance and Reliability of Turbine Engine Components
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1999-01-01
A formal method is described to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the materials behavior level, where primitive variables with their respective scatters are used to describe the behavior. Computational simulation is then used to propagate those uncertainties to the structural scale where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from these methods demonstrate that the methods are mature and that they can be used for future strategic projections and planning to assure better, cheaper, faster products for competitive advantages in world markets. These results also indicate that the methods are suitable for predicting remaining life in aging or deteriorating structures.
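The generic pattern behind "propagating scatter in primitive variables to the structural scale" can be sketched with a Monte Carlo stress-strength simulation. Everything below (the Gaussian distributions, means, and scatters) is an illustrative assumption; the actual engine-component models in the report are far more detailed.

```python
import random

# Hedged sketch: sample primitive variables with scatter (strength, stress),
# count failures (stress exceeding strength), and report component reliability.

def reliability(n_samples=20000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = rng.gauss(100.0, 10.0)  # primitive variable with scatter
        stress = rng.gauss(70.0, 8.0)      # another scattered primitive
        if stress >= strength:
            failures += 1
    return 1.0 - failures / n_samples
```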
Damage Tolerance and Reliability of Turbine Engine Components
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1998-01-01
A formal method is described to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the materials behavior level, where primitive variables with their respective scatters are used to describe that behavior. Computational simulation is then used to propagate those uncertainties to the structural scale where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from these methods demonstrate that the methods are mature and that they can be used for future strategic projections and planning to assure better, cheaper, faster products for competitive advantages in world markets. These results also indicate that the methods are suitable for predicting remaining life in aging or deteriorating structures.
Effects of empty bins on image upscaling in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2017-07-01
This paper presents a preliminary study of the effect of empty bins on image upscaling in capsule endoscopy. The presented study was conducted based on the results of existing contrast enhancement and interpolation methods. A low-contrast enhancement method based on pixel consecutiveness and a modified bilinear weighting scheme was developed to distinguish between necessary and unnecessary empty bins, in an effort to minimize the number of empty bins in the input image before further processing. Linear interpolation methods were used to upscale input images with stretched histograms. Upscaling error differences and similarity indices between pairs of interpolation methods were quantified using the mean squared error and feature similarity index techniques. Simulation results demonstrated more promising effects with the developed method than with the other contrast enhancement methods considered.
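Why contrast stretching creates empty bins in the first place can be shown in a few lines: remapping a narrow range of gray levels onto a wider output range leaves output levels that no input pixel maps to. This is only the background mechanism; the paper's pixel-consecutiveness test for necessary versus unnecessary empty bins is not reproduced here.

```python
# Hedged illustration: linearly stretch pixel values from [lo, hi] onto
# [0, out_max], then count the output histogram bins left empty.

def stretch(pixels, lo, hi, out_max=255):
    scale = out_max / (hi - lo)
    return [round((p - lo) * scale) for p in pixels]

def empty_bins(pixels, out_max=255):
    used = set(pixels)
    return sum(1 for level in range(out_max + 1) if level not in used)
```

Stretching 50 consecutive input levels over a 256-level range leaves most output bins empty, which is the artifact the interpolation stage then has to contend with.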
Aggarwal, Priya; Gupta, Anubha
2017-12-01
A number of reconstruction methods have been proposed recently for accelerated functional Magnetic Resonance Imaging (fMRI) data collection. However, existing methods suffer from greater artifacts at high acceleration factors. This paper addresses the issue of accelerating fMRI collection via undersampled k-space measurements combined with the proposed method based on l1-l1 norm constraints, wherein we impose the first l1-norm sparsity constraint on the voxel time series (temporal data) in the transformed domain and the second l1-norm sparsity constraint on the successive differences of the same temporal data. Hence, we name the proposed method the Double Temporal Sparsity based Reconstruction (DTSR) method. The robustness of the proposed DTSR method has been thoroughly evaluated both at the subject level and at the group level on real fMRI data. Results are presented at various acceleration factors. Quantitative analysis in terms of Peak Signal-to-Noise Ratio (PSNR) and other metrics, and qualitative analysis in terms of reproducibility of brain Resting State Networks (RSNs), demonstrate that the proposed method is accurate and robust. In addition, the proposed DTSR method preserves brain networks that are important for studying fMRI data. Compared to the existing methods, the DTSR method shows promising potential with an improvement of 10-12 dB in PSNR with acceleration factors up to 3.5 on resting state fMRI data. Simulation results on real data demonstrate that the DTSR method can be used to acquire accelerated fMRI with accurate detection of RSNs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Interfacial gauge methods for incompressible fluid dynamics
Saye, Robert
2016-01-01
Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of “gauge freedom” to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena. PMID:27386567
NASA Astrophysics Data System (ADS)
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step function boundary value problem. This approach enables modelling concentration distribution in the granular particles for arbitrary time dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and Padé approximation at the same accuracy of the results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current densities variations typical for mobile/automotive applications. The proposed approach can thus be considered as one of the key innovative methods enabling real-time capability of the multi particle electrochemical battery models featuring spatial and temporal resolved particle concentration profiles.
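The core superposition idea in the DTC method can be sketched in one dimension: the response of a linear diffusion problem to an arbitrary flux history is built from analytic step responses (Duhamel's principle). The kernel `step_response` below is a toy first-order response, not the paper's spherical-diffusion solution; it only illustrates the discrete temporal convolution of flux increments.

```python
import math

# Hedged sketch: superpose analytic step responses for a piecewise-constant
# flux history; only flux *increments* enter the sum, so the flux need not be
# known a priori, matching the abstract's key property.

def step_response(t, tau=1.0):
    """Toy analytic response to a unit step flux applied at t = 0."""
    return 1.0 - math.exp(-t / tau) if t >= 0 else 0.0

def dtc(flux, dt):
    out = []
    for n in range(len(flux)):
        t_n = n * dt
        c = 0.0
        prev_j = 0.0
        for k, j in enumerate(flux[: n + 1]):
            c += (j - prev_j) * step_response(t_n - k * dt)  # increment at t_k
            prev_j = j
        out.append(c)
    return out
```

For a constant unit flux, the convolution collapses to the single analytic step response, which makes the sketch easy to verify.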
Cluster mass inference via random field theory.
Zhang, Hui; Nichols, Thomas E; Johnson, Timothy D
2009-01-01
Cluster extent and voxel intensity are two widely used statistics in neuroimaging inference. Cluster extent is sensitive to spatially extended signals while voxel intensity is better for intense but focal signals. In order to leverage strength from both statistics, several nonparametric permutation methods have been proposed to combine the two methods. Simulation studies have shown that of the different cluster permutation methods, the cluster mass statistic is generally the best. However, to date, there is no parametric cluster mass inference available. In this paper, we propose a cluster mass inference method based on random field theory (RFT). We develop this method for Gaussian images, evaluate it on Gaussian and Gaussianized t-statistic images and investigate its statistical properties via simulation studies and real data. Simulation results show that the method is valid under the null hypothesis and demonstrate that it can be more powerful than the cluster extent inference method. Further, analyses with a single subject and a group fMRI dataset demonstrate better power than traditional cluster size inference, and good accuracy relative to a gold-standard permutation test.
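The cluster mass statistic itself is simple to state: threshold the statistic map, group suprathreshold voxels into clusters, and score each cluster by its mass. The 1D sketch below shows only that statistic; the paper's random field theory machinery for assigning p-values to the masses is not reproduced.

```python
# Hedged 1D sketch: cluster mass = sum of suprathreshold excess over each
# contiguous run of values above the threshold. (Real fMRI maps are 3D and
# use spatial connectivity; contiguous runs stand in for that here.)

def cluster_masses(stat_map, threshold):
    masses, current = [], 0.0
    in_cluster = False
    for v in stat_map:
        if v > threshold:
            current += v - threshold   # accumulate excess mass
            in_cluster = True
        elif in_cluster:
            masses.append(current)     # close the cluster
            current, in_cluster = 0.0, False
    if in_cluster:
        masses.append(current)
    return masses
```

Note how mass blends the two classic statistics: a spatially extended cluster and an intense focal peak can both achieve a large mass.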
NASA Technical Reports Server (NTRS)
Stiehl, A. L.; Haberman, R. C.; Cowles, J. H.
1988-01-01
An approximate method to compute the maximum deformation and permanent set of a beam subjected to shock wave loading in vacuo and in water was investigated. The method equates the maximum kinetic energy of the beam (and water) to the elastic-plastic work done by a static uniform load applied to a beam. Results for the water case indicate that the plastic deformation is controlled by the kinetic energy of the water. The simplified approach can result in significant savings in computer time or it can expediently be used as a check of results from a more rigorous approach. The accuracy of the method is demonstrated by various examples of beams with simple support and clamped support boundary conditions.
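The energy-balance idea can be made concrete with a toy resistance model: equate the initial kinetic energy to the work absorbed along a static load-deflection curve and solve for the deflection. The bilinear (elastic-perfectly-plastic) curve below is an illustrative assumption, not the report's actual structural model.

```python
# Hedged sketch: solve W(d) = KE for a bilinear resistance with elastic
# stiffness k up to yield load p_yield, then a plastic plateau.

def max_deflection(kinetic_energy, k_elastic, p_yield):
    d_y = p_yield / k_elastic              # deflection at yield
    w_y = 0.5 * k_elastic * d_y ** 2       # elastic strain energy at yield
    if kinetic_energy <= w_y:              # response stays elastic
        return (2.0 * kinetic_energy / k_elastic) ** 0.5
    # beyond yield, extra energy is absorbed at constant load
    return d_y + (kinetic_energy - w_y) / p_yield
```

The permanent set then follows by subtracting the elastic rebound, consistent with the abstract's elastic-plastic work balance.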
Kanamori, Hajime; Rutala, William A; Gergen, Maria F; Sickbert-Bennett, Emily E; Weber, David J
2018-05-07
Susceptibility to germicides for carbapenem/colistin-resistant Enterobacteriaceae is poorly described. We investigated the efficacy of multiple germicides against these emerging antibiotic-resistant pathogens using the disc-based quantitative carrier test method that can produce results more similar to those encountered in healthcare settings than a suspension test. Our study results demonstrated that germicides commonly used in healthcare facilities likely will be effective against carbapenem/colistin-resistant Enterobacteriaceae when used appropriately in healthcare facilities. Copyright © 2018 American Society for Microbiology.
Pathway Distiller - multisource biological pathway consolidation
2012-01-01
Background: One method to understand and evaluate an experiment that produces a large set of genes, such as a gene expression microarray analysis, is to identify overrepresentation or enrichment for biological pathways. Because pathways are able to functionally describe the set of genes, much effort has been made to collect curated biological pathways into publicly accessible databases. When combining disparate databases, highly related or redundant pathways exist, making their consolidation into pathway concepts essential. This will facilitate unbiased, comprehensive yet streamlined analysis of experiments that result in large gene sets. Methods: After gene set enrichment finds representative pathways for large gene sets, pathways are consolidated into representative pathway concepts. Three complementary, but different methods of pathway consolidation are explored. Enrichment Consolidation combines the set of the pathways enriched for the signature gene list through iterative combining of enriched pathways with other pathways with similar signature gene sets; Weighted Consolidation utilizes a Protein-Protein Interaction network based gene-weighting approach that finds clusters of both enriched and non-enriched pathways limited to the experiments' resultant gene list; and finally the de novo Consolidation method uses several measurements of pathway similarity that find static pathway clusters independent of any given experiment. Results: We demonstrate that the three consolidation methods provide unified yet different functional insights into a resultant gene set derived from a genome-wide profiling experiment. Results from the methods are presented, demonstrating their applications in biological studies and comparing with a pathway web-based framework that also combines several pathway databases. Additionally, a web-based consolidation framework that encompasses all three methods discussed in this paper, Pathway Distiller (http://cbbiweb.uthscsa.edu/PathwayDistiller), is established to allow researchers access to the methods and example microarray data described in this manuscript, and the ability to analyze their own gene list by using our unique consolidation methods. Conclusions: By combining several pathway systems, implementing different, but complementary pathway consolidation methods, and providing a user-friendly web-accessible tool, we have enabled users the ability to extract functional explanations of their genome wide experiments. PMID:23134636
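The iterative-combining step of Enrichment Consolidation can be sketched as greedy merging of pathways whose signature gene sets overlap strongly. The Jaccard similarity measure, its threshold, and the merge order below are illustrative assumptions; the paper's actual similarity criteria may differ.

```python
# Hedged sketch: repeatedly merge any pair of pathway concepts whose gene
# sets have Jaccard similarity >= min_jaccard, until no pair qualifies.

def consolidate(pathways, min_jaccard=0.5):
    """pathways: dict of name -> set of signature genes.
    Returns a list of (member_names, merged_gene_set) pathway concepts."""
    concepts = [([name], set(genes)) for name, genes in pathways.items()]
    merged = True
    while merged:
        merged = False
        for i in range(len(concepts)):
            for j in range(i + 1, len(concepts)):
                a, b = concepts[i][1], concepts[j][1]
                if len(a & b) / len(a | b) >= min_jaccard:
                    concepts[i] = (concepts[i][0] + concepts[j][0], a | b)
                    del concepts[j]
                    merged = True
                    break
            if merged:
                break
    return concepts
```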
NASA Astrophysics Data System (ADS)
Sun, Dan; Garmory, Andrew; Page, Gary J.
2017-02-01
For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.
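The core operation of a quadrature-based moment method, recovering node weights and abscissas from moments, can be shown in one dimension: two nodes follow in closed form from the first four velocity moments. This is only a 1D sketch; the paper's 13-moment construction with cross-moments and velocity-component association is substantially more involved.

```python
import math

# Hedged 1D sketch: given raw moments m0..m3, the centered abscissas are the
# roots of y^2 - (c3/var) y - var = 0, and the weights follow from the first
# two moment constraints.

def two_node_quadrature(m0, m1, m2, m3):
    mu = m1 / m0
    var = m2 / m0 - mu ** 2                  # central 2nd moment
    c3 = m3 / m0 - 3 * mu * var - mu ** 3    # central 3rd moment
    half = c3 / (2.0 * var)
    r = math.sqrt(half ** 2 + var)
    y1, y2 = half - r, half + r              # centered abscissas
    w1 = m0 * y2 / (y2 - y1)                 # from m0 and m1 constraints
    w2 = m0 - w1
    return (w1, mu + y1), (w2, mu + y2)
```

Two distinct velocity abscissas are exactly what lets an Eulerian scheme represent two particle streams crossing without artificial collision.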
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images.
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-03-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, speckle area is pre-delineated pixelwise based on a phase-domain processing method and then adjusted by the results of wavelet shrinkage of the original image. A coefficient shrinkage method such as wavelet or contourlet is applied afterwards for further suppressing the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement of image quality.
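The coefficient shrinkage step mentioned in the abstract is conventionally soft thresholding applied to transform-domain coefficients. The sketch below shows only that generic operation; the paper's phase-domain speckle delineation and the choice of wavelet/contourlet transform are not reproduced.

```python
import math

# Hedged sketch: soft thresholding shrinks transform coefficients toward zero
# by t and zeroes the small ones, suppressing noise while keeping large
# (signal-bearing) coefficients.

def soft_threshold(coeffs, t):
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]
```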
NASA Technical Reports Server (NTRS)
Madsen, Niel K.
1992-01-01
Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells, (hexahedral, tetrahedral, etc.) these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.
A new method for calculating ecological flow: Distribution flow method
NASA Astrophysics Data System (ADS)
Tan, Guangming; Yi, Ran; Chang, Jianbo; Shu, Caiwen; Yin, Zhi; Han, Shasha; Feng, Zhiyong; Lyu, Yiwei
2018-04-01
A distribution flow method (DFM), together with an ecological flow index and an evaluation grade standard, is proposed to study the ecological flow of rivers based on broadening kernel density estimation. The proposed DFM and its ecological flow index and evaluation grade standard are applied to the calculation of ecological flow in the middle reaches of the Yangtze River and compared with traditional hydrological methods for calculating ecological flow, flow evaluation methods, and calculated fish ecological flows. Results show that the DFM considers the intra- and inter-annual variations in natural runoff, thereby reducing the influence of extreme flows and uneven flow distribution during the year. This method also satisfies the actual runoff demand of river ecosystems, demonstrates superiority over the traditional hydrological methods, and shows high space-time applicability and application value.
NASA Astrophysics Data System (ADS)
Li, Dongsheng; Zhou, Zhi; Ou, Jinping
2012-06-01
Suspenders, as the main load-bearing components of an arch bridge, typically serve for only a few decades, or even a few years, owing to corrosion and fatigue loading. This paper proposes a method for testing suspender dynamic behavior with optical fiber Bragg grating sensors embedded in glass fiber reinforced polymer (GFRP-OFBGS). Firstly, the layout of the GFRP-OFBGS within the suspender and the associated protection technology are studied, and a self-monitoring smart suspender is developed. Secondly, stretching experiments were carried out on the smart suspender; the experimental results demonstrated that the whole stretching procedure can be reliably monitored. Finally, the self-monitoring smart suspender was successfully applied in the Ebian Bridge to monitor the strain history of suspenders under traffic load, as well as the traffic effects on suspenders of various lengths and on different steel strands of a single suspender. Based on the monitoring data, dynamic evaluation methods for fatigue damage of the arch bridge suspenders and the corresponding calculation results are given. The field monitoring results demonstrated that the self-monitoring smart suspender presented in this paper is capable of monitoring suspender dynamic response and possible fatigue damage.
Translating expert system rules into Ada code with validation and verification
NASA Technical Reports Server (NTRS)
Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam
1991-01-01
The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system and detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code, by converting the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module, are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is discussed. This testing was performed automatically using Monte Carlo techniques based upon a constraint-based description of the required performance for the system.
García-Florentino, Cristina; Maguregui, Maite; Romera-Fernández, Miriam; Queralt, Ignasi; Margui, Eva; Madariaga, Juan Manuel
2018-05-01
Wavelength dispersive X-ray fluorescence (WD-XRF) spectrometry has been widely used for elemental quantification of mortars and cements. In this kind of instrument, samples are usually prepared as pellets or fused beads and the whole volume of sample is measured at once. In this work, the usefulness of a dual energy dispersive X-ray fluorescence (ED-XRF) spectrometer, working at two lateral resolutions (1 mm and 25 μm) for macro- and microanalysis respectively, to develop quantitative methods for the elemental characterization of mortars and concretes is demonstrated. A crucial step before developing any quantitative method with this kind of spectrometer is to verify the homogeneity of the standards at these two lateral resolutions. The new ED-XRF quantitative method also demonstrated the importance of matrix effects on the accuracy of the results, making it necessary to use Certified Reference Materials as standards. The results obtained with the ED-XRF quantitative method were compared with those obtained with two WD-XRF quantitative methods employing two different sample preparation strategies (pellets and fused beads). The selected ED-XRF and both WD-XRF quantitative methods were applied to the analysis of real mortars. The accuracy of the ED-XRF results turned out to be similar to that achieved by WD-XRF, except for the lightest elements (Na and Mg). The results described in this work prove that μ-ED-XRF spectrometers can be used not only for acquiring high-resolution elemental map distributions, but also to perform accurate quantitative studies, avoiding the use of more sophisticated WD-XRF systems or the acid extraction/alkaline fusion required as destructive pretreatment in inductively coupled plasma mass spectrometry based procedures.
Sharif, Behzad; Arsanjani, Reza; Dharmakumar, Rohan; Bairey Merz, C. Noel; Berman, Daniel S.; Li, Debiao
2015-01-01
Purpose To develop and test the feasibility of a new method for non-ECG-gated first-pass perfusion (FPP) cardiac MR capable of imaging multiple short-axis slices at the same systolic cardiac phase. Methods A magnetization-driven pulse sequence was developed for non-ECG-gated FPP imaging without saturation-recovery preparation using continuous slice-interleaved radial sampling. The image reconstruction method, dubbed TRACE, employed self-gating based on reconstruction of a real-time image-based navigator combined with reference-constrained compressed sensing. Data from ischemic animal studies (n=5) were used in a simulation framework to evaluate temporal fidelity. Healthy subjects (n=5) were studied using both the proposed and conventional methods to compare the myocardial contrast-to-noise ratio (CNR). Patients (n=2) underwent adenosine stress studies using the proposed method. Results Temporal fidelity of the developed method was shown to be sufficient at high heart rates. The healthy volunteer studies demonstrated normal perfusion and no artifacts. Compared to the conventional scheme, myocardial CNR for the proposed method was slightly higher (8.6±0.6 vs. 8.0±0.7). Patient studies showed stress-induced perfusion defects consistent with invasive angiography. Conclusions The presented methods and results demonstrate the feasibility of the proposed approach for high-resolution non-ECG-gated FPP imaging and indicate its potential for achieving desirable image quality (high CNR, no dark-rim artifacts) with a 3-slice spatial coverage, all imaged at the same systolic phase. PMID:26052843
NASA Astrophysics Data System (ADS)
Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia
2016-02-01
A novel approach for the resolution and quantitation of a severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated using different spectrophotometry-assisted multivariate calibration methods. The applied methods used different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method: continuous wavelet transform coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The methods did not require any preliminary separation step or chemical pretreatment. The validity of the methods was evaluated with an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods, and no significant difference was observed in either accuracy or precision.
Population clustering based on copy number variations detected from next generation sequencing data.
Duan, Junbo; Zhang, Ji-Gang; Wan, Mingxi; Deng, Hong-Wen; Wang, Yu-Ping
2014-08-01
Copy number variations (CNVs) can be used as significant biomarkers, and next generation sequencing (NGS) provides high resolution detection of these CNVs. However, how to extract features from CNVs and apply them to genomic studies such as population clustering remains a major challenge. In this paper, we propose a novel method for population clustering based on CNVs from NGS. First, CNVs are extracted from each sample to form a feature matrix. Then, this feature matrix is decomposed into a source matrix and a weight matrix with non-negative matrix factorization (NMF). The source matrix consists of common CNVs that are shared by all the samples from the same group, and the weight matrix indicates the corresponding level of CNVs in each sample. Therefore, using NMF of CNVs one can differentiate samples from different ethnic groups, i.e. perform population clustering. To validate the approach, we applied it to the analysis of both simulation data and two real data sets from the 1000 Genomes Project. The results on simulation data demonstrate that the proposed method can recover the true common CNVs with high quality. The results on the first real data set show that the proposed method can cluster two family trios with different ancestries into two ethnic groups, and the results on the second real data set show that the proposed method can be applied to whole-genome data with a large sample size comprising multiple groups. Both results demonstrate the potential of the proposed method for population clustering.
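The factorization-and-cluster step can be sketched with scikit-learn's NMF. The toy CNV feature matrix and the two group patterns below are invented for illustration; they are not 1000 Genomes calls, and assigning each sample to its dominant component is a simplified stand-in for the paper's clustering.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy CNV feature matrix: rows = samples, columns = candidate CNV regions.
# Two groups carry different "common CNV" patterns (hypothetical values).
rng = np.random.default_rng(2)
group_a = np.array([2.0, 0.0, 1.5, 0.0, 0.0, 0.0])
group_b = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 0.5])
X = np.vstack([group_a + rng.uniform(0, 0.2, 6) for _ in range(5)] +
              [group_b + rng.uniform(0, 0.2, 6) for _ in range(5)])

# Factor X ≈ W @ H: rows of H are the group-level "source" CNV patterns,
# rows of W are each sample's weights on those patterns.
model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)
H = model.components_

# Cluster each sample by its dominant source component.
labels = W.argmax(axis=1)
```

With a clear block structure like this, the first five samples and the last five samples land in two distinct clusters, mirroring the paper's separation of groups by shared common CNVs.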
Chemometric assessment of enhanced bioremediation of oil contaminated soils.
Soleimani, Mohsen; Farhoudi, Majid; Christensen, Jan H
2013-06-15
Bioremediation is a promising technique for reclamation of oil polluted soils. In this study, six methods for enhancing bioremediation were tested on oil contaminated soils from three refinery areas in Iran (Isfahan, Arak, and Tehran). The methods included bacterial enrichment, planting, and addition of nitrogen and phosphorous, molasses, hydrogen peroxide, and a surfactant (Tween 80). Total petroleum hydrocarbon (TPH) concentrations and CHEMometric analysis of Selected Ion Chromatograms (SIC) termed CHEMSIC method of petroleum biomarkers including terpanes, regular, diaromatic and triaromatic steranes were used for determining the level and type of hydrocarbon contamination. The same methods were used to study oil weathering of 2 to 6 ring polycyclic aromatic compounds (PACs). Results demonstrated that bacterial enrichment and addition of nutrients were most efficient with 50% to 62% removal of TPH. Furthermore, the CHEMSIC results demonstrated that the bacterial enrichment was more efficient in degradation of n-alkanes and low molecular weight PACs as well as alkylated PACs (e.g. C₃-C₄ naphthalenes, C₂ phenanthrenes and C₂-C₃ dibenzothiophenes), while nutrient addition led to a larger relative removal of isoprenoids (e.g. norpristane, pristane and phytane). It is concluded that the CHEMSIC method is a valuable tool for assessing bioremediation efficiency. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
He, Weizhen; Zhu, Yunhao; Feng, Ting; Wang, Huaideng; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul
2017-03-01
Osteoporosis is a progressive bone disease characterized by a decrease in bone mass and deterioration of bone micro-architecture. In theory, photoacoustic (PA) imaging analysis has the potential to characterize bone effectively. A previous study demonstrated that the photoacoustic spectral analysis (PASA) method, with its quantified parameter slope, can provide an objective assessment of bone microstructure and deterioration. In this study, we compared the PASA method with the traditional quantitative ultrasound (QUS) method for osteoporosis assessment. Numerical simulations of both PA and ultrasound (US) signals were performed on computed tomography (CT) images of trabecular bone with different bone mineral densities (BMDs). Ex vivo experiments were conducted on porcine femur bone models of different BMDs. We compared the quantified parameter slope from PASA and the broadband ultrasound attenuation (BUA) coefficient from QUS across the bone models. Both the simulation and ex vivo experimental results show that bone with low BMD has a higher slope value and a lower BUA value. Our results demonstrate that the PASA method has the same efficacy as QUS in bone assessment; given that PA is a non-ionizing, non-invasive technique, the PASA method holds potential for clinical diagnosis of osteoporosis and other bone diseases.
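A minimal sketch of the spectral-slope computation, under a toy signal model: the PA signal is a sum of Gaussian pulses, the sampling rate and analysis band are assumed values, and the mapping of BMD to microstructural feature size is a loose illustration, not the paper's simulation setup. Only the slope-fitting step reflects the PASA idea named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 50e6                       # assumed 50 MHz sampling rate
t = np.arange(2048) / fs

def pa_signal(feature_scale_us):
    # Sum of absorber responses; wider features -> lower-frequency content.
    sig = np.zeros_like(t)
    for t0 in rng.uniform(5e-6, 35e-6, 60):
        sig += np.exp(-((t - t0) / (feature_scale_us * 1e-6)) ** 2)
    return sig

def pasa_slope(sig, fmin=1e6, fmax=10e6):
    # Fit a line to the power spectrum (in dB) over the analysis band;
    # the fitted slope is the PASA parameter discussed in the abstract.
    spec = np.abs(np.fft.rfft(sig)) ** 2
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    band = (f >= fmin) & (f <= fmax)
    slope, _ = np.polyfit(f[band] / 1e6, 10 * np.log10(spec[band] + 1e-30), 1)
    return slope  # dB per MHz

slope_low_bmd = pasa_slope(pa_signal(0.05))  # thin, sparse features (toy model)
slope_normal = pasa_slope(pa_signal(0.15))   # thicker features (toy model)
```

In this toy model, finer structural features retain more high-frequency energy, so their spectrum falls off more gently and yields a higher (less negative) slope, consistent in direction with the abstract's finding for low BMD.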
ERIC Educational Resources Information Center
Boker, Steven M.; Nesselroade, John R.
2002-01-01
Examined two methods for fitting models of intrinsic dynamics to intraindividual variability data by testing these techniques' behavior in equations through simulation studies. Among the main results is the demonstration that a local linear approximation of derivatives can accurately recover the parameters of a simulated linear oscillator, with…
ERIC Educational Resources Information Center
Koenig, Lane; Fields, Errol L.; Dall, Timothy M.; Ameen, Ansari Z.; Harwood, Henrick J.
This report demonstrates three applications of case-mix methods using regression analysis. The results are used to assess the relative effectiveness of substance abuse treatment providers. The report also examines the ability of providers to improve client employment outcomes, an outcome domain relatively unexamined in the assessment of provider…
A probabilistic approach to composite micromechanics
NASA Technical Reports Server (NTRS)
Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.
1988-01-01
Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.
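The Monte Carlo procedure can be illustrated with the simplest micromechanics relation, the longitudinal rule of mixtures. The input distributions below are assumed values chosen for illustration, not the graphite/epoxy properties from the report.

```python
import numpy as np

# Propagate assumed scatter in constituent properties through the rule of
# mixtures for the longitudinal ply modulus: E11 = Vf*Ef + (1 - Vf)*Em.
rng = np.random.default_rng(4)
n = 100_000
E_f = rng.normal(230e9, 10e9, n)   # fiber modulus, Pa (assumed distribution)
E_m = rng.normal(3.5e9, 0.3e9, n)  # matrix modulus, Pa (assumed distribution)
V_f = rng.normal(0.60, 0.03, n)    # fiber volume fraction (assumed)

E11 = V_f * E_f + (1 - V_f) * E_m  # one rule-of-mixtures evaluation per sample

mean_E11 = E11.mean()
cov_E11 = E11.std() / mean_E11     # coefficient of variation of the output
```

The resulting sample of E11 values gives the expected ply modulus and its uncertainty directly, and regressing E11 on the inputs (as the report does) would show which constituent property dominates the scatter.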
40 CFR 63.5997 - How do I conduct tests and procedures for tire cord production affected sources?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) to confirm the reported HAP content. If the results of an analysis by EPA Method 311 are different... determinations. (2) Unless you demonstrate otherwise, the HAP content analysis must be based on coatings prior to...) Methods to determine the mass percent of each HAP in coatings. (1) To determine the HAP content in the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, T.; Laville, C.; Dyrda, J.
2012-07-01
The sensitivities of the k{sub eff} eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
Health effects of indebtedness: a systematic review
2014-01-01
Background In the aftermath of the global financial crisis, millions of households have been left with debts that they are unable to manage. Indebtedness may impair the wellbeing of those affected by it for years to come. This systematic review focuses on the long-term consequences of indebtedness on health. Methods The method used in the paper is a systematic review. First, bibliographic databases were searched for peer-reviewed articles. Second, the references and citations of the included articles were searched for additional articles. Results The results from our sample of 33 peer-reviewed studies demonstrate serious health effects related to indebtedness. Individuals with unmet loan payments had suicidal ideation and suffered from depression more often than those without such financial problems. Unpaid financial obligations were also related to poorer subjective health and health-related behaviour. Debt counselling and other programmes to mitigate debt-related stress are needed to alleviate the adverse effects of indebtedness on health. Conclusions The results demonstrate that indebtedness has serious effects on health. PMID:24885280
Transmission loss of plates with embedded acoustic black holes.
Feurtado, Philip A; Conlon, Stephen C
2017-09-01
In recent years, acoustic black holes (ABHs) have been developed and demonstrated as an effective approach to designing lightweight, high-loss structures for noise and vibration control. ABHs employ a local thickness change to tailor the speed and amplitude of flexural bending waves and create concentrated regions of high strain energy which can be effectively dissipated through conventional damping treatments. These regions act as energy sinks which allow for effective broadband vibration absorption with minimal use of applied damping material. This, combined with the mass reduction from the thickness tailoring, results in a treated structure with higher loss and less mass than the original. In this work, the transmission loss (TL) of plates with embedded ABHs was investigated using experimental and numerical methods in order to assess the usefulness of ABH systems for TL applications. The results demonstrated that damped ABH plates offer improved performance compared to a uniform plate despite having less mass. The results will be useful for applying ABHs and ABH systems to practical noise and vibration control problems.
Saegusa, J; Kurikami, H; Yasuda, R; Kurihara, K; Arai, S; Kuroki, R; Matsuhashi, S; Ozawa, T; Goto, H; Takano, T; Mitamura, H; Nagano, T; Naganawa, H; Yoshida, Z; Funaki, H; Tokizawa, T; Nakayama, S
2013-03-01
Because of radioactive fallout resulting from the Fukushima Daiichi Nuclear Power Plant (NPP) accident, water discharge from many outdoor swimming pools in Fukushima was suspended out of concern that radiocesium in the pool water would flow into farmlands. The Japan Atomic Energy Agency has reviewed the existing flocculation method for decontaminating pool water and established a practical decontamination method by demonstrating the process at eight pools in Fukushima. In this method, zeolite powder and a flocculant are used for capturing radiocesium present in pool water. The supernatant is discharged if the radiocesium concentration is less than the targeted level. The radioactive residue is collected and stored in a temporary storage space. Radioactivity concentration in water is measured with a NaI(Tl) or Ge detector installed near the pool. The demonstration results showed that pool water in which the radiocesium concentration was more than a few hundred Bq L⁻¹ was readily purified by the method, and the radiocesium concentration was reduced to less than 100 Bq L⁻¹. The ambient dose rates around the temporary storage space were slightly elevated; however, the total increase was up to 30% of the background dose rates when the residue was shielded with sandbags.
Zhang, Yuwei; Cao, Zexing; Zhang, John Zenghui; Xia, Fei
2017-02-27
Construction of coarse-grained (CG) models for large biomolecules used in multiscale simulations demands a rigorous definition of CG sites. Several coarse-graining methods, such as simulated annealing and steepest descent (SASD) based on essential dynamics coarse-graining (ED-CG) or stepwise local iterative optimization (SLIO) based on fluctuation maximization coarse-graining (FM-CG), have been developed for this purpose. However, the practical application of methods such as SASD based on ED-CG is limited because they are computationally too expensive. In this work, we extend the applicability of ED-CG by combining it with the SLIO algorithm. A comprehensive comparison of the optimized results and accuracy of various algorithms based on ED-CG shows that SLIO is the fastest as well as the most accurate algorithm among them. ED-CG combined with SLIO gives converged results as the number of CG sites increases, which demonstrates that it is another efficient method for coarse-graining large biomolecules. The construction of CG sites for the Ras protein using MD fluctuations demonstrates that the CG sites derived from FM-CG accurately reflect the fluctuation properties of the secondary structures in Ras.
Lane, J.W.; Buursink, M.L.; Haeni, F.P.; Versteeg, R.J.
2000-01-01
The suitability of common-offset ground-penetrating radar (GPR) to detect free-phase hydrocarbons in bedrock fractures was evaluated using numerical modeling and physical experiments. The results of one- and two-dimensional numerical modeling at 100 megahertz indicate that GPR reflection amplitudes are relatively insensitive to fracture apertures ranging from 1 to 4 mm. The numerical modeling and physical experiments indicate that differences in the fluids that fill fractures significantly affect the amplitude and the polarity of electromagnetic waves reflected by subhorizontal fractures. Air-filled and hydrocarbon-filled fractures generate low-amplitude reflections that are in-phase with the transmitted pulse. Water-filled fractures create reflections with greater amplitude and opposite polarity than those reflections created by air-filled or hydrocarbon-filled fractures. The results from the numerical modeling and physical experiments demonstrate it is possible to distinguish water-filled fracture reflections from air- or hydrocarbon-filled fracture reflections, nevertheless subsurface heterogeneity, antenna coupling changes, and other sources of noise will likely make it difficult to observe these changes in GPR field data. This indicates that the routine application of common-offset GPR reflection methods for detection of hydrocarbon-filled fractures will be problematic. Ideal cases will require appropriately processed, high-quality GPR data, ground-truth information, and detailed knowledge of subsurface physical properties. Conversely, the sensitivity of GPR methods to changes in subsurface physical properties as demonstrated by the numerical and experimental results suggests the potential of using GPR methods as a monitoring tool. 
GPR methods may be suited for monitoring pumping and tracer tests, changes in site hydrologic conditions, and remediation activities.
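The fluid-dependent amplitude and polarity described above follow from the normal-incidence reflection coefficient between dielectrics. This half-space sketch ignores thin-layer (aperture) effects, and the relative permittivities are typical textbook values, not measurements from the study.

```python
import numpy as np

def reflection_coefficient(eps_rock, eps_fill):
    # Normal-incidence amplitude reflection coefficient between two low-loss
    # dielectrics: R = (sqrt(eps1) - sqrt(eps2)) / (sqrt(eps1) + sqrt(eps2)).
    return ((np.sqrt(eps_rock) - np.sqrt(eps_fill)) /
            (np.sqrt(eps_rock) + np.sqrt(eps_fill)))

eps_rock = 6.0                                     # intact bedrock (assumed)
R_air = reflection_coefficient(eps_rock, 1.0)      # air-filled fracture
R_oil = reflection_coefficient(eps_rock, 2.2)      # hydrocarbon-filled fracture
R_water = reflection_coefficient(eps_rock, 81.0)   # water-filled fracture
```

Because water's permittivity far exceeds the rock's while air's and oil's are below it, the water-filled case gives a larger-magnitude reflection of opposite sign, matching the polarity and amplitude contrast reported in the abstract.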
Neck pain assessment in a virtual environment.
Sarig-Bahat, Hilla; Weiss, Patrice L Tamar; Laufer, Yocheved
2010-02-15
A comparative analysis of conventional and virtual reality (VR)-based assessment of cervical range of motion (CROM) in individuals with chronic neck pain and controls. The objectives were to use a tracker-based VR system to compare the CROM of individuals suffering from chronic neck pain with that of asymptomatic individuals; to compare VR system results with those obtained during conventional assessment; to present the diagnostic value of CROM measures obtained by both assessments; and to demonstrate the effect of a single VR session on CROM. Neck pain is a common musculoskeletal complaint with a reported annual prevalence of 30% to 50%. In the absence of a gold standard for CROM assessment, a variety of assessment devices and methodologies exist. Common to these methodologies, assessment of CROM is carried out by instructing subjects to move their head as far as possible. However, these elicited movements do not necessarily replicate the functional movements which occur spontaneously in response to multiple stimuli. To achieve a more functional approach to cervical motion assessment, we recently developed a VR environment in which electromagnetic tracking is used to monitor cervical motion while participants are involved in a simple yet engaging gaming scenario. CROM measures were collected from 25 symptomatic and 42 asymptomatic individuals using VR and conventional assessments. Analysis of variance was used to determine differences between groups and assessment methods. Logistic regression analysis, using a single predictor, compared the diagnostic ability of both methods. Results obtained by both methods demonstrated significant CROM limitations in the symptomatic group. The VR measures showed greater CROM and sensitivity, while conventional measures showed greater specificity. A single session of exposure to VR resulted in a significant increase in CROM. Neck pain is significantly associated with reduced CROM, as demonstrated by both VR and conventional assessment methods.
The VR method provides assessment of functional CROM and can be used for CROM enhancement. Assessment by VR has greater sensitivity than conventional assessment and can be used for the detection of true symptomatic individuals.
Localizing ECoG electrodes on the cortical anatomy without post-implantation imaging
Gupta, Disha; Hill, N. Jeremy; Adamo, Matthew A.; Ritaccio, Anthony; Schalk, Gerwin
2014-01-01
Introduction Electrocorticographic (ECoG) grids are placed subdurally on the cortex in people undergoing cortical resection to delineate eloquent cortex. ECoG signals have high spatial and temporal resolution and thus can be valuable for neuroscientific research. The value of these data is highest when they can be related to the cortical anatomy. Existing methods that establish this relationship rely either on post-implantation imaging using computed tomography (CT), magnetic resonance imaging (MRI) or X-rays, or on intra-operative photographs. For research purposes, it is desirable to localize ECoG electrodes on the brain anatomy even when post-operative imaging is not available or when intra-operative photographs do not readily identify anatomical landmarks. Methods We developed a method to co-register ECoG electrodes to the underlying cortical anatomy using only a pre-operative MRI, a clinical neuronavigation device (such as BrainLab VectorVision), and fiducial markers. To validate our technique, we compared our results to data collected from six subjects who also had post-grid-implantation imaging available. We compared the electrode coordinates obtained by our fiducial-based method to those obtained using existing methods, which are based on co-registering pre- and post-grid-implantation images. Results Our fiducial-based method agreed with the MRI-CT method to within an average of 8.24 mm (mean; median = 7.10 mm) across 6 subjects in 3 dimensions. It showed an average discrepancy of 2.7 mm when compared to the results of the intra-operative photograph method in a 2D coordinate system. As this method does not require post-operative imaging such as CT, our technique should prove useful for research in intra-operative single-stage surgery scenarios. To demonstrate the use of our method, we applied it during real-time mapping of eloquent cortex in a single-stage surgery. 
The results demonstrated that our method can be applied intra-operatively in the absence of post-operative imaging to acquire ECoG signals that can be valuable for neuroscientific investigations. PMID:25379417
A Clinical Feasibility Study of Atrial and Ventricular Electromechanical Wave Imaging
Provost, Jean; Gambhir, Alok; Vest, John; Garan, Hasan; Konofagou, Elisa E.
2014-01-01
Background Cardiac Resynchronization Therapy (CRT) and atrial ablation currently lack a noninvasive imaging modality for reliable treatment planning and monitoring. Electromechanical Wave Imaging (EWI) is an ultrasound-based method that has previously been shown to be capable of noninvasively and transmurally mapping the activation sequence of the heart in animal studies by estimating and imaging the electromechanical wave, i.e., the transient strains occurring in response to the electrical activation, at both very high temporal and spatial resolution. Objective Demonstrate the feasibility of noninvasive transthoracic EWI for mapping the activation sequence during different cardiac rhythms in humans. Methods EWI was performed in CRT patients with a left bundle-branch block (LBBB), during sinus rhythm, left-ventricular pacing, and right-ventricular pacing and in atrial flutter (AFL) patients before intervention and correlated with results from invasive intracardiac electrical mapping studies during intervention. Additionally, the feasibility of single-heartbeat EWI at 2000 frames/s, is demonstrated in humans for the first time in a subject with both AFL and right bundle-branch-block. Results The electromechanical activation maps demonstrated the capability of EWI to localize the pacing sites and characterize the LBBB activation sequence transmurally in CRT patients. In AFL patients, the propagation patterns obtained with EWI were in agreement with results obtained from invasive intracardiac mapping studies. Conclusion Our findings demonstrate the potential capability of EWI to aid in monitoring and follow-up of patients undergoing CRT pacing therapy and atrial ablation with preliminary validation in vivo. PMID:23454060
A Preliminary Study of the Effectiveness of Different Recitation Teaching Methods
NASA Astrophysics Data System (ADS)
Endorf, Robert J.; Koenig, Kathleen M.; Braun, Gregory A.
2006-02-01
We present preliminary results from a comparative study of student understanding for students who attended recitation classes which used different teaching methods. Student volunteers from our introductory calculus-based physics course attended a special recitation class that was taught using one of four different teaching methods. A total of 272 students were divided into approximately equal groups for each method. Students in each class were taught the same topic, "Changes in energy and momentum," from Tutorials in Introductory Physics. The different teaching methods varied in the amount of student and teacher engagement. Student understanding was evaluated through pretests and posttests given at the recitation class. Our results demonstrate the importance of the instructor's role in teaching recitation classes. The most effective teaching method was for students working in cooperative learning groups with the instructors questioning the groups using Socratic dialogue. These results provide guidance and evidence for the teaching methods which should be emphasized in training future teachers and faculty members.
BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.
Khakabimamaghani, Sahand; Ester, Martin
2016-01-01
The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods for patient stratification, the central task of SM, have been proposed in the literature; however, significant open issues remain. First, it is still unclear whether integrating different datatypes will help in detecting disease subtypes more accurately and, if not, which datatype(s) are most useful for this task. Second, it is not clear how different methods of patient stratification can be compared. Third, as most of the proposed stratification methods are deterministic, there is a need for investigating the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification, and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data.
Safety training for working youth: Methods used versus methods wanted.
Zierold, Kristina M
2016-04-07
Safety training is promoted as a tool to prevent workplace injury; however, little is known about the safety training young workers receive on the job. Furthermore, nothing is known about which methods they think would be most helpful for learning safe work practices. The objective was to compare the safety training methods teens receive on the job with the methods teens think would be best for learning workplace safety, focusing on age differences. A cross-sectional survey was administered to students in two large high schools in spring 2011. Seventy percent of working youth received safety training. The top training methods that youth reported receiving at work were safety videos (42%), safety lectures (25%), and safety posters/signs (22%). In comparison, the top methods youth wanted included videos (54%), hands-on training (47%), and on-the-job demonstrations (34%). The study also found age differences in the training methods youth wanted, with older youth preferring more independent methods and younger teens wanting more involvement. The results indicate that youth want methods of safety training different from what they are getting on the job. The differences in methods wanted by age may aid in developing training programs appropriate for the developmental level of working youth.
2013-01-01
Background: Aconitum is an indispensable entity of traditional medicine therapy in Ayurveda and Traditional Chinese Medicine (TCM), in spite of its known fatal toxicity. The prolonged use of this drug, irrespective of its known lethal effects, is governed by the practice of effective detoxification processes that have been used for decades. However, the processing methods of Ayurveda and TCM are different, and no comparative study has been carried out to evaluate their differences. The objective of the present study was to carry out comparative chemical profiling of the roots of Aconitum heterophyllum Wall, A. carmichaelii Debx., and A. kusnezoffii Reichb. after application of two detoxification methods used in Ayurveda and one method used in TCM. Results: Analysis of the processed samples was carried out by ultra-high performance liquid chromatography combined with quadrupole time-of-flight mass spectrometry (UHPLC-QTOF/MS). The results demonstrate that all three processing methods used in Ayurveda and TCM effectively extracted the diester diterpenoid alkaloids and led to their conversion into monoester diterpenoid alkaloids. The efficiency of the processes in reducing toxic alkaloid contents can be ranked as: processing with water > Shodhana with cow milk > Shodhana with cow urine. The analysis method was validated as per ICH-Q2R1 guidelines, and all parameters were found to comply with the recommendations stated therein. Conclusions: To date, there have been no reports comparing the processing methods used in Ayurveda with those used in TCM for detoxification of aconite roots. Our study demonstrates that the methods used in both traditional systems of medicine efficiently detoxify the aconite roots. Among the three selected procedures, the TCM method of decoction with water is the most efficient.
Through experimental evidence, we confirm the conversion of toxic diester diterpenoid alkaloids to relatively safer monoester diterpenoid alkaloids. Thus, this study demonstrates that comparative study of the traditional experience accumulated in different medical systems is useful for expanding their respective applications. PMID:24156713
A novel method for producing microspheres with semipermeable polymer membranes
NASA Technical Reports Server (NTRS)
Lin, K. C.; Wang, Taylor G.
1992-01-01
A new and systematic approach to producing polymer microspheres has been demonstrated. The membrane of the microsphere is formed by immersing a polyanionic droplet into a collapsing annular sheet made of a polycationic polymer solution. This method minimizes the impact force while the chemical reaction takes place, thereby eliminating the shortcomings of current encapsulation techniques. The results of this study show the feasibility of this method for mass production of microcapsules.
Detection method of flexion relaxation phenomenon based on wavelets for patients with low back pain
NASA Astrophysics Data System (ADS)
Nougarou, François; Massicotte, Daniel; Descarreaux, Martin
2012-12-01
The flexion relaxation phenomenon (FRP) can be defined as a reduction or silence of myoelectric activity of the lumbar erector spinae muscles during full trunk flexion. It is typically absent in patients with chronic low back pain (LBP). Before any broad clinical use of this neuromuscular response can be made, effective, standardized, and accurate methods of identifying FRP limits are needed. However, the phenomenon is clearly more difficult to detect in LBP patients than in healthy subjects. The main goal of this study was to develop an automated method based on the wavelet transform to improve detection of the time-point limits of the FRP in surface electromyography signals of LBP patients. Conventional visual identification and the proposed automated methods for detecting the time-point limits of the relaxation phase were compared on experimental data using criteria of accuracy and repeatability based on physiological properties. The evaluation demonstrates that the wavelet transform (WT) yields better results than methods without wavelet decomposition. Furthermore, methods based on the wavelet packet transform are more effective than algorithms employing the discrete WT. Compared with visual detection, the wavelet packet transform improves accuracy and repeatability in detecting the FRP limits, in addition to an obvious saving of time. These results clearly highlight the value of the proposed technique in identifying the onset and offset of the flexion relaxation response in LBP subjects.
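As a hedged sketch (not the authors' implementation, which applies wavelet packet decomposition to real EMG recordings), the core idea of localizing an abrupt amplitude change with Haar-type wavelet coefficients can be illustrated on a synthetic EMG envelope; the signal shape, noise level, and window scale below are invented for the example:

```python
import numpy as np

# Synthetic EMG envelope: muscle active (amplitude ~1) until the
# relaxation onset, then near-silent (~0.05), plus measurement noise.
rng = np.random.default_rng(1)
n, onset = 1024, 600
emg = np.where(np.arange(n) < onset, 1.0, 0.05)
emg += 0.05 * rng.standard_normal(n)

def haar_change_point(x, scale=64):
    """Return the index where a Haar-style coefficient
    |sum(left window) - sum(right window)| is largest,
    i.e. the most likely location of a step change."""
    c = np.cumsum(np.concatenate(([0.0], x)))   # c[j] = sum of x[:j]
    i = np.arange(scale, len(x) - scale + 1)
    left = c[i] - c[i - scale]                  # sum over x[i-scale:i]
    right = c[i + scale] - c[i]                 # sum over x[i:i+scale]
    return int(i[np.argmax(np.abs(left - right))])

detected = haar_change_point(emg)
print(detected)   # close to the true onset at index 600
```

The coarse-scale Haar coefficient peaks where the local mean drops, which is why wavelet-based detectors tolerate noise better than simple thresholding of the raw signal.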
de la Fuente-Salcido, Norma M.; Barboza-Corona, J. Eleazar; Espino Monzón, A. N.; Pacheco Cano, R. D.; Balagurusamy, N.; Bideshi, Dennis K.; Salcedo-Hernández, Rubén
2012-01-01
Previously we described a rapid fluorogenic method to measure the activity of five bacteriocins produced by Mexican strains of Bacillus thuringiensis against B. cereus 183. Here we standardize this method to efficiently determine the activity of bacteriocins against both Gram-positive and Gram-negative bacteria. The crucial parameter for obtaining reproducible results was the number of cells used in the assay: ~4 × 10⁸ cells/mL for target Gram-positive bacteria and ~7 × 10⁸ cells/mL for Gram-negative bacteria. Comparative analyses of the fluorogenic and traditional well-diffusion assays showed correlation coefficients of 0.88 to 0.99 and 0.83 to 0.99 for Gram-positive and Gram-negative bacteria, respectively. The fluorogenic method demonstrated that the five bacteriocins of B. thuringiensis have bacteriolytic and bacteriostatic activities against all microorganisms tested, including clinically significant bacteria such as Listeria monocytogenes, Proteus vulgaris, and Shigella flexneri, previously reported to be resistant to these antimicrobials as determined using the well-diffusion protocol. These results demonstrate that the fluorogenic assay is a more sensitive, reliable, and rapid method than the well-diffusion method and can easily be adapted to screening protocols for bacteriocin production by other microorganisms. PMID:22919330
Han, Guomin; Wang, Hua; Webb, Michael R; Waterhouse, Andrew L
2015-03-01
Carbonyl compounds are produced during fermentation and chemical oxidation during wine making and aging, and they are important to wine flavor and color stability. Since wine also contains these compounds as α-hydroxysulfonates as a result of their reaction with sulfur dioxide, an alkaline pre-treatment requiring oxygen exclusion has been used to release these bound carbonyls for analysis. By modifying the method to hydrolyze the hydroxysulfonates with heating and acid in the presence of 2,4-dinitrophenylhydrazine (DNPH), the carbonyl compounds are simultaneously and quickly released and derivatized, resulting in a simpler and more rapid method. In addition, the method avoids air exclusion complications during hydrolysis by the addition of sulfur dioxide. The method was optimized for temperature, reaction time, and the concentrations of DNPH, sulfur dioxide and acid. The hydrazones were shown to be stable for 10 h, adequate time for chromatographic analysis by HPLC-DAD/MS. This method is demonstrated for 2-ketoglutaric acid, pyruvic acid, acetoin and acetaldehyde, wine carbonyls of very different reactivities, and it offers good specificity, high recovery and low limits of detection. This new rapid, simple method is demonstrated for the measurement of carbonyl compounds in a range of wines of different ages and grape varieties. Copyright © 2014 Elsevier B.V. All rights reserved.
Oliveira, Fernanda Granja da Silva; de Lima-Saraiva, Sarah Raquel Gomes; Oliveira, Ana Paula; Rabêlo, Suzana Vieira; Rolim, Larissa Araújo; Almeida, Jackson Roberto Guedes da Silva
2016-01-01
Background: Popularly known as “jatobá,” Hymenaea martiana Hayne is a medicinal plant widely used in the Brazilian Northeast for the treatment of various diseases. Objective: The aim of this study was to evaluate the influence of different extractive methods on the production of phenolic compounds from different parts of H. martiana. Materials and Methods: The leaves, bark, fruits, and seeds were dried, pulverized, and submitted to maceration, ultrasound, and percolation extractive methods, which were evaluated for yield, visual aspects, qualitative phytochemical screening, phenolic compound content, and total flavonoids. Results: The highest yields were obtained from maceration of the leaves, which may be related to the contact time between the plant drug and the solvent. The visual aspects of the extracts differed somewhat between the extractive methods. The phytochemical screening showed data consistent with other studies of the genus. Both the plant part and the extractive method significantly influenced the levels of phenolic compounds, and the highest content was found in maceration of the barks, exceeding contents reported previously. Differences in total flavonoid levels were not significant. The highest concentration of total flavonoids was found in ultrasound extraction of the barks, followed by maceration of the same material. According to the results, the barks of H. martiana presented the highest total flavonoid contents. Conclusion: The results demonstrate that both the plant part and the extractive method significantly influenced various parameters of the extracts, demonstrating the importance of systematic comparative studies for the development of pharmaceuticals and cosmetics.
SUMMARY: The phytochemical screening showed data consistent with other studies of the genus Hymenaea. Both the plant part and the different extractive methods significantly influenced various parameters obtained in the various extracts, including the levels of phenolic compounds. The barks of H. martiana presented the highest total phenolic and flavonoid contents. PMID:27695267
Allnutt, Thomas F.; McClanahan, Timothy R.; Andréfouët, Serge; Baker, Merrill; Lagabrielle, Erwann; McClennen, Caleb; Rakotomanjaka, Andry J. M.; Tianarisoa, Tantely F.; Watson, Reg; Kremen, Claire
2012-01-01
The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method were drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost, and persistence probability. All results included large areas in the northern, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss of the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the “strict protection” class of the categorical result, the methods showed similar catch losses. The management category portfolio has complete coverage and presents several management recommendations, including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets and minimize exposure probabilities for conservation features at low economic cost.
We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches during initial stages of the planning process. Choosing an appropriate approach ultimately depends on scientific and political factors including representation targets, likelihood of adoption, and persistence goals. PMID:22359534
Multiple-time-stepping generalized hybrid Monte Carlo methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escribano, Bruno, E-mail: bescribano@bcamath.org; Akhmatskaya, Elena; IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared with those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
Kolle, Susanne N; Basketter, David; Schrage, Arnhild; Gamer, Armin O; van Ravenzwaay, Bennard; Landsiedel, Robert
2012-08-01
In a previous study, the predictive capacity of a modified local lymph node assay (LLNA) based on cell counts, the LNCC, was demonstrated to be closely similar to that of the original assay. In addition, a range of substances, including some technical/commercial materials and a range of agrochemical formulations (n = 180) have also been assessed in both methods in parallel. The results in the LNCC and LLNA were generally consistent, with 86% yielding an identical classification outcome. Discordant results were associated with borderline data and were evenly distributed between the two methods. Potency information derived from each method also demonstrated good consistency (n = 101), with 93% of predictions being close. Skin irritation was observed only infrequently and was most commonly associated with positive results; it was not associated with the discordant results. Where different vehicles were used with the same test material, the effect on sensitizing activity was modest, consistent with historical data. Analysis of positive control data indicated that the LNCC and LLNA displayed similar levels of biological variation. When taken in combination with the previously published results on LLNA Performance Standard chemicals, it is concluded that the LNCC provides a viable non-radioactive alternative to the LLNA for the assessment of substances, including potency predictions, as well as for the evaluation of preparations. Copyright © 2012 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraiskii, A V; Mironova, T V
2015-08-31
The results of a study of the interdiffusion of two liquids, obtained using a holographic recording scheme with a nonstationary reference wave whose frequency varies linearly in space and time, are compared with the results of correlation processing of digital photographs made with a random background screen. The spatio-temporal behaviour of the signal in four basic representations ('space – temporal frequency', 'space – time', 'spatial frequency – temporal frequency' and 'spatial frequency – time') is found in the holographic experiment and calculated (in the appropriate coordinates) based on the background-oriented schlieren method. The results of the correlation analysis and double-exposure holographic interferometry are shown to coincide in practice.
NASA Astrophysics Data System (ADS)
Wellons, Matthew S.
The design, synthesis, and characterization of magnetic alloy nanoparticles, supported formic acid oxidation catalysts, and superhard intermetallic composites are presented. Ferromagnetic equiatomic alloy nanoparticles of FePt, FePd, and CoPt were synthesized utilizing single-source heteronuclear organometallic precursors supported on an inert water-soluble matrix. Direct conversion of the precursor-support composite to supported ferromagnetic nanoparticles occurs under elevated temperatures and reducing conditions, with metal-ion reduction and minimal nanoparticle coalescence. Nanoparticles were easily extracted from the support by addition of water and characterized in structure and magnetic properties. Palladium- and platinum-based nanoparticles were synthesized with microwave-based and chemical metal-ion reduction strategies, respectively, and tested for catalytic performance in a direct formic acid fuel cell (DFAFC). A study of palladium carbide nanocomposites with various carbonaceous supports demonstrated strong activity comparable to commercially available palladium black, but poor catalytic longevity. Platinum-lead alloy nanocomposites synthesized by chemical reduction and supported on Vulcan carbon demonstrated strong activity and excellent catalytic longevity, and were subsequently incorporated into a prototype DFAFC. A new method for the synthesis of superhard ceramics on polymer substrates, called Confined Plasma Chemical Deposition (CPCD), was developed. The CPCD method utilizes a tuned free electron laser to selectively decompose the single-source precursor, Re(CO)4(B3H8), in a plasma-like state, resulting in the superhard intermetallic ReB2 deposited on polymer substrates. Extension of this method to the synthesis of other hard or superhard ceramics (WB4, RuB2, and B4C) was demonstrated.
These three areas of research show new synthetic methods and novel materials of technological importance, resulting in a substantial advance in their respective fields.
Initial Results of Illinois' Shifting Gears Pilot Demonstration Evaluation
ERIC Educational Resources Information Center
Bragg, Debra D.; Harmon, Timothy; Kirby, Catherine L.; Kim, Sujung
2009-01-01
This report provides initial results of Illinois' Shifting Gears Initiative that operated between July 1, 2007 and June 30, 2009. This mixed method (qualitative and quantitative) evaluation sought to accomplish three goals: (1) to assess program and student outcomes for two models (adult education and developmental education) for two target groups…
Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.
2010-01-01
We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
Effects of absorption on multiple scattering by random particulate media: exact results.
Mishchenko, Michael I; Liu, Li; Hovenier, Joop W
2007-10-01
We employ the numerically exact superposition T-matrix method to perform extensive computations of electromagnetic scattering by a volume of discrete random medium densely filled with increasingly absorbing as well as non-absorbing particles. Our numerical data demonstrate that increasing absorption diminishes and nearly extinguishes certain optical effects such as depolarization and coherent backscattering and increases the angular width of coherent backscattering patterns. This result corroborates the multiple-scattering origin of such effects and further demonstrates the heuristic value of the concept of multiple scattering even in application to densely packed particulate media.
Occupied Volume Integrity Testing : Elastic Test Results and Analyses
DOT National Transportation Integrated Search
2011-09-21
Federal Railroad Administration (FRA) and the Volpe Center have been conducting research into developing an alternative method of demonstrating occupied volume integrity (OVI) through a combination of testing and analysis. Previous works have been pu...
Credit Cards, Economization of Money, and Interest Rates.
ERIC Educational Resources Information Center
Steindl, Frank G.
2000-01-01
Focuses on the effect of interest rates on the increased use of credit cards, a popular method of financing households. Uses three models to demonstrate that interest rates must rise, resulting in increased consumption expenditures. (CMK)
[Closing the resection surface in left pancreatic resection with the surgical stapler].
Fuchs, M; Köhler, H; Schafmayer, A; Peiper, H J
1992-01-01
A technique is demonstrated in which the pancreatic remnant is closed with absorbable staples following left pancreatectomy. The good results, with minimal complications, recommend this method.
How-To-Do-It: Measuring Vegetation Biomass and Production.
ERIC Educational Resources Information Center
Collins, Don; Weaver, T.
1988-01-01
Describes a lab exercise used to demonstrate the measurement of biomass in a three layered forest. Discusses sampling, estimation methods, and the analysis of results. Presents an example of a summary sheet for this activity. (CW)
Evaluation results for intelligent transportation systems
DOT National Transportation Integrated Search
2000-11-09
This presentation covers the methods of evaluation set out for EC-funded ITS research and demonstration projects, known as the CONVERGE validation quality process and the lessons learned from that approach. The new approach to appraisal, which is bei...
Early Deployment Of Atms/Atis For Metropolitan Detroit, Final Report
DOT National Transportation Integrated Search
1994-09-26
Technology, architecture, contracting, and deployment recommendations resulting from the study enable MDOT to begin system design and construction. However, in order to demonstrate the implementation methods of new ATMS/ATIS components and system arc...
Using creation science to demonstrate evolution? Senter's strategy revisited.
Wood, T C
2011-04-01
Senter's strategy of arguing against creationism using creationists' own methodology focused on demonstrating a morphological continuum between birds and nonavian dinosaurs using classical multidimensional scaling (CMDS), a method used by some creationists to assist in the detection of phylogenetic 'discontinuities.' Because creationists do not typically use CMDS in the manner Senter used it, his results were re-examined using 'distance correlation,' a method used to assign species to 'created kinds.' Distance correlation using Senter's set of taxa and characters supports his conclusion of morphological continuity, but other sets of taxa with more characters do not. These results lessen the potential impact that Senter's strategy might have on creationism; however, it is possible that future fossil discoveries will provide stronger support for morphological continuity between dinosaurs and birds. © 2011 The Author. Journal of Evolutionary Biology © 2011 European Society For Evolutionary Biology.
Hudson, Phillip S; Woodcock, H Lee; Boresch, Stefan
2015-12-03
Carrying out free energy simulations (FES) using quantum mechanical (QM) Hamiltonians remains an attractive, albeit elusive, goal. Renewed efforts in this area have focused on using "indirect" thermodynamic cycles to connect "low level" simulation results to "high level" free energies. The main obstacle to computing converged free energy results between molecular mechanical (MM) and QM Hamiltonians (ΔA(MM→QM)), as recently demonstrated by us and others, is differences in the so-called "stiff" degrees of freedom (e.g., bond stretching) between the respective energy surfaces. Herein, we demonstrate that this problem can be efficiently circumvented using nonequilibrium work (NEW) techniques, i.e., Jarzynski's and Crooks' equations. Initial applications of computing ΔA(NEW)(MM→QM) for the blocked amino acids alanine and serine, as well as to generate butane's potentials of mean force via the indirect QM/MM FES method, showed marked improvement over traditional FES approaches.
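As a hedged illustration of the nonequilibrium-work machinery this abstract relies on (not the authors' QM/MM workflow), Jarzynski's equality ΔA = -kT ln⟨exp(-W/kT)⟩ can be checked on synthetic Gaussian work samples, for which the free energy difference has a closed form; the units, sample size, and distribution parameters below are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0                     # thermal energy in reduced units (assumption)
mu, sigma = 2.0, 0.5         # mean and spread of the work distribution
W = rng.normal(mu, sigma, size=200_000)   # synthetic work values

def jarzynski_free_energy(W, kT=1.0):
    """Estimate dA = -kT * ln <exp(-W/kT)> from work samples,
    using a log-sum-exp shift for numerical stability."""
    x = -W / kT
    m = x.max()
    log_mean = m + np.log(np.exp(x - m).mean())
    return -kT * log_mean

dA_est = jarzynski_free_energy(W, kT)
# For Gaussian work, Jarzynski's equality gives exactly dA = mu - sigma^2/(2 kT).
dA_exact = mu - sigma**2 / (2 * kT)
print(dA_est, dA_exact)
```

The exponential average is dominated by rare low-work samples, which is why such estimators converge slowly when sigma/kT is large; the Gaussian case above keeps that ratio small so the check is well-behaved.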
Friction-term response to boundary-condition type in flow models
Schaffranek, R.W.; Lai, C.
1996-01-01
The friction-slope term in the unsteady open-channel flow equations is examined using two numerical models based on different formulations of the governing equations and employing different solution methods. The purposes of the study are to analyze, evaluate, and demonstrate the behavior of the term in a set of controlled numerical experiments using varied types and combinations of boundary conditions. Results of numerical experiments illustrate that a given model can respond inconsistently for the identical resistance-coefficient value under different types and combinations of boundary conditions. Findings also demonstrate that two models employing different dependent variables and solution methods can respond similarly for the identical resistance-coefficient value under similar types and combinations of boundary conditions. Discussion of qualitative considerations and quantitative experimental results provides insight into the proper treatment, evaluation, and significance of the friction-slope term, thereby offering practical guidelines for model implementation and calibration.
NASA Astrophysics Data System (ADS)
Jian, Zhongping
This thesis describes the study of two-dimensional photonic crystal slabs with terahertz time-domain spectroscopy. We first demonstrate the realization of planar photonic components to manipulate terahertz waves, and then characterize photonic crystals using terahertz pulses. Photonic crystal slabs at the micrometer scale are designed and fabricated free of defects. A terahertz time-domain spectrometer generates and detects the electric fields of single-cycle terahertz pulses. By placing photonic crystals into a waveguide geometry, we successfully demonstrate planar photonic components such as transmission filters, reflection frequency-selective filters, defect modes, and superprisms. In the characterization of the out-of-plane properties of photonic crystal slabs, we observe very strong dispersion at low frequencies, guided resonance modes at middle frequencies, and a group velocity anomaly at high frequencies. We employ the finite element method and the finite-difference time-domain method to simulate the photonic crystals, and excellent agreement is achieved between simulation and experimental results.
Benchmark On Sensitivity Calculation (Phase III)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, Tatiana; Laville, Cedric; Dyrda, James
2012-01-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
Diverse Region-Based CNN for Hyperspectral Image Classification.
Zhang, Mengmeng; Li, Wei; Du, Qian
2018-06-01
Convolutional neural networks (CNNs) are of great interest in machine learning and have demonstrated excellent performance in hyperspectral image classification. In this paper, we propose a classification framework, called diverse region-based CNN, which can encode a semantic context-aware representation to obtain promising features. By merging a diverse set of discriminative appearance factors, the resulting CNN-based representation exhibits spatial-spectral context sensitivity that is essential for accurate pixel classification. The proposed method, which exploits diverse region-based inputs to learn contextual interactional features, is expected to have more discriminative power. The joint representation containing rich spectral and spatial information is then fed to a fully connected network, and the label of each pixel vector is predicted by a softmax layer. Experimental results with widely used hyperspectral image data sets demonstrate that the proposed method surpasses conventional deep-learning-based classifiers and other state-of-the-art classifiers.
First On-Wafer Power Characterization of MMIC Amplifiers at Sub-Millimeter Wave Frequencies
NASA Technical Reports Server (NTRS)
Fung, A. K.; Gaier, T.; Samoska, L.; Deal, W. R.; Radisic, V.; Mei, X. B.; Yoshida, W.; Liu, P. S.; Uyeda, J.; Barsky, M.;
2008-01-01
Recent developments in semiconductor technology have enabled advanced submillimeter-wave (300 GHz) transistors and circuits. These new high-speed components have required new test methods to be developed for characterizing performance and for providing data for device modeling to improve designs. Recent efforts to advance high-frequency testing have resulted in on-wafer parameter measurements up to approximately 340 GHz and swept-frequency vector network analyzer waveguide measurements to 508 GHz. On-wafer noise figure measurements in the 270-340 GHz band have been demonstrated. In this letter we report on-wafer power measurements at 330 GHz of a three-stage amplifier that yielded a maximum measured output power of 1.78 mW and a maximum gain of 7.1 dB. The method demonstrates the extension of traditional power measurement techniques to submillimeter-wave frequencies and is suitable for automated testing without packaging for production screening of submillimeter-wave circuits.
DuPont Qualicon BAX System assay for genus Listeria 24E.
Wallace, F Morgan; Fallon, Dawn; DeMarco, Daniel; Varkey, Stephen
2011-01-01
The new BAX System PCR Assay for Genus Listeria 24E was evaluated for detecting Listeria spp. in frankfurters, spinach, cooked shrimp, queso fresco cheese, and on stainless steel surfaces with a single-stage enrichment in BAX System 24 Listeria Enrichment Broth (24 LEB). Method comparison studies performed on samples with low-level inocula showed that the BAX System demonstrates a sensitivity equivalent or superior to the U.S. Food and Drug Administration's Bacteriological Analytical Manual and the U.S. Department of Agriculture-Food Safety and Inspection Service culture methods, but with a significantly shorter time to result. Tests to evaluate inclusivity and exclusivity returned no false-negative and no false-positive results on a diverse panel of isolates, and tests for lot-to-lot variability and tablet stability demonstrated consistent performance. Ruggedness studies determined that none of the factors examined, within the range of deviations from specified parameters examined, affect the performance of the assay.
Generation of nondiffracting Bessel beam using digital micromirror device.
Gong, Lei; Ren, Yu-Xuan; Xue, Guo-Sheng; Wang, Qian-Chang; Zhou, Jin-Hua; Zhong, Min-Cheng; Wang, Zi-Qiang; Li, Yin-Mei
2013-07-01
We experimentally demonstrate Bessel-like beams generated with a digital micromirror device (DMD). Displaying images that imitate an equivalent axicon, the DMD shapes a collimated Gaussian beam into a Bessel beam. We reconstructed the 3D spatial field of the generated beam from a stack of measured cross-sectional images. The output beams have a Bessel-function profile after intensity modulation, and the beams extend at least 50 mm while the lateral dimension of the spot remains nearly invariant. Furthermore, the self-healing property has also been investigated, and all the experimental results agree well with simulations calculated numerically with the beam propagation method. Our observations demonstrate that the DMD offers a simple and efficient method to generate Bessel beams with distinct nondiffracting and self-reconstruction behaviors. The generated Bessel beams can potentially expand applications in optical manipulation and high-resolution fluorescence imaging owing to the unique nondiffracting property.
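The axicon-imitating DMD image described above can be sketched as a binarized conical phase profile, thresholded into on/off mirror states; the pixel pitch, wavelength, and cone angle below are hypothetical values, not those of the experiment:

```python
import numpy as np

def binary_axicon_pattern(n=512, pixel_um=10.8, wavelength_um=0.532, cone_deg=0.5):
    """Binary DMD pattern imitating an axicon: threshold the conical phase
    2*pi*r*sin(theta)/lambda at pi to decide each mirror's on/off state."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    r = np.hypot(x, y) * pixel_um                      # radial distance in um
    phase = 2 * np.pi * r * np.sin(np.deg2rad(cone_deg)) / wavelength_um
    return (np.mod(phase, 2 * np.pi) < np.pi).astype(np.uint8)

pattern = binary_axicon_pattern()
# The result is a set of concentric binary rings; their radial period
# lambda / sin(theta) sets the Bessel beam's transverse scale.
```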
Ashtiani Haghighi, Donya; Mobayen, Saleh
2018-04-01
This paper proposes an adaptive super-twisting decoupled terminal sliding mode control technique for a class of fourth-order systems. The adaptive-tuning law eliminates the need for knowledge of the upper bounds of external perturbations. Using the proposed control procedure, the state variables of the cart-pole system converge to the decoupled terminal sliding surfaces and their equilibrium points in finite time. Moreover, via the super-twisting algorithm, the chattering phenomenon is avoided without affecting the control performance. The numerical results demonstrate the high stabilization accuracy and lower performance-index values of the suggested method compared with the other approaches. The simulation results on the cart-pole system, together with experimental validations, demonstrate that the proposed control technique exhibits reasonable performance in comparison with the other methods. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Hospital non-price competition under the Global Budget Payment and Prospective Payment Systems.
Chen, Wen-Yi; Lin, Yu-Hui
2008-06-01
This paper provides theoretical analyses of two alternative hospital payment systems for controlling medical cost: the Global Budget Payment System (GBPS) and the Prospective Payment System (PPS). The former method assigns a fixed total budget for all healthcare services over a given period with hospitals being paid on a fee-for-service basis. The latter method is usually connected with a fixed payment to hospitals within a Diagnosis-Related Group. Our results demonstrate that, given the same expenditure, the GBPS would approach optimal levels of quality and efficiency as well as the level of social welfare provided by the PPS, as long as market competition is sufficiently high; our results also demonstrate that the treadmill effect, modeling an inverse relationship between price and quantity under the GBPS, would be a quality-enhancing and efficiency-improving outcome due to market competition.
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
NASA Astrophysics Data System (ADS)
McGillivray, Max Falkenberg; Cheng, William; Peters, Nicholas S.; Christensen, Kim
2018-04-01
Mapping resolution has recently been identified as a key limitation in successfully locating the drivers of atrial fibrillation (AF). Using a simple cellular automata model of AF, we demonstrate a method by which re-entrant drivers can be located quickly and accurately using a collection of indirect electrogram measurements. The method proposed employs simple, out-of-the-box machine learning algorithms to correlate characteristic electrogram gradients with the displacement of an electrogram recording from a re-entrant driver. Such a method is less sensitive to local fluctuations in electrical activity. As a result, the method successfully locates 95.4% of drivers in tissues containing a single driver, and 95.1% (92.6%) for the first (second) driver in tissues containing two drivers of AF. Additionally, we demonstrate how the technique can be applied to tissues with an arbitrary number of drivers. In its current form, the techniques presented are not refined enough for a clinical setting. However, the methods proposed offer a promising path for future investigations aimed at improving targeted ablation for AF.
Iterative optimization method for design of quantitative magnetization transfer imaging experiments.
Levesque, Ives R; Sled, John G; Pike, G Bruce
2011-09-01
Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
Application of optical broadband monitoring to quasi-rugate filters by ion-beam sputtering
NASA Astrophysics Data System (ADS)
Lappschies, Marc; Görtz, Björn; Ristau, Detlev
2006-03-01
Methods for the manufacture of rugate filters by the ion-beam-sputtering process are presented. The first approach gives an example of a digitized version of a continuous-layer notch filter. This method allows comparison of the basic theory of interference coatings containing thin layers with practical results. For the other methods, a movable zone target is employed to fabricate graded and gradual rugate filters. The examples demonstrate the potential of broadband optical monitoring in conjunction with the ion-beam-sputtering process. First characterization results indicate that these types of filters may exhibit higher laser-induced damage-threshold values than those of classical filters.
Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals
Zhao, Ziyue; Liu, Congfeng
2014-01-01
In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. First, an array signal model for multicomponent chirp signals is presented, and array processing is applied in time-frequency analysis to mitigate cross-terms. Based on the results of the array processing, a Hough transform is performed to obtain the estimate of the time-frequency signature. Subsequently, a subspace method for DOA estimation based on the STFD matrix is derived. Simulation results demonstrate the validity of the proposed method. PMID:27382610
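The Hough step alone can be illustrated with a toy line accumulator over a binary time-frequency mask: each candidate (start frequency, chirp rate) pair collects votes along the line f = f0 + rate·t, and the maximum identifies the chirp. The STFD construction and array processing are omitted, and the mask below is synthetic:

```python
import numpy as np

def hough_chirp(tf_mask, f0_grid, rate_grid):
    """Line Hough transform over a binary time-frequency mask: each candidate
    (f0, rate) accumulates mask values along the line f = f0 + rate * t."""
    n_t, n_f = tf_mask.shape
    t = np.arange(n_t)
    acc = np.zeros((len(f0_grid), len(rate_grid)))
    for i, f0 in enumerate(f0_grid):
        for j, r in enumerate(rate_grid):
            f = np.rint(f0 + r * t).astype(int)
            ok = (f >= 0) & (f < n_f)                 # stay inside the mask
            acc[i, j] = tf_mask[t[ok], f[ok]].sum()
    i, j = np.unravel_index(acc.argmax(), acc.shape)  # peak = chirp estimate
    return f0_grid[i], rate_grid[j]

# Synthetic mask containing one chirp line f = 10 + 0.5 * t
mask = np.zeros((64, 64))
tt = np.arange(64)
mask[tt, np.rint(10 + 0.5 * tt).astype(int)] = 1.0
f0_est, rate_est = hough_chirp(mask, np.arange(0, 20), np.linspace(0, 1, 21))
# -> recovers f0 = 10 and rate = 0.5
```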
NASA Technical Reports Server (NTRS)
Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.
1991-01-01
The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.
Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...
2016-09-18
This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.
NASA Technical Reports Server (NTRS)
Pepin, T. J.
1977-01-01
The inversion methods are reported that have been used to determine the vertical profile of the extinction coefficient due to the stratospheric aerosols from data measured during the ASTP/SAM solar occultation experiment. Inversion methods include the onion skin peel technique and methods of solving the Fredholm equation for the problem subject to smoothing constraints. The latter of these approaches involves a double inversion scheme. Comparisons are made between the inverted results from the SAM experiment and near simultaneous measurements made by lidar and balloon born dustsonde. The results are used to demonstrate the assumptions required to perform the inversions for aerosols.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1986-01-01
An abstract approximation theory and computational methods are developed for the determination of optimal linear-quadratic feedback control, observers and compensators for infinite dimensional discrete-time systems. Particular attention is paid to systems whose open-loop dynamics are described by semigroups of operators on Hilbert spaces. The approach taken is based on the finite dimensional approximation of the infinite dimensional operator Riccati equations which characterize the optimal feedback control and observer gains. Theoretical convergence results are presented and discussed. Numerical results for an example involving a heat equation with boundary control are presented and used to demonstrate the feasibility of the method.
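The finite-dimensional approximations described above ultimately reduce to matrix Riccati equations. As a toy illustration, the discrete-time LQR gain for a small system can be obtained by iterating the Riccati difference equation to a fixed point; the 2-state system below is an invented stand-in, not the paper's heat-equation example:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR: iterate the Riccati difference equation
    P <- Q + A'P(A - BK), K = (R + B'PB)^-1 B'PA, to a fixed point."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
    return K, P

# Hypothetical 2-state system (think: a coarse modal truncation)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
K, P = dlqr_gain(A, B, np.eye(2), np.array([[1.0]]))
# The closed-loop matrix A - B K has spectral radius below 1 (stable).
```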
Application of XGBoost algorithm in hourly PM2.5 concentration prediction
NASA Astrophysics Data System (ADS)
Pan, Bingyue
2018-02-01
To improve techniques for predicting hourly PM2.5 concentration in China, this paper applies the XGBoost (Extreme Gradient Boosting) algorithm. Air-quality monitoring data from the city of Tianjin were analyzed using XGBoost. The prediction performance of the XGBoost method is evaluated by comparing observed and predicted PM2.5 concentrations using three measures of forecast accuracy. The XGBoost method is also compared with random forest, multiple linear regression, decision tree regression, and support vector regression models. The results demonstrate that the XGBoost algorithm outperforms the other data mining methods.
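XGBoost itself is a heavily engineered library, but its underlying idea, fitting each new tree to the residuals of the current prediction, can be sketched with depth-1 trees (stumps) in plain NumPy. The two-feature "pollution" data set below is synthetic, and this is a caricature of gradient boosting, not the XGBoost implementation:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-feature threshold split minimizing squared error."""
    best = (np.inf, None, 0.0, 0.0)
    for j in range(x.shape[1]):
        for thr in np.unique(x[:, j])[:-1]:
            left = x[:, j] <= thr
            lm, rm = residual[left].mean(), residual[~left].mean()
            err = ((residual[left] - lm) ** 2).sum() + ((residual[~left] - rm) ** 2).sum()
            if err < best[0]:
                best = (err, (j, thr), lm, rm)
    return best[1], best[2], best[3]

def boost(x, y, n_rounds=50, lr=0.1):
    """Gradient boosting for squared loss: each round fits a stump to the
    current residuals and adds a shrunken copy to the prediction."""
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        (j, thr), lm, rm = fit_stump(x, y - pred)
        pred += lr * np.where(x[:, j] <= thr, lm, rm)
    return pred

# Toy "hourly PM2.5" regression: target depends on two features
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=(200, 2))
y = 50 * x[:, 0] + 20 * (x[:, 1] > 0.5) + rng.normal(0, 1, 200)
rmse = np.sqrt(np.mean((y - boost(x, y)) ** 2))
# Training RMSE ends up well below the standard deviation of y.
```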
THE SEDIMENTATION PROPERTIES OF THE SKIN-SENSITIZING ANTIBODIES OF RAGWEED-SENSITIVE PATIENTS
Andersen, Burton R.; Vannier, Wilton E.
1964-01-01
The sedimentation coefficients of the skin-sensitizing antibodies to ragweed were evaluated by the moving partition cell method and the sucrose density gradient method. The most reliable results were obtained by sucrose density gradient ultracentrifugation which showed that the major portion of skin-sensitizing antibodies to ragweed sediment with an average value of 7.7S (7.4 to 7.9S). This is about one S unit faster than γ-globulins (6.8S). The data from the moving partition cell method are in agreement with these results. Our studies failed to demonstrate heterogeneity of the skin-sensitizing antibodies with regard to sedimentation rate. PMID:14194391
Xu, Wenjun; Tang, Chen; Gu, Fan; Cheng, Jiajia
2017-04-01
Removing the heavy speckle noise in electronic speckle pattern interferometry (ESPI) fringe patterns is a key step. Among spatial-domain filtering methods, oriented partial differential equations have been demonstrated to be a powerful tool; among transform-domain filtering methods, the shearlet transform is a state-of-the-art method. In this paper, we propose a filtering method for ESPI fringe-pattern denoising that combines a second-order oriented partial differential equation (SOOPDE) with the shearlet transform, named SOOPDE-Shearlet. Here, the shearlet transform is introduced into ESPI fringe-pattern denoising for the first time. This combination takes advantage of the fact that the spatial-domain filtering method SOOPDE and the transform-domain filtering method shearlet transform benefit from each other. We test the proposed SOOPDE-Shearlet on five experimentally obtained ESPI fringe patterns with poor quality and compare our method with SOOPDE, the shearlet transform, windowed Fourier filtering (WFF), and coherence-enhancing diffusion (CEDPDE). Among them, WFF and CEDPDE are the state-of-the-art methods for ESPI fringe-pattern denoising in the transform domain and spatial domain, respectively. The experimental results demonstrate the good performance of the proposed SOOPDE-Shearlet.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
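A bare-bones recursive least squares update (without the residual-autocorrelation correction developed in the paper) looks like the following; the two-parameter "aerodynamic" model and noise level are invented for illustration:

```python
import numpy as np

def recursive_least_squares(X, z, forgetting=1.0):
    """Standard RLS: update the parameter estimate theta and the scaled
    covariance P one measurement at a time."""
    n = X.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) * 1e6                        # diffuse prior on parameters
    for x, zi in zip(X, z):
        k = P @ x / (forgetting + x @ P @ x)   # gain vector
        theta = theta + k * (zi - x @ theta)   # innovation update
        P = (P - np.outer(k, x @ P)) / forgetting
    return theta, P

# Toy linear model: z = 0.5*alpha - 1.2*delta_e + white noise
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2))
z = X @ np.array([0.5, -1.2]) + rng.normal(0, 0.05, 500)
theta, P = recursive_least_squares(X, z)
# theta approaches [0.5, -1.2]; sigma^2 * P approximates the parameter
# covariance only when the residuals are white, which is exactly the
# assumption the paper's correction relaxes.
```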
Estimation of Noise Properties for TV-regularized Image Reconstruction in Computed Tomography
Sánchez, Adrian A.
2016-01-01
A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR. PMID:26308968
Demonstration of arbitrary views based on autostereoscopic three-dimensional display system
NASA Astrophysics Data System (ADS)
Liu, Boyang; Sang, Xinzhu; Yu, Xunbo; Li, Liu; Yang, Le; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
A method to realize arbitrary views for the lenticular lens array based on autostereoscopic three-dimensional display system is demonstrated. Normally, the number of views is proportional to pitch of the lenticular lens array. Increasing the number of views will result in reducing resolution and enhancing of granular sensation. 32 dense views can be achieved with one lenticular lens pitch covering 5.333 sub-pixels, which does significantly increases the number of views without affecting the resolution. But the structure of pitch and the number of views are fixed. Here, the 3D display method that the number of views can be changed artificially for most structures of lenticular lens is presented. Compared with the previous 32 views display method, the smoothness of motion parallex and the display depth of field are significantly improved.
Noninvasive measurement of glucose concentration on human fingertip by optical coherence tomography
NASA Astrophysics Data System (ADS)
Chen, Tseng-Lin; Lo, Yu-Lung; Liao, Chia-Chi; Phan, Quoc-Hung
2018-04-01
A method is proposed for determining the glucose concentration on the human fingertip by extracting two optical parameters, namely the optical rotation angle and the depolarization index, using a Mueller optical coherence tomography technique and a genetic algorithm. The feasibility of the proposed method is demonstrated by measuring the optical rotation angle and depolarization index of aqueous glucose solutions with low and high scattering, respectively. It is shown that for both solutions, the optical rotation angle and depolarization index vary approximately linearly with the glucose concentration. As a result, the ability of the proposed method to obtain the glucose concentration by means of just two optical parameters is confirmed. The practical applicability of the proposed technique is demonstrated by measuring the optical rotation angle and depolarization index on the human fingertip of healthy volunteers under various glucose conditions.
Large-cell Monte Carlo renormalization of irreversible growth processes
NASA Technical Reports Server (NTRS)
Nakanishi, H.; Family, F.
1985-01-01
Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers and, in any case, demonstrating the danger of using small cells alone because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited aggregation model is also commented on.
Theoretical analysis of HVAC duct hanger systems
NASA Technical Reports Server (NTRS)
Miller, R. D.
1987-01-01
Several methods are presented which, together, may be used in the analysis of duct hanger systems over a wide range of frequencies. The finite element method (FEM) and component mode synthesis (CMS) method are used for low- to mid-frequency range computations and have been shown to yield reasonably close results. The statistical energy analysis (SEA) method yields predictions which agree with the CMS results for the 800 to 1000 Hz range provided that a sufficient number of modes participate. The CMS approach has been shown to yield valuable insight into the mid-frequency range of the analysis. It has been demonstrated that it is possible to conduct an analysis of a duct/hanger system in a cost-effective way for a wide frequency range, using several methods which overlap for several frequency bands.
Relation extraction for biological pathway construction using node2vec.
Kim, Munui; Baek, Seung Han; Song, Min
2018-06-13
Systems biology is an important field for understanding whole biological mechanisms composed of interactions between biological components. One approach for understanding complex and diverse mechanisms is to analyze biological pathways. However, because these pathways consist of important interactions and information on these interactions is disseminated in a large number of biomedical reports, text-mining techniques are essential for extracting these relationships automatically. In this study, we applied node2vec, an algorithmic framework for feature learning in networks, for relationship extraction. To this end, we extracted genes from paper abstracts using pkde4j, a text-mining tool for detecting entities and relationships. Using the extracted genes, a co-occurrence network was constructed and node2vec was used with the network to generate a latent representation. To demonstrate the efficacy of node2vec in extracting relationships between genes, performance was evaluated for gene-gene interactions involved in a type 2 diabetes pathway. Moreover, we compared the results of node2vec to those of baseline methods such as co-occurrence and DeepWalk. Node2vec outperformed existing methods in detecting relationships in the type 2 diabetes pathway, demonstrating that this method is appropriate for capturing the relatedness between pairs of biological entities involved in biological pathways. The results demonstrated that node2vec is useful for automatic pathway construction.
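The pipeline above (entity extraction, co-occurrence graph, node2vec walks) can be caricatured in a few lines. The gene lists here are hypothetical, and the skip-gram embedding step that node2vec feeds its walks into is omitted:

```python
import random
from collections import defaultdict

def cooccurrence_graph(abstracts):
    """Undirected graph linking genes mentioned in the same abstract."""
    graph = defaultdict(set)
    for genes in abstracts:
        for a in genes:
            for b in genes:
                if a != b:
                    graph[a].add(b)
    return graph

def node2vec_walk(graph, start, length, p=1.0, q=0.5, rng=random):
    """One node2vec-style biased walk: next-step weights are 1/p for
    returning to the previous node, 1 for its neighbors, 1/q otherwise."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = sorted(graph[cur])
        if not nbrs:
            break
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))
            continue
        prev = walk[-2]
        weights = [1.0 / p if n == prev else
                   1.0 if n in graph[prev] else
                   1.0 / q for n in nbrs]
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

# Hypothetical gene lists extracted from three abstracts
abstracts = [["INS", "IRS1", "AKT2"], ["IRS1", "AKT2", "SLC2A4"], ["INS", "SLC2A4"]]
g = cooccurrence_graph(abstracts)
walk = node2vec_walk(g, "INS", length=10, rng=random.Random(0))
# The collected walks would then train a skip-gram model to embed genes.
```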
Computational composite mechanics for aerospace propulsion structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1986-01-01
Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating (1) complex composite structural behavior in general and (2) specific aerospace propulsion structural components in particular.
On the equivalence of LIST and DIIS methods for convergence acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garza, Alejandro J.; Scuseria, Gustavo E.
2015-04-28
Self-consistent field extrapolation methods play a pivotal role in quantum chemistry and electronic structure theory. Here, we demonstrate the mathematical equivalence between the recently proposed family of LIST methods [Wang et al., J. Chem. Phys. 134, 241103 (2011); Y. K. Chen and Y. A. Wang, J. Chem. Theory Comput. 7, 3045 (2011)] and the general form of Pulay’s DIIS [Chem. Phys. Lett. 73, 393 (1980); J. Comput. Chem. 3, 556 (1982)] with specific error vectors. Our results also explain the differences in performance among the various LIST methods.
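The DIIS extrapolation that the paper relates to LIST reduces to a small equality-constrained least-squares problem: find coefficients that minimise the norm of the combined error vector subject to the coefficients summing to one. A minimal sketch, assuming plain Python lists for the error vectors (the particular choice of error vector is exactly what distinguishes the LIST variants):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in reversed(range(n)):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def diis_coefficients(errors):
    """Pulay DIIS: minimise |sum_i c_i e_i|^2 subject to sum_i c_i = 1.

    Solved via the Lagrangian system [B -1; -1 0][c; l] = [0; -1],
    where B_ij = <e_i, e_j>.
    """
    m = len(errors)
    B = [[sum(a * b for a, b in zip(errors[i], errors[j])) for j in range(m)]
         for i in range(m)]
    A = [B[i] + [-1.0] for i in range(m)] + [[-1.0] * m + [0.0]]
    rhs = [0.0] * m + [-1.0]
    return solve(A, rhs)[:m]

# Two error vectors pointing in opposite directions: the minimising
# combination weights them equally so the combined error vanishes.
c = diis_coefficients([[1.0, 0.0], [-1.0, 0.0]])
print(c)  # [0.5, 0.5]
```

The extrapolated state is then the same linear combination of the stored trial states, which is where the various LIST flavours and DIIS coincide.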
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
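The discriminative selection step can be illustrated with the standard two-class Fisher criterion: score each candidate projection by how well it separates a user's own samples from impostor samples, then keep the best-scoring rows. This is a hedged sketch, not the authors' implementation; the projection matrix and feature vectors below are toy values:

```python
def fisher_score(genuine, impostor):
    """Fisher criterion of one projection: (mu1 - mu2)^2 / (var1 + var2)."""
    def stats(v):
        m = sum(v) / len(v)
        return m, sum((x - m) ** 2 for x in v) / len(v)
    m1, v1 = stats(genuine)
    m2, v2 = stats(impostor)
    return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)

def select_rows(P, genuine, impostor, k):
    """Keep the k rows of projection matrix P whose projected values best
    separate the enrolled user's samples from impostor samples."""
    project = lambda row, xs: [sum(r * x for r, x in zip(row, v)) for v in xs]
    scores = [(fisher_score(project(row, genuine), project(row, impostor)), i)
              for i, row in enumerate(P)]
    return sorted(i for _, i in sorted(scores, reverse=True)[:k])

P = [[0.0, 1.0], [1.0, 0.0]]             # two candidate random projections
genuine = [[1.0, 0.1], [1.1, -0.1]]      # enrolled user's feature vectors (toy)
impostor = [[-1.0, 0.2], [-0.9, -0.2]]   # other users' feature vectors (toy)
print(select_rows(P, genuine, impostor, 1))  # [1]: row [1, 0] separates the classes
```

In the full method the retained projections are quantized (here, via a bimodal Gaussian mixture model) to produce the final hash bits.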
Computational composite mechanics for aerospace propulsion structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1987-01-01
Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating: (1) complex composite structural behavior in general, and (2) specific aerospace propulsion structural components in particular.
A coherent discrete variable representation method on a sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Hua-Gen
Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two-dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.
A coherent discrete variable representation method on a sphere
Yu, Hua-Gen
2017-09-05
Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two-dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.
A weight modification sequential method for VSC-MTDC power system state estimation
NASA Astrophysics Data System (ADS)
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weight of the state quantity to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the calculation speed. The effectiveness of the proposed weight modification sequential method is demonstrated and validated on the modified IEEE 14-bus system.
Analytic method for calculating properties of random walks on networks
NASA Technical Reports Server (NTRS)
Goldhirsch, I.; Gefen, Y.
1986-01-01
A method for calculating the properties of discrete random walks on networks is presented. The method divides complex networks into simpler units whose contribution to the mean first-passage time is calculated. The simplified network is then further iterated. The method is demonstrated by calculating mean first-passage times on a segment, a segment with a single dangling bond, a segment with many dangling bonds, and a looplike structure. The results are analyzed and related to the applicability of the Einstein relation between conductance and diffusion.
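For the simplest network in the list, a segment with one reflecting and one absorbing end, the mean first-passage time can be computed exactly. A small sketch of that one-dimensional special case (the paper's iterative decimation of complex networks, dangling bonds and loops is not reproduced here):

```python
def mfpt_segment(n):
    """Mean first-passage time of a symmetric walk from site 0 to site n
    on a segment with a reflecting end at 0 and an absorbing end at n.

    Writing d_i = t_i - t_(i+1): the reflecting end gives d_0 = 1, and the
    interior recursion t_i = 1 + (t_(i-1) + t_(i+1))/2 gives d_i = d_(i-1) + 2,
    so t_0 is the sum of the first n odd numbers, i.e. n^2.
    """
    d, t0 = 1, 0
    for _ in range(n):
        t0 += d
        d += 2
    return t0

print(mfpt_segment(5))  # 25: the classic n^2 diffusive scaling
```

The n^2 result is the discrete analogue of the Einstein relation mentioned in the abstract: passage time grows with the square of the distance for diffusive transport.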
The Second NASA Formal Methods Workshop 1992
NASA Technical Reports Server (NTRS)
Johnson, Sally C. (Compiler); Holloway, C. Michael (Compiler); Butler, Ricky W. (Compiler)
1992-01-01
The primary goal of the workshop was to bring together formal methods researchers and aerospace industry engineers to investigate new opportunities for applying formal methods to aerospace problems. The first part of the workshop was tutorial in nature. The second part of the workshop explored the potential of formal methods to address current aerospace design and verification problems. The third part of the workshop involved on-line demonstrations of state-of-the-art formal verification tools. Also, a detailed survey was filled in by the attendees; the results of the survey are compiled.
Rolling bearing fault diagnosis based on information fusion using Dempster-Shafer evidence theory
NASA Astrophysics Data System (ADS)
Pei, Di; Yue, Jianhai; Jiao, Jing
2017-10-01
This paper presents a fault diagnosis method for rolling bearings based on information fusion. Acceleration sensors are arranged at different positions to collect bearing vibration data as diagnostic evidence. Dempster-Shafer (D-S) evidence theory is used to fuse the multi-sensor data to improve diagnostic accuracy. The efficiency of the proposed method is demonstrated on a high-speed train transmission test bench. The experimental results show that the proposed method improves rolling bearing fault diagnosis accuracy compared with traditional signal analysis methods.
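The fusion step rests on Dempster's rule of combination, which can be sketched directly. The fault hypotheses and sensor mass assignments below are illustrative values, not data from the test bench:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    m1, m2: dicts mapping frozenset hypotheses -> mass.  Masses of
    intersecting hypotheses are multiplied and accumulated; the conflicting
    mass K (empty intersections) renormalises the result by 1 - K.
    """
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    scale = 1.0 - conflict
    return {h: v / scale for h, v in combined.items()}

# Two vibration sensors reporting on bearing fault hypotheses (toy masses):
F_IN, F_OUT = frozenset({"inner"}), frozenset({"outer"})
BOTH = F_IN | F_OUT                       # ignorance: either fault possible
s1 = {F_IN: 0.6, F_OUT: 0.1, BOTH: 0.3}   # evidence from sensor 1
s2 = {F_IN: 0.5, F_OUT: 0.2, BOTH: 0.3}   # evidence from sensor 2
fused = dempster_combine(s1, s2)
print(fused[F_IN])  # fused belief in the inner-race fault exceeds 0.6 and 0.5
```

Agreement between sensors concentrates mass on the shared hypothesis, which is why fusing multiple sensor positions sharpens the diagnosis.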
Watson, Stuart K; Lambeth, Susan P; Schapiro, Steven J; Whiten, Andrew
2018-05-01
How animal communities arrive at homogeneous behavioural preferences is a central question for studies of cultural evolution. Here, we investigated whether chimpanzees (Pan troglodytes) would relinquish a pre-existing behaviour to adopt an alternative demonstrated by an overwhelming majority of group mates; in other words, whether chimpanzees behave in a conformist manner. In each of five groups of chimpanzees (N = 37), one individual was trained on one method of opening a two-action puzzle box to obtain food, while the remaining individuals learned the alternative method. Over 5 h of open access to the apparatus in a group context, it was found that 4/5 'minority' individuals explored the majority method and three of these used this new method in the majority of trials. Those that switched did so after observing only a small subset of their group, thereby not matching conventional definitions of conformity. In a further 'Dyad' condition, six pairs of chimpanzees were trained on alternative methods and then given access to the task together. Only one of these individuals ever switched method. The number of observations that individuals in the minority and Dyad individuals made of their untrained method was not found to influence whether or not they themselves switched to use it. In a final 'Asocial' condition, individuals (N = 10) did not receive social information and did not deviate from their first-learned method. We argue that these results demonstrate an important influence of social context upon prioritisation of social information over pre-existing methods, which can result in group homogeneity of behaviour.
A YinYang bipolar fuzzy cognitive TOPSIS method to bipolar disorder diagnosis.
Han, Ying; Lu, Zhenyu; Du, Zhenguang; Luo, Qi; Chen, Sheng
2018-05-01
Bipolar disorder is often mis-diagnosed as unipolar depression in clinical diagnosis. The main reason is that, unlike in other diseases, bipolarity is the norm rather than the exception in bipolar disorder diagnosis. The YinYang bipolar fuzzy set captures bipolarity and has been successfully used to construct a unified inference mathematical modeling method for bipolar disorder clinical diagnosis. Nevertheless, symptoms and their interrelationships are not considered in the existing method, limiting its ability to describe the complexity of bipolar disorder. Thus, in this paper, a YinYang bipolar fuzzy multi-criteria group decision making method for bipolar disorder clinical diagnosis is developed. Compared with the existing method, the new one is more comprehensive. The merits of the new method are as follows: First, a multi-criteria group decision making method is introduced into bipolar disorder diagnosis to account for different symptoms and multiple doctors' opinions. Secondly, the discreet diagnosis principle is adopted by the revised TOPSIS method. Last but not least, a YinYang bipolar fuzzy cognitive map is provided for understanding the interrelations among symptoms. The illustrated case demonstrates the feasibility, validity, and necessity of the theoretical results obtained. Moreover, the comparison analysis demonstrates that the diagnosis result is more accurate when interrelations among symptoms are considered in the proposed method. In conclusion, the main contribution of this paper is to provide a comprehensive mathematical approach to improving the accuracy of bipolar disorder clinical diagnosis, in which both bipolarity and complexity are considered. Copyright © 2018 Elsevier B.V. All rights reserved.
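The crisp TOPSIS core that the revised method builds on can be sketched as follows. This omits the YinYang bipolar fuzzy extension and cognitive map entirely; the criteria weights and symptom scores are hypothetical:

```python
def topsis(matrix, weights, benefit):
    """Classical TOPSIS ranking.

    matrix: rows = alternatives, columns = criterion scores.
    benefit[j] is True when larger is better for criterion j.
    Returns each alternative's closeness coefficient to the ideal solution.
    """
    m, n = len(matrix), len(matrix[0])
    # Vector-normalise each column, then apply the criterion weights.
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    col = lambda j: [V[i][j] for i in range(m)]
    ideal = [max(col(j)) if benefit[j] else min(col(j)) for j in range(n)]
    worst = [min(col(j)) if benefit[j] else max(col(j)) for j in range(n)]
    dist = lambda row, ref: sum((a - b) ** 2 for a, b in zip(row, ref)) ** 0.5
    return [dist(V[i], worst) / (dist(V[i], ideal) + dist(V[i], worst))
            for i in range(m)]

# Two candidate diagnoses scored on three symptom criteria (toy values):
scores = topsis([[7.0, 8.0, 6.0], [4.0, 5.0, 9.0]],
                weights=[0.5, 0.3, 0.2], benefit=[True, True, True])
print(scores)  # higher closeness means a better match to the ideal profile
```

The paper's revision replaces the crisp scores with YinYang bipolar fuzzy evaluations aggregated over multiple doctors, but the ideal/anti-ideal distance ranking above is the common skeleton.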
Preliminary Design of Low-Thrust Interplanetary Missions
NASA Technical Reports Server (NTRS)
Sims, Jon A.; Flanagan, Steve N.
1997-01-01
For interplanetary missions, highly efficient electric propulsion systems can be used to increase the mass delivered to the destination and/or reduce the trip time over typical chemical propulsion systems. This technology is being demonstrated on the Deep Space 1 mission - part of NASA's New Millennium Program, which validates technologies that can lower the cost and risk and enhance the performance of future missions. With the successful demonstration on Deep Space 1, future missions can consider electric propulsion as a viable propulsion option. Electric propulsion systems, while highly efficient, produce only a small amount of thrust. As a result, the engines operate during a significant fraction of the trajectory. This characteristic makes it much more difficult to find optimal trajectories. The methods for optimizing low-thrust trajectories are typically categorized as either indirect or direct. Indirect methods are based on the calculus of variations, resulting in a two-point boundary value problem that is solved by satisfying terminal constraints and targeting conditions. These methods are subject to extreme sensitivity to the initial guess of the variables - some of which are not physically intuitive. Adding a gravity assist to the trajectory compounds the sensitivity. Direct methods parameterize the problem and use nonlinear programming techniques to optimize an objective function by adjusting a set of variables. A variety of methods of this type have been examined with varying results. These methods are subject to the limitations of the nonlinear programming techniques. In this paper we present a direct method intended to be used primarily for preliminary design of low-thrust interplanetary trajectories, including those with multiple gravity assists. Preliminary design implies a willingness to accept limited accuracy to achieve an efficient algorithm that executes quickly.
NASA Astrophysics Data System (ADS)
Konegger, T.; Schneider, P.; Bauer, V.; Amsüss, A.; Liersch, A.
2013-12-01
The effect of four distinct methods of incorporating fillers into a preceramic polymer matrix was investigated with respect to the structural and mechanical properties of the resulting materials. Investigations were conducted with a polysiloxane/Al2O3/ZrO2 model system used as a precursor for mullite/ZrO2 composites. A quantitative evaluation of the uniformity of filler distribution was obtained by employing a novel image analysis procedure. While solvent-free mixing led to a heterogeneous distribution of constituents resulting in limited mechanical property values, a strong improvement in material homogeneity and properties was obtained by using solvent-assisted methods. The results demonstrate the importance of the processing route for the final characteristics of polymer-derived ceramics.
NASA Astrophysics Data System (ADS)
Alhossen, I.; Villeneuve-Faure, C.; Baudoin, F.; Bugarin, F.; Segonds, S.
2017-01-01
Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is studied using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential are investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied—Sobol’s method and the factorial design of experiment—to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC.
Adaptive wavefront sensor based on the Talbot phenomenon.
Podanchuk, Dmytro V; Goloborodko, Andrey A; Kotov, Myhailo M; Kovalenko, Andrey V; Kurashov, Vitalij N; Dan'ko, Volodymyr P
2016-04-20
A new adaptive method of wavefront sensing is proposed and demonstrated. The method is based on the Talbot self-imaging effect, which is observed in an illuminating light beam with strong second-order aberration. Compensation of defocus and astigmatism is achieved with an appropriate choice of the size of the rectangular unit cell of the diffraction grating, which is performed iteratively. A liquid-crystal spatial light modulator is used for this purpose. Self-imaging of a rectangular grating in an astigmatic light beam is demonstrated experimentally. High-order aberrations are detected with respect to the compensated second-order aberration. Comparative results of wavefront sensing with a Shack-Hartmann sensor and the proposed sensor are presented.
Phan, Quoc-Hung; Lo, Yu-Lung
2017-06-26
A differential Mueller matrix polarimetry technique is proposed for obtaining non-invasive (NI) measurements of the glucose concentration on the human fingertip. The feasibility of the proposed method is demonstrated by detecting the optical rotation angle and depolarization index of tissue phantom samples containing de-ionized water (DI), glucose solutions with concentrations ranging from 0~500 mg/dL and 2% lipofundin. The results show that the extracted optical rotation angle increases linearly with an increasing glucose concentration, while the depolarization index decreases. The practical applicability of the proposed method is demonstrated by measuring the optical rotation angle and depolarization index properties of the human fingertips of healthy volunteers.
Entropic multi-relaxation free-energy lattice Boltzmann model for two-phase flows
NASA Astrophysics Data System (ADS)
Bösch, F.; Dorschner, B.; Karlin, I.
2018-04-01
The entropic multi-relaxation lattice Boltzmann method is extended to two-phase systems following the free-energy approach. Gain in stability is achieved by incorporating the force term due to Korteweg's stress into the redefined entropic stabilizer, which allows simulation of higher Weber and Reynolds numbers with an efficient and explicit algorithm. Results for head-on droplet collisions and droplet impact on super-hydrophobic substrates match experimental data accurately. Furthermore, it is demonstrated that the entropic stabilization leads to smaller spurious currents without affecting the interface thickness. The present findings demonstrate the universality of the simple and explicit entropic lattice Boltzmann models and provide a viable and robust alternative to existing methods.
Development of AlN/Epoxy Composites with Enhanced Thermal Conductivity.
Xu, Yonggang; Yang, Chi; Li, Jun; Mao, Xiaojian; Zhang, Hailong; Hu, Song; Wang, Shiwei
2017-12-18
AlN/epoxy composites with high thermal conductivity were successfully prepared by infiltrating epoxy into AlN porous ceramics fabricated by a gelcasting-foaming method. The microstructure and the mechanical and thermal properties of the resulting composites were investigated. The compressive strengths of the AlN/epoxy composites were enhanced compared with pure epoxy. The AlN/epoxy composites demonstrate much higher thermal conductivity, up to 19.0 W/(m·K), than those produced by the traditional particle-filling method, because of the continuous thermal channels formed by the walls and struts of the AlN porous ceramics. This study demonstrates a potential route to manufacturing epoxy-based composites with extremely high thermal conductivity.
Development of AlN/Epoxy Composites with Enhanced Thermal Conductivity
Xu, Yonggang; Yang, Chi; Li, Jun; Zhang, Hailong; Hu, Song; Wang, Shiwei
2017-01-01
AlN/epoxy composites with high thermal conductivity were successfully prepared by infiltrating epoxy into AlN porous ceramics fabricated by a gelcasting-foaming method. The microstructure and the mechanical and thermal properties of the resulting composites were investigated. The compressive strengths of the AlN/epoxy composites were enhanced compared with pure epoxy. The AlN/epoxy composites demonstrate much higher thermal conductivity, up to 19.0 W/(m·K), than those produced by the traditional particle-filling method, because of the continuous thermal channels formed by the walls and struts of the AlN porous ceramics. This study demonstrates a potential route to manufacturing epoxy-based composites with extremely high thermal conductivity. PMID:29258277
Improving Causal Inferences in Meta-analyses of Longitudinal Studies: Spanking as an Illustration.
Larzelere, Robert E; Gunnoe, Marjorie Lindner; Ferguson, Christopher J
2018-05-24
To evaluate and improve the validity of causal inferences from meta-analyses of longitudinal studies, two adjustments for Time-1 outcome scores and a temporally backwards test are demonstrated. Causal inferences would be supported by robust results across both adjustment methods, distinct from results run backwards. A systematic strategy for evaluating potential confounds is also introduced. The methods are illustrated by assessing the impact of spanking on subsequent externalizing problems (child age: 18 months to 11 years). Significant results indicated a small risk or a small benefit of spanking, depending on the adjustment method. These meta-analytic methods are applicable for research on alternatives to spanking and other developmental science topics. The underlying principles can also improve causal inferences in individual studies. © 2018 Society for Research in Child Development.
Region-based multi-step optic disk and cup segmentation from color fundus image
NASA Astrophysics Data System (ADS)
Xiao, Di; Lock, Jane; Manresa, Javier Moreno; Vignarajan, Janardhan; Tay-Kearney, Mei-Ling; Kanagasingam, Yogesan
2013-02-01
Retinal optic cup-to-disk ratio (CDR) is one of the important indicators of glaucomatous neuropathy. In this paper, we propose a novel multi-step 4-quadrant thresholding method for optic disk segmentation and a multi-step temporal-nasal segmenting method for optic cup segmentation based on blood-vessel-inpainted HSL lightness images and green images. The performance of the proposed methods was evaluated on a group of color fundus images and compared with the manual outlining results from two experts. Dice scores of detected disk and cup regions between the automatic and manual results were computed and compared. Vertical CDRs were also compared among the three results. The preliminary experiment has demonstrated the robustness of the method for automatic optic disk and cup segmentation and its potential value for clinical application.
Explicitly represented polygon wall boundary model for the explicit MPS method
NASA Astrophysics Data System (ADS)
Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori
2015-05-01
This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using the distance function. These are derived so that for viscous fluids, and with less computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the E-MPS method to the ERP model.
Marzulli, F; Maguire, H C
1982-02-01
Several guinea-pig predictive test methods were evaluated by comparison of results with those obtained with human predictive tests, using ten compounds that have been used in cosmetics. The method involves the statistical analysis of the frequency with which guinea-pig tests agree with the findings of tests in humans. In addition, the frequencies of false positive and false negative predictive findings are considered and statistically analysed. The results clearly demonstrate the superiority of adjuvant tests (complete Freund's adjuvant) in determining skin sensitizers and the overall superiority of the guinea-pig maximization test in providing results similar to those obtained by human testing. A procedure is suggested for utilizing adjuvant and non-adjuvant test methods for characterizing compounds as of weak, moderate or strong sensitizing potential.
Handwritten digits recognition using HMM and PSO based on strokes
NASA Astrophysics Data System (ADS)
Yan, Liao; Jia, Zhenhong; Yang, Jie; Pang, Shaoning
2010-07-01
A new method for handwritten digits recognition based on the hidden Markov model (HMM) and particle swarm optimization (PSO) is proposed. The method defines 24 directional strokes, which compensates for the sensitivity of traditional methods to the choice of starting point and also reduces the ambiguity caused by shaky strokes. It exploits the excellent global convergence of PSO, improving the probability of finding the global optimum and clearly avoiding local minima. Experimental results demonstrate that, compared with traditional methods, the proposed method improves the recognition rate of handwritten digits.
A new ChainMail approach for real-time soft tissue simulation.
Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2016-07-03
This paper presents a new ChainMail method for real-time soft tissue simulation. This method enables the use of different material properties for chain elements to accommodate various materials. Based on the ChainMail bounding region, a new time-saving scheme is developed to improve computational efficiency for isotropic materials. The proposed method also conserves volume and strain energy. Experimental results demonstrate that the proposed ChainMail method can not only accommodate isotropic, anisotropic and heterogeneous materials but also model incompressibility and relaxation behaviors of soft tissues. Further, the proposed method can achieve real-time computational performance.
Path Planning for Robot based on Chaotic Artificial Potential Field Method
NASA Astrophysics Data System (ADS)
Zhang, Cheng
2018-03-01
Robot path planning in unknown environments is one of the hot research topics in the field of robot control. Aiming at the shortcomings of traditional artificial potential field methods, we propose a new path planning method for robots based on the chaotic artificial potential field method. The planner adopts the potential function as the objective function and introduces the robot's direction of movement as the control variable, combining the improved artificial potential field method with a chaotic optimization algorithm. Simulations have been carried out and the results demonstrate the superior practicality and high efficiency of the proposed method.
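The underlying artificial potential field step (before any chaotic optimization is layered on top) follows the classical attractive/repulsive formulation. A minimal sketch with illustrative gains and obstacle positions, not the paper's planner:

```python
import math

def apf_step(pos, goal, obstacles, step=0.1, k_att=1.0, k_rep=0.5, d0=1.0):
    """One gradient-descent step on the classical potential
    U = 0.5*k_att*|p - g|^2 + sum_i 0.5*k_rep*(1/d_i - 1/d0)^2 for d_i < d0."""
    fx = k_att * (goal[0] - pos[0])          # attractive force toward the goal
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < d0:                     # inside the obstacle's influence
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 3
            fx += mag * dx
            fy += mag * dy
    norm = math.hypot(fx, fy) or 1.0         # unit step along the net force
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

pos, goal = (0.0, 0.0), (5.0, 0.0)
for _ in range(200):
    if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < 0.15:
        break
    pos = apf_step(pos, goal, obstacles=[(2.5, 0.4)])
print(pos)  # ends near the goal, having skirted the off-axis obstacle
```

Plain gradient descent like this can stall in local minima of U; the chaotic optimization in the paper is precisely the ingredient meant to escape such traps.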
Direct imaging of small scatterers using reduced time dependent data
NASA Astrophysics Data System (ADS)
Cakoni, Fioralba; Rezac, Jacob D.
2017-06-01
We introduce qualitative methods for locating small objects using time dependent acoustic near field waves. These methods have reduced data collection requirements compared to typical qualitative imaging techniques. In particular, we only collect scattered field data in a small region surrounding the location from which an incident field was transmitted. The new methods are partially theoretically justified and numerical simulations demonstrate their efficacy. We show that these reduced data techniques give comparable results to methods which require full multistatic data and that these time dependent methods require less scattered field data than their time harmonic analogs.
Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili
2013-01-01
To develop an accurate and convenient method for monitoring the production of citrus-derived bioactive 5-demethylnobiletin from the demethylation reaction of nobiletin, we compared surface-enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated with the HPLC method very well. The solution method produced a lower root mean square error of calibration and a higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demethylnobiletin based on their different binding affinities to the silver dendrites. The substrate method was found simpler and faster for collecting the SERS ‘fingerprint’ spectra of the samples, as no incubation between samples and silver was needed and only trace amounts of samples were required. Our results demonstrated that the SERS methods were superior to the HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986
NASA Astrophysics Data System (ADS)
Mohebbi, Akbar
2018-02-01
In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For a homogeneous FGLE, we propose a method which has fourth-order accuracy in the time component and spectral accuracy in the space variable, and for the nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach which has second-order accuracy in the time variable. Due to the use of the Fourier spectral method for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with the analytical solutions. The results show that the present methods are accurate and require low CPU time. It is illustrated that the numerical results are in good agreement with the theoretical ones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timonen, Hilkka; Cubison, Mike; Aurela, Minna
The applicability, methods and limitations of constrained peak fitting on mass spectra of low mass resolving power (m/Δm ≈ 50-500) recorded with a time-of-flight aerosol chemical speciation monitor (ToF-ACSM) are explored. Calibration measurements as well as ambient data are used to exemplify the methods that should be applied to maximise data quality and assess confidence in peak-fitting results. Sensitivity analyses and basic peak fit metrics such as normalised ion separation are employed to demonstrate which peak-fitting analyses commonly performed in high-resolution aerosol mass spectrometry are appropriate to perform on spectra of this resolving power. Information on aerosol sulfate, nitrate, sodium chloride, methanesulfonic acid as well as semi-volatile metal species retrieved from these methods is evaluated. The constants in a commonly used formula for the estimation of the mass concentration of hydrocarbon-like organic aerosol may be refined based on peak-fitting results. Lastly, application of a recently published parameterisation for the estimation of carbon oxidation state to ToF-ACSM spectra is validated for a range of organic standards and its use demonstrated for ambient urban data.
Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.
Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin
2015-07-28
This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
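Step (2), counting plants from projection histograms, can be illustrated in one dimension: sum a binarised image along its columns and count runs of occupied columns separated by sufficiently wide gaps. A simplified sketch, not the paper's algorithm; the binary image and gap threshold are toy values:

```python
def count_plants(binary_image, min_gap=2):
    """Count plants in a binarised row image via a column-projection histogram.

    binary_image: list of rows of 0/1 pixels (1 = plant).  Columns are
    summed into a histogram; each run of non-empty columns preceded by at
    least min_gap empty columns is counted as one plant.
    """
    cols = len(binary_image[0])
    hist = [sum(row[j] for row in binary_image) for j in range(cols)]
    count, gap = 0, min_gap   # start "in a gap" so the first run is counted
    for h in hist:
        if h > 0:
            if gap >= min_gap:
                count += 1
            gap = 0
        else:
            gap += 1
    return count

# Three plants along the row, separated by at least two empty columns.
img = [[0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1],
       [0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0]]
print(count_plants(img))  # 3
```

The paper's full pipeline applies this counting after the orthographic projection step, and the homography-based step then prevents the same plant being counted again across overlapping images.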
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Arnold, Steven M.; Collier, Craig S.
2009-01-01
This preliminary report demonstrates the capabilities of the recently developed software implementation that links the Generalized Method of Cells to explicit finite element analysis by extending a previous development which tied the generalized method of cells to implicit finite elements. The multiscale framework, which uses explicit finite elements at the global scale and the generalized method of cells at the microscale, is detailed. This implementation is suitable for both dynamic mechanics problems and static problems exhibiting drastic and sudden changes in material properties, which often encounter convergence issues with commercial implicit solvers. Progressive failure analysis of stiffened and un-stiffened fiber-reinforced laminates subjected to normal blast pressure loads was performed and is used to demonstrate the capabilities of this framework. The focus of this report is to document the development of the software implementation; thus, no comparison between the results of the models and experimental data is drawn. However, the validity of the results is assessed qualitatively through the observation of failure paths, stress contours, and the distribution of system energies.
High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis
Daye, Z. John; Chen, Jinbo; Li, Hongzhe
2011-01-01
Summary: We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTL) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
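The joint mean/variance modeling idea can be sketched with alternating penalized fits: a variance-weighted fit of the mean model, then a fit of a log-variance model to the squared residuals. This toy version substitutes ridge penalties for the paper's sparsity penalties purely for brevity; `hetero_ridge` and its parameters are hypothetical illustrations, not the authors' algorithm.

```python
import numpy as np

def hetero_ridge(X, y, Z, lam_beta=1.0, lam_gamma=1.0, n_iter=5):
    """Alternate between (a) a variance-weighted ridge fit of the mean
    model y ~ X @ beta and (b) a ridge fit of the log-variance model
    log(resid^2) ~ Z @ gamma, whose fit supplies the next round of
    inverse-variance weights."""
    n, p = X.shape
    q = Z.shape[1]
    w = np.ones(n)                       # inverse-variance weights
    for _ in range(n_iter):
        # (a) weighted ridge for the mean component
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X + lam_beta * np.eye(p), Xw.T @ y)
        # (b) ridge for the log-variance component on squared residuals
        r2 = (y - X @ beta) ** 2 + 1e-8
        gamma = np.linalg.solve(Z.T @ Z + lam_gamma * np.eye(q),
                                Z.T @ np.log(r2))
        w = np.exp(-Z @ gamma)           # updated weights
    return beta, gamma
```

With homoscedastic data the weights settle to a constant and the mean fit reduces to ordinary ridge regression; with variance-explaining predictors in `Z`, down-weighting high-variance observations is what improves estimation.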
Direct method for imaging elemental distribution profiles with long-period x-ray standing waves
NASA Astrophysics Data System (ADS)
Kohli, Vaibhav; Bedzyk, Michael J.; Fenter, Paul
2010-02-01
A model-independent Fourier-inversion method for imaging elemental profiles from multilayer and total-external reflection x-ray standing wave (XSW) data is developed for the purpose of understanding the assembly of atoms, ions, and molecules at well-defined interfaces in complex environments. The direct-method formalism is derived for the case of a long-period XSW generated by low-angle specular reflection in an attenuating overlayer medium. It is validated through comparison with simulated and experimental data to directly obtain an elemental distribution contained within the overlayer. We demonstrate this formalism by extracting the one-dimensional profile of Ti normal to the surface for a TiO2/Si/Mo trilayer deposited on a Si substrate using the Ti Kα fluorescence yield measured in air and under an aqueous electrolyte. The model-independent results demonstrate reduced coherent fractions for the in situ results associated with an incoherency of the x-ray beam (which are attributed to fluorescence excitation by diffusely or incoherently scattered x-rays). The uniqueness and limitations of the approach are discussed.
Study to develop improved methods to detect leakage in fluid systems, phase 2
NASA Technical Reports Server (NTRS)
Janus, J. C.; Cimerman, I.
1971-01-01
An ultrasonic contact sensor engineering prototype leak detection system was developed and its capabilities under cryogenic operations demonstrated. The results from tests indicate that the transducer performed well on liquid hydrogen plumbing, that flow and valve actuation could be monitored, and that the phase change from gaseous to liquid hydrogen could be detected by the externally mounted transducers. Tests also demonstrate the ability of the system to detect internal leaks past valve seats and to function as a flow meter. Such a system demonstrates that it is not necessary to break into welded systems to locate internal leaks.
A method of determining the refractive index of a prismatic lens.
Buckley, John G
2010-01-01
A new method of measuring lens refractive index, requiring immersion in solution and measurement of lens power in air and in solution, is extended. Prompted by a clinical need, the method based on lens power can be extended by applying it to prismatic power as well. This article provides a theoretical basis explaining why this can be done. The prismatic power of a prism is derived from first principles. Snell's Law and geometrical optics provide the framework for demonstrating the validity of the resulting formula. The equivalence of the formulas derived using lens power or prismatic power is shown, from both a paraxial and a non-paraxial optics perspective. The effect of varying lens material and amount of prism is considered. The prismatic method described provides a useful alternative method of determining the refractive index of any lens. In some cases, it may be the only method available. Practitioners should consider when each method will provide optimal results.
NASA Astrophysics Data System (ADS)
Zhao, Shijia; Liu, Zongwei; Wang, Yue; Zhao, Fuquan
2017-01-01
Subjectivity usually causes large fluctuations in evaluation results. Many scholars attempt to establish new mathematical methods to make evaluation results consistent with actual objective situations. An improved catastrophe progression method (ICPM) is constructed to overcome the defects of the original method. The improved method combines the merits of principal component analysis' information coherence and the catastrophe progression method's freedom from subjective index weighting, and has the advantage of highly objective comprehensive evaluation. Through systematic analysis of the influencing factors of the automotive industry's core technology capacity, a comprehensive evaluation model is established according to the different roles that different indices play in evaluating the overall goal within a hierarchical structure. Moreover, the ICPM is applied to evaluate the automotive industry's core technology capacity of seven representative countries, which demonstrates the effectiveness of the method.
Zhang, Xianchang; Cheng, Hewei; Zuo, Zhentao; Zhou, Ke; Cong, Fei; Wang, Bo; Zhuo, Yan; Chen, Lin; Xue, Rong; Fan, Yong
2018-01-01
The amygdala plays an important role in emotional functions and its dysfunction is considered to be associated with multiple psychiatric disorders in humans. Cytoarchitectonic mapping has demonstrated that the human amygdala complex comprises several subregions. However, it is difficult to delineate the boundaries of these subregions in vivo, even with state-of-the-art high-resolution structural MRI. Previous attempts to parcellate this small structure using unsupervised clustering methods based on resting-state fMRI data suffered from the low spatial resolution of typical fMRI data, and it remains challenging for unsupervised methods to define subregions of the amygdala in vivo. In this study, we developed a novel brain parcellation method to segment the human amygdala into spatially contiguous subregions based on 7T high-resolution fMRI data. The parcellation was implemented using a semi-supervised spectral clustering (SSC) algorithm at the individual subject level. Under guidance of prior information derived from the Julich cytoarchitectonic atlas, our method clustered voxels of the amygdala into subregions according to similarity measures of their functional signals. As a result, three distinct amygdala subregions were obtained in each hemisphere for every individual subject. Compared with the cytoarchitectonic atlas, our method achieved better performance in terms of subregional functional homogeneity. Validation experiments also demonstrated that the amygdala subregions obtained by our method have distinctive, lateralized functional connectivity (FC) patterns. Our study demonstrates that the semi-supervised brain parcellation method is a powerful tool for exploring amygdala subregional functions.
Walén test and de Hoffmann-Teller frame of interplanetary large-amplitude Alfvén waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, J. K.; Hsieh, Wen-Chieh; Lee, L. C.
2014-05-10
In this study, three methods of analysis are compared to test the Walén relation. Method 1 requires a good de Hoffmann-Teller (HT) frame. Method 2 uses three components separately to find the frame that is slightly modified from Method 1. This method is intended to improve the accuracy of the HT frame and is able to demonstrate the anisotropic property of the fluctuations. The better the relation is, the closer the slope of a regression fitting the data of plasma versus Alfvén velocities is to 1. However, this criterion is based on an average HT frame, and the fitted slope does not always work for the Walén test because the HT frame can change so fast in the high-speed streams. We propose Method 3 to check the Walén relation using a sequence of data generated by taking the difference of two consecutive values of plasma and Alfvén velocities, respectively. The difference data are independent of the HT frame. We suggest that the ratio of the variances between plasma and Alfvén velocities is a better parameter to qualify the Walén relation. Four cases in two solar wind streams are studied using these three methods. Our results show that when the solar wind HT frame remains stable, all three methods can predict Alfvénic fluctuations well, but Method 3 can better predict the Walén relation when the solar wind contains structures with several small streams. A simulated case also demonstrates that Method 3 is better and more robust than Methods 1 and 2. These results are important for a better understanding of Alfvénic fluctuations and turbulence in the solar wind.
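The core computation of Method 3, differencing consecutive samples so that any constant HT-frame velocity cancels and then comparing variances, can be sketched in a few lines. `walen_variance_ratio` is a hypothetical helper illustrating the idea, not the authors' processing pipeline.

```python
import numpy as np

def walen_variance_ratio(v_plasma, v_alfven):
    """Difference consecutive samples of the plasma and Alfvén velocity
    time series (this removes any constant de Hoffmann-Teller frame
    offset), then compare variances. A ratio near 1 indicates
    well-developed Alfvénic fluctuations."""
    dv = np.diff(np.asarray(v_plasma), axis=0)
    db = np.diff(np.asarray(v_alfven), axis=0)
    return dv.var() / db.var()
```

Because the differences are frame-independent, the ratio is unaffected by a bulk solar-wind speed added to the plasma velocity, which is the property the abstract highlights.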
A new stationary gridline artifact suppression method based on the 2D discrete wavelet transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Hui, E-mail: corinna@seu.edu.cn; Key Laboratory of Computer Network and Information Integration; Centre de Recherche en Information Biomédicale sino-français, Laboratoire International Associé, Inserm, Université de Rennes 1, Rennes 35000
2015-04-15
Purpose: In digital x-ray radiography, an antiscatter grid is inserted between the patient and the image receptor to reduce scattered radiation. If the antiscatter grid is used in a stationary way, gridline artifacts will appear in the final image. In most of the gridline removal image processing methods, the useful information with spatial frequencies close to that of the gridline is usually lost or degraded. In this study, a new stationary gridline suppression method is designed to preserve more of the useful information. Methods: The method is as follows. The input image is first recursively decomposed into several smaller subimages using a multiscale 2D discrete wavelet transform. The decomposition process stops when the gridline signal is found to be greater than a threshold in one or several of these subimages using a gridline detection module. An automatic Gaussian band-stop filter is then applied to the detected subimages to remove the gridline signal. Finally, the restored image is achieved using the corresponding 2D inverse discrete wavelet transform. Results: The processed images show that the proposed method can remove the gridline signal efficiently while maintaining the image details. The spectra of a 1D Fourier transform of the processed images demonstrate that, compared with some existing gridline removal methods, the proposed method has better information preservation after the removal of the gridline artifacts. Additionally, the performance speed is relatively high. Conclusions: The experimental results demonstrate the efficiency of the proposed method. Compared with some existing gridline removal methods, the proposed method can preserve more information within an acceptable execution time.
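The decompose-then-band-stop pipeline can be illustrated in miniature. The sketch below assumes a single-level 1D Haar split along rows and a Gaussian band-stop applied per row in Fourier space; the paper's recursive 2D wavelet transform, gridline detection module, and inverse transform are omitted, and both function names are hypothetical.

```python
import numpy as np

def haar_rows(img):
    """One-level 1D Haar transform along rows: (approximation, detail)."""
    a = (img[:, ::2] + img[:, 1::2]) / np.sqrt(2)
    d = (img[:, ::2] - img[:, 1::2]) / np.sqrt(2)
    return a, d

def gaussian_bandstop(sub, f0, sigma=0.02):
    """Suppress a narrow band around frequency f0 (cycles/pixel) in each
    row of `sub` with a Gaussian band-stop applied in Fourier space."""
    F = np.fft.rfft(sub, axis=1)
    freqs = np.fft.rfftfreq(sub.shape[1])
    H = 1.0 - np.exp(-((freqs - f0) ** 2) / (2 * sigma ** 2))
    return np.fft.irfft(F * H, n=sub.shape[1], axis=1)
```

The point of filtering only the detected subimage, rather than the full image, is that image content at other scales never passes through the band-stop, which is the information-preservation advantage the abstract claims.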
Wu, Xiaoping; Adriany, Gregor; Ugurbil, Kamil; Van de Moortele, Pierre-Francois
2013-01-01
Successful implementation of homogeneous slice-selective RF excitation in the human brain at 9.4T using 16-channel parallel transmission (pTX) is demonstrated. A novel three-step pulse design method incorporating fast real-time measurement of eddy current induced B0 variations as well as correction of resulting phase errors during excitation is described. To demonstrate the utility of the proposed method, phantom and in-vivo experiments targeting a uniform excitation in an axial slice were conducted using two-spoke pTX pulses. Even with the pre-emphasis activated, eddy current induced B0 variations with peak-to-peak values greater than 4 kHz were observed on our system during the rapid switches of slice selective gradients. This large B0 variation, when not corrected, resulted in drastically degraded excitation fidelity with the coefficient of variation (CV) of the flip angle calculated for the region of interest being large (~ 12% in the phantom and ~ 35% in the brain). By comparison, excitation fidelity was effectively restored, and satisfactory flip angle uniformity was achieved when using the proposed method, with the CV value reduced to ~ 3% in the phantom and ~ 8% in the brain. Additionally, experimental results were in good agreement with the numerical predictions obtained from Bloch simulations. Slice-selective flip angle homogenization in the human brain at 9.4T using 16-channel 3D spoke pTX pulses is achievable despite large eddy current induced excitation phase errors; correcting for the latter was critical in this success.
[Development and application of morphological analysis method in Aspergillus niger fermentation].
Tang, Wenjun; Xia, Jianye; Chu, Ju; Zhuang, Yingping; Zhang, Siliang
2015-02-01
Filamentous fungi are widely used in industrial fermentation. Particular fungal morphology acts as a critical index for a successful fermentation. To break the bottleneck of morphological analysis, we have developed a reliable method for fungal morphological analysis. By this method, we can prepare hundreds of pellet samples simultaneously and obtain quantitative morphological information at large scale quickly. This method can largely increase the accuracy and reliability of morphological analysis results. Based on that, studies of Aspergillus niger morphology under different oxygen supply and shear rate conditions were carried out. As a result, the morphological response patterns of A. niger under these conditions were quantitatively demonstrated, which laid a solid foundation for further scale-up.
Why are Formal Methods Not Used More Widely?
NASA Technical Reports Server (NTRS)
Knight, John C.; DeJong, Colleen L.; Gibble, Matthew S.; Nakano, Luis G.
1997-01-01
Despite extensive development over many years and significant demonstrated benefits, formal methods remain poorly accepted by industrial practitioners. Many reasons have been suggested for this situation, such as claims that they extend the development cycle, that they require difficult mathematics, that inadequate tools exist, and that they are incompatible with other software packages. There is little empirical evidence that any of these reasons is valid. The research presented here addresses the question of why formal methods are not used more widely. The approach used was to develop a formal specification for a safety-critical application using several specification notations and assess the results in a comprehensive evaluation framework. The results of the experiment suggest that there remain many impediments to the routine use of formal methods.
Evaluation of laser ablation crater relief by white light micro interferometer
NASA Astrophysics Data System (ADS)
Gurov, Igor; Volkov, Mikhail; Zhukova, Ekaterina; Ivanov, Nikita; Margaryants, Nikita; Potemkin, Andrey; Samokhvalov, Andrey; Shelygina, Svetlana
2017-06-01
A multi-view scanning method is suggested to assess a complicated surface relief with a white light interferometer. Peculiarities of the method are demonstrated on a special object in the form of a quadrangular pyramid cavity, which is formed during micro-hardness measurement of materials using a hardness gauge. An algorithm for the joint processing of multi-view scanning results is developed that allows recovering correct relief values. Laser ablation craters were studied experimentally, and their relief was recovered using the developed method. It is shown that the multi-view scanning reduces ambiguity when determining the local depth of the laser ablation craters' micro relief. Results of experimental studies of the multi-view scanning method and data processing algorithm are presented.
NASA Astrophysics Data System (ADS)
Kim, A. A.; Klochkov, D. V.; Konyaev, M. A.; Mihaylenko, A. S.
2017-11-01
The article considers the problem of control and verification of laser ceilometers' basic performance parameters and describes an alternative method based on the use of a multi-length fiber optic delay line simulating an atmospheric track. The results of the described experiment demonstrate the great potential of this method for inspection and verification procedures of laser ceilometers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the Administrator formaldehyde concentration must be corrected to 15 percent O2, dry basis. Results of... 100 percent load. b. select the sampling port location and the number of traverse points AND Method 1... concentration at the sampling port location AND Method 3A or 3B of 40 CFR part 60, appendix A measurements to...
Prediction of explosive yield and other characteristics of liquid rocket propellant explosions
NASA Technical Reports Server (NTRS)
Farber, E. A.; Smith, J. H.; Watts, E. H.
1973-01-01
Work which has been done at the University of Florida in arriving at credible explosive yield values for liquid rocket propellants is presented. The results are based upon logical methods which have been well worked out theoretically and verified through experimental procedures. Three independent methods to predict explosive yield values for liquid rocket propellants are described. All three give the same end result even though they utilize different parameters and procedures. They are: (1) mathematical model; (2) seven chart approach; and (3) critical mass method. A brief description of the methods, how they were derived, how they were applied, and the results which they produced are given. The experimental work used to support and verify the above methods, both in the laboratory and in the field with actual explosive mixtures, is presented. The methods developed are used and their value demonstrated in analyzing real problems, among them the destruct system of the Saturn V and the early configurations of the space shuttle.
Evaluation results for intelligent transport systems (ITS) : abstract
DOT National Transportation Integrated Search
2000-11-09
This paper summarizes the methods of evaluation set out for EC-funded ITS research and demonstration projects, known as the CONVERGE validation quality process and the lessons learned from that approach. The new approach to appraisal, which is being ...
Wong, M S; Cheng, J C Y; Wong, M W; So, S F
2005-04-01
A study was conducted to compare the CAD/CAM method with the conventional manual method in fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two cast rectification methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%). This indicated that the CAD/CAM method took about 1/3 of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the conventional manual method and the CAD/CAM method, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal and pelvic regions, were involved. There were no significant dimensional differences (p < 0.05) in 19 out of the 25 studied dimensions. This study demonstrated that the CAD/CAM system could save time in the rectification process and offer a relatively high resemblance in cast rectification as compared with the conventional manual method.
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-01-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, speckle area is pre-delineated pixelwise based on a phase-domain processing method and then adjusted by the results of wavelet shrinkage of the original image. Coefficient shrinkage such as wavelet or contourlet shrinkage is applied afterwards for further suppressing the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement of image quality. PMID:28663860
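Generic wavelet coefficient shrinkage, the final suppression step mentioned above, can be sketched in one dimension. This is a textbook soft-thresholding illustration, not the authors' phase-based speckle delineation; `haar_shrink` is a hypothetical helper.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator used in wavelet shrinkage denoising."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_shrink(signal, threshold):
    """One-level Haar wavelet shrinkage of a 1D signal: keep the
    approximation coefficients, soft-threshold the detail coefficients,
    then reconstruct."""
    a = (signal[::2] + signal[1::2]) / np.sqrt(2)
    d = soft_threshold((signal[::2] - signal[1::2]) / np.sqrt(2), threshold)
    out = np.empty_like(signal, dtype=float)
    out[::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

Smooth structure passes through unchanged (its detail coefficients are near zero), while small noise-like details are suppressed; the paper's contribution is deciding, per pixel, where such shrinkage should be trusted.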
Curved-line search algorithm for ab initio atomic structure relaxation
NASA Astrophysics Data System (ADS)
Chen, Zhanghui; Li, Jingbo; Li, Shushen; Wang, Lin-Wang
2017-09-01
Ab initio atomic relaxations often take large numbers of steps and long times to converge, especially when the initial atomic configurations are far from the local minimum or there are curved and narrow valleys in the multidimensional potentials. An atomic relaxation method based on on-the-fly force learning and a corresponding curved-line search algorithm is presented to accelerate this process. Results demonstrate the superior performance of this method for metal and magnetic clusters when compared with the conventional conjugate-gradient method.
[Comparison of several methods for detecting anti-erythrocyte alloantibodies].
Bencomo, A A
1990-08-01
The efficacy of different methods for anti-red cell antibody detection was assessed, with variations found in accordance with the specificity of the alloantibodies. The usefulness of enzyme tests in anti-Rh antibody detection was demonstrated, as well as that of low ionic strength saline solutions in detecting anti-Kell, anti-Duffy and anti-Kidd antibodies. Serum precipitation with 15% polyethyleneglycol 8000 prior to the indirect antiglobulin test was found to be the most sensitive method, providing the best results for all the antibodies studied.
NASA Technical Reports Server (NTRS)
Hamilton, H. H., II; Spall, J. R.
1986-01-01
A time-asymptotic method has been used to obtain steady-flow solutions for axisymmetric inviscid flow over several blunt bodies including spheres, paraboloids, ellipsoids, and spherically blunted cones. Comparisons with experimental data and results of other computational methods have demonstrated that accurate solutions can be obtained using this approach. The method should prove useful as an analysis tool for comparing with experimental data and for making engineering calculations for blunt reentry vehicles.
Estimating the number of people in crowded scenes
NASA Astrophysics Data System (ADS)
Kim, Minjin; Kim, Wonjun; Kim, Changick
2011-01-01
This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
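Step (3), the multiple-regression density estimate, amounts to a least-squares fit from per-frame crowd features to ground-truth counts. A minimal sketch with an illustrative two-feature design (the paper's exact feature set may differ; both helper names are hypothetical):

```python
import numpy as np

def fit_crowd_regressor(features, counts):
    """Least-squares multiple regression mapping per-frame crowd features
    (e.g. crowd-region area, interest-point count) to people counts."""
    A = np.column_stack([np.ones(len(features)), features])  # add intercept
    coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
    return coef

def predict_count(coef, feature_row):
    """Apply the fitted regression to one frame's feature vector."""
    return float(coef[0] + np.dot(coef[1:], feature_row))
```

The regression is trained on frames with annotated counts (e.g. from the PETS 2009 dataset) and then applied to unseen frames, which is what lets the method skip per-person segmentation and tracking.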
Solution of the Bagley Torvik equation by fractional DTM
NASA Astrophysics Data System (ADS)
Arora, Geeta; Pratiksha
2017-07-01
In this paper, the fractional differential transform method (DTM) is implemented on the Bagley-Torvik equation. This equation models the viscoelastic behavior of geological strata, metals, glasses etc. It explains the motion of a rigid plate immersed in a Newtonian fluid. DTM is a simple, reliable and efficient method that gives a series solution. The Caputo fractional derivative is considered throughout this work. Two examples are given to demonstrate the validity and applicability of the method and comparison is made with the existing results.
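For reference, the Bagley-Torvik equation takes the standard form, with $D^{3/2}$ understood here (as in the abstract) in the Caputo sense:

```latex
A\,y''(t) + B\,D^{3/2} y(t) + C\,y(t) = f(t),
\qquad y(0) = y_0, \quad y'(0) = y_1,
```

where $A$, $B$, $C$ are constants determined by the plate and fluid properties; the half-order term is what makes the viscoelastic damping nonlocal in time.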
Manufacturing methods of a composite cell case for a Ni-Cd battery
NASA Technical Reports Server (NTRS)
Bauer, J. L.
1979-01-01
Basic manufacturing method refinements for using graphite epoxy material for a nickel cadmium battery cell case were performed to demonstrate production feasibility. The various facets of production scale-up, i.e., process and tooling development, together with material and process control, were integrated into a comprehensive manufacturing process that assures production reproducibility and product uniformity. Test results substantiate that a battery cell case produced from graphite epoxy pre-impregnated material, utilizing the internal pressure bag fabrication method, is feasible.
A p-version finite element method for steady incompressible fluid flow and convective heat transfer
NASA Technical Reports Server (NTRS)
Winterscheidt, Daniel L.
1993-01-01
A new p-version finite element formulation for steady, incompressible fluid flow and convective heat transfer problems is presented. The steady-state residual equations are obtained by considering a limiting case of the least-squares formulation for the transient problem. The method circumvents the Babuska-Brezzi condition, permitting the use of equal-order interpolation for velocity and pressure, without requiring the use of arbitrary parameters. Numerical results are presented to demonstrate the accuracy and generality of the method.
Face recognition using slow feature analysis and contourlet transform
NASA Astrophysics Data System (ADS)
Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan
2018-04-01
In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. This method first uses the contourlet transform to decompose the face image into low-frequency and high-frequency parts, and then exploits slow feature analysis for facial feature extraction. We name the new method combining slow feature analysis and the contourlet transform CT-SFA. Experimental results on standard international face databases demonstrate that the new face recognition method is effective and competitive.
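Linear SFA itself can be sketched as a two-stage eigenproblem: whiten the centered signal, then pick the directions in which the temporal derivative has the smallest variance. A minimal numpy sketch (the contourlet decomposition stage is omitted, and the function name is hypothetical):

```python
import numpy as np

def slow_feature_analysis(X, n_features=2):
    """Linear SFA on a (time, dims) signal X: find projections of the
    centered, whitened signal whose temporal derivatives have minimal
    variance, via an eigendecomposition of the derivative covariance."""
    Xc = X - X.mean(axis=0)
    # Whiten the signal so its covariance becomes the identity.
    cov = Xc.T @ Xc / len(Xc)
    evals, evecs = np.linalg.eigh(cov)
    W_white = evecs / np.sqrt(evals)          # columns scaled by 1/sqrt(eval)
    Z = Xc @ W_white
    # Eigenvectors of the derivative covariance, slowest first.
    dZ = np.diff(Z, axis=0)
    dcov = dZ.T @ dZ / len(dZ)
    _, devecs = np.linalg.eigh(dcov)          # eigh sorts ascending
    return W_white @ devecs[:, :n_features]
```

Applied to a mixture of a slow and a fast sinusoid, the first returned direction recovers the slow source (up to sign and scale), which is the property SFA contributes to feature extraction here.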
Keriwala, Raj D.; Clune, Jennifer K.; Rice, Todd W.; Pugh, Meredith E.; Wheeler, Arthur P.; Miller, Alison N.; Banerjee, Arna; Terhune, Kyla; Bastarache, Julie A.
2015-01-01
Rationale: Effective teamwork is fundamental to the management of medical emergencies, and yet the best method to teach teamwork skills to trainees remains unknown. Objectives: In a cohort of incoming internal medicine interns, we tested the hypothesis that expert demonstration of teamwork principles and participation in high-fidelity simulation would each result in objectively assessed teamwork behavior superior to traditional didactics. Methods: This was a randomized, controlled, parallel-group trial comparing three teamwork teaching modalities for incoming internal medicine interns. Participants in a single-day orientation at the Vanderbilt University Center for Experiential Learning and Assessment were randomized 1:1:1 to didactic, demonstration-based, or simulation-based instruction and then evaluated in their management of a simulated crisis by five independent, blinded observers using the Teamwork Behavioral Rater score. Clinical performance was assessed using the American Heart Association Advanced Cardiac Life Support algorithm and a novel “Recognize, Respond, Reassess” score. Measurements and Main Results: Participants randomized to didactics (n = 18), demonstration (n = 17), and simulation (n = 17) were similar at baseline. The primary outcome of average overall Teamwork Behavioral Rater score for those who received demonstration-based training was similar to simulation participation (4.40 ± 1.15 vs. 4.10 ± 0.95, P = 0.917) and significantly higher than didactic instruction (4.40 ± 1.15 vs. 3.10 ± 0.51, P = 0.045). Clinical performance scores were similar between the three groups and correlated only weakly with teamwork behavior (coefficient of determination [Rs2] = 0.267, P < 0.001). Conclusions: Among incoming internal medicine interns, teamwork training by expert demonstration resulted in similar teamwork behavior to participation in high-fidelity simulation and was more effective than traditional didactics. 
Clinical performance was largely independent of teamwork behavior and did not differ between training modalities. PMID:25730661
Lin, Tao; Sun, Huijun; Chen, Zhong; You, Rongyi; Zhong, Jianhui
2007-12-01
Diffusion weighting in MRI is commonly achieved with the pulsed-gradient spin-echo (PGSE) method. When combined with spin-warping image formation, this method often results in ghosts due to the sample's macroscopic motion. It has been shown experimentally (Kennedy and Zhong, MRM 2004;52:1-6) that these motion artifacts can be effectively eliminated by the distant dipolar field (DDF) method, which relies on the refocusing of spatially modulated transverse magnetization by the DDF within the sample itself. In this report, diffusion-weighted images (DWIs) using both DDF and PGSE methods in the presence of macroscopic sample motion were simulated. Numerical simulation results quantify the dependence of signals in DWI on several key motion parameters and demonstrate that the DDF DWIs are much less sensitive to macroscopic sample motion than the traditional PGSE DWIs. The results also show that the dipolar correlation distance (d(c)) can alter contrast in DDF DWIs. The simulated results are in good agreement with the experimental results reported previously.
Can Self-Organizing Maps Accurately Predict Photometric Redshifts?
NASA Technical Reports Server (NTRS)
Way, Michael J.; Klose, Christian
2012-01-01
We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization technique known as the self-organizing map (SOM). A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = z_phot - z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
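The accuracy metric quoted above is the root-mean-square error of Δz = z_phot - z_spec. A minimal sketch of that metric, using made-up redshift values rather than the SDSS samples from the paper:

```python
import math

def redshift_rmse(z_phot, z_spec):
    """Root-mean-square error of photometric redshift estimates,
    with residuals delta_z = z_phot - z_spec."""
    residuals = [zp - zs for zp, zs in zip(z_phot, z_spec)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical spectroscopic vs. photometric redshifts:
z_spec = [0.10, 0.25, 0.40, 0.55]
z_phot = [0.12, 0.23, 0.43, 0.52]
print(round(redshift_rmse(z_phot, z_spec), 4))  # -> 0.0255
```

The same RMSE definition applies whichever regressor (SOM, neural network, Gaussian process) produced the photometric estimates.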
NASA Astrophysics Data System (ADS)
Muralidharan, Balaji; Menon, Suresh
2018-03-01
A high-order adaptive Cartesian cut-cell method, developed in the past by the authors [1] for simulation of compressible viscous flow over static embedded boundaries, is now extended for reacting flow simulations over moving interfaces. The main difficulty related to simulation of moving boundary problems using immersed boundary techniques is the loss of conservation of mass, momentum and energy during the transition of numerical grid cells from solid to fluid and vice versa. Gas phase reactions near solid boundaries can produce large source terms in the governing equations which, if not properly treated for moving boundaries, can result in inaccurate numerical predictions. The small cell clustering algorithm proposed in our previous work is now extended to handle moving boundaries while enforcing strict conservation. In addition, the cell clustering algorithm also preserves the smoothness of the solution near moving surfaces. A second-order Runge-Kutta scheme in which the boundaries are allowed to change during the sub-time steps is employed. This scheme improves the time accuracy of the calculations when the body motion is driven by hydrodynamic forces. Simple one-dimensional reacting and non-reacting studies of a moving piston are first performed to demonstrate the accuracy of the proposed method. Results are then reported for flow past moving cylinders at subsonic and supersonic velocities in a viscous compressible flow and are compared with theoretical and previously available experimental data. The ability of the scheme to handle deforming boundaries and the interaction of hydrodynamic forces with rigid body motion is demonstrated using different test cases. Finally, the method is applied to investigate the detonation initiation and stabilization mechanisms on a cylinder and a sphere when they are launched into a detonable mixture.
The effect of the filling pressure on the detonation stabilization mechanisms over a hyper-velocity sphere launched into a hydrogen-oxygen-argon mixture is studied, and a qualitative comparison of the results with the experimental data is made. Results indicate that the current method is able to correctly reproduce the different regimes of combustion observed in the experiments. Through the various examples it is demonstrated that our method is robust and accurate for simulation of compressible viscous reacting flow problems with moving/deforming boundaries.
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M
2008-01-01
Background Kulldorff's spatial scan statistic and its software implementation – SaTScan – are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. Results We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. 
Finally, we propose a logical approach to proceed through the analysis of SaTScan results. Conclusion The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. Method We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit. PMID:18992163
Critical study of higher order numerical methods for solving the boundary-layer equations
NASA Technical Reports Server (NTRS)
Wornom, S. F.
1978-01-01
A fourth-order box method is presented for calculating numerical solutions to parabolic partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second-order box scheme to fourth order, was demonstrated by application to the incompressible laminar and turbulent boundary-layer equations. The efficiency of the present method is compared with that of two-point and three-point higher-order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three-point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than the higher-order methods for both laminar and turbulent flows.
A level set method for cupping artifact correction in cone-beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Shipeng; Li, Haibo; Ge, Qi
2015-08-15
Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.
NASA Astrophysics Data System (ADS)
Snyder, Jeff; Hanstock, Chris C.; Wilman, Alan H.
2009-10-01
A general in vivo magnetic resonance spectroscopy editing technique is presented to detect weakly coupled spin systems through subtraction, while preserving singlets through addition, and is applied to the specific brain metabolite γ-aminobutyric acid (GABA) at 4.7 T. The new method uses double spin echo localization (PRESS) and is based on a constant echo time difference spectroscopy approach employing subtraction of two asymmetric echo timings, which is normally only applicable to strongly coupled spin systems. By utilizing flip angle reduction of one of the two refocusing pulses in the PRESS sequence, we demonstrate that this difference method may be extended to weakly coupled systems, thereby providing a very simple yet effective editing process. The difference method is first illustrated analytically using a simple two spin weakly coupled spin system. The technique was then demonstrated for the 3.01 ppm resonance of GABA, which is obscured by the strong singlet peak of creatine in vivo. Full numerical simulations, as well as phantom and in vivo experiments were performed. The difference method used two asymmetric PRESS timings with a constant total echo time of 131 ms and a reduced 120° final pulse, providing 25% GABA yield upon subtraction compared to two short echo standard PRESS experiments. Phantom and in vivo results from human brain demonstrate efficacy of this method in agreement with numerical simulations.
Faster protein folding using enhanced conformational sampling of molecular dynamics simulation.
Kamberaj, Hiqmet
2018-05-01
In this study, we applied a swarm particle-like molecular dynamics (SPMD) approach to enhance conformational sampling of replica exchange simulations. In particular, the approach showed significant improvement in sampling efficiency of conformational phase space when combined with the replica exchange method (REM) in computer simulations of peptide/protein folding. First, we introduce the augmented dynamical system of equations and demonstrate the stability of the algorithm. Then, we illustrate the approach using different fully atomistic and coarse-grained model systems, comparing them with the standard replica exchange method. In addition, we applied SPMD simulation to calculate the time correlation functions of the transitions in a two-dimensional surface to demonstrate the enhancement of transition path sampling. Our results showed that the folded structure can be obtained in a shorter simulation time using the new method than with the non-augmented dynamical system: typically in less than 0.5 ns of replica exchange runs when the native folded structure is known, and within a simulation time scale of 40 ns in the case of blind structure prediction. Furthermore, the root mean square deviations from the reference structures were less than 2 Å. To demonstrate the performance of the new method, we also implemented three simulation protocols using the CHARMM software. Comparisons are also performed with the standard targeted molecular dynamics simulation method. Copyright © 2018 Elsevier Inc. All rights reserved.
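The replica exchange method (REM) mentioned above accepts or rejects temperature swaps between replicas with a Metropolis criterion. A minimal sketch of that acceptance rule in its generic textbook form (not the authors' SPMD-augmented implementation; all numeric values are hypothetical):

```python
import math
import random

def swap_accepted(beta_i, beta_j, e_i, e_j, rng=random.random):
    """Standard replica-exchange Metropolis swap criterion:
    accept the exchange with probability min(1, exp(delta)),
    where delta = (beta_i - beta_j) * (E_i - E_j)."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0 or rng() < math.exp(delta)

# A colder replica (higher beta) holding the higher energy always swaps:
print(swap_accepted(beta_i=1.0, beta_j=0.5, e_i=10.0, e_j=5.0))  # -> True
```

Injecting a deterministic `rng` (as in the test below) makes the stochastic branch reproducible, which is useful when validating an REM implementation.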
Interfacial gauge methods for incompressible fluid dynamics
Saye, R.
2016-06-10
Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of "gauge freedom" to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena.
Tice, George; Andaloro, Bridget; White, H Kirk; Bolton, Lance; Wang, Siqun; Davis, Eugene; Wallace, Morgan
2009-01-01
In 2006, DuPont Qualicon introduced the BAX system Q7 instrument for use with its assays. To demonstrate the equivalence of the new and old instruments, a validation study was conducted using the BAX system PCR Assay for Salmonella, AOAC Official Method 2003.09, on three food types. The foods were simultaneously analyzed with the BAX system Q7 instrument and either the U.S. Food and Drug Administration Bacteriological Analytical Manual or the U.S. Department of Agriculture-Food Safety and Inspection Service Microbiology Laboratory Guidebook reference method for detecting Salmonella. Comparable performance between the BAX system and the reference methods was observed. Of the 75 paired samples analyzed, 39 samples were positive by both the BAX system and reference methods, and 36 samples were negative by both the BAX system and reference methods, demonstrating 100% correlation. Inclusivity and exclusivity for the BAX system Q7 instrument were also established by testing 50 Salmonella strains and 20 non-Salmonella isolates. All Salmonella strains returned positive results, and all non-Salmonella isolates returned a negative response.
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1993-01-01
This report presents the formulation of the optimal low- and medium-thrust orbit transfer control problem and methods for numerical solution of the problem. The problem formulation is for final mass maximization and allows for second-harmonic oblateness, atmospheric drag, and three-dimensional, non-coplanar, non-aligned elliptic terminal orbits. We set up some examples to demonstrate the ability of two indirect methods to solve the resulting two-point boundary value problems (TPBVPs). The methods demonstrated are the multiple-point shooting method as formulated in H. J. Oberle's subroutine BOUNDSCO, and the minimizing boundary-condition method (MBCM). We find that although both methods can converge to solutions, there are trade-offs to using either method. BOUNDSCO has very poor convergence for guesses that do not exhibit the correct switching structure, whereas MBCM converges for a wider range of guesses. On the other hand, BOUNDSCO's multi-point structure allows more freedom in guesses by increasing the node points, as opposed to guessing only the initial state in MBCM. Finally, we note an additional drawback of BOUNDSCO: the routine does not supply the user's routines with switching-function polarity information, but only the locations of a preset number of switching points.
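For intuition, shooting methods turn a TPBVP into a root-find on the unknown initial conditions. The toy sketch below is a bisection single-shooter on the linear problem y'' = -y with y(0) = 0, y(π/2) = 1 (whose exact unknown slope is y'(0) = 1, since y = sin x); it is far simpler than BOUNDSCO's multiple shooting or MBCM, and is included only to illustrate the idea:

```python
import math

def integrate(slope, n=20000):
    # Forward-Euler march of y'' = -y from x = 0 with y(0) = 0, y'(0) = slope.
    h = (math.pi / 2) / n
    y, v = 0.0, slope
    for _ in range(n):
        y, v = y + h * v, v - h * y
    return y  # value of y at x = pi/2

def shoot(target=1.0, lo=0.0, hi=2.0, iters=50):
    """Single shooting: bisect on the unknown initial slope until the
    integrated trajectory hits the far boundary condition y(pi/2) = target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(shoot(), 3))  # -> 1.0, matching the exact slope y'(0) = 1
```

Multiple shooting (as in BOUNDSCO) splits the interval at node points and matches states across them, which is what gives the extra freedom in initial guesses noted above.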
FY17 Status Report on the Initial EPP Finite Element Analysis of Grade 91 Steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, M. C.; Sham, T. -L.
This report describes a modification to the elastic-perfectly plastic (EPP) strain limits design method to account for cyclic softening in Gr. 91 steel. The report demonstrates that the unmodified EPP strain limits method described in the current ASME code case is not conservative for materials with substantial cyclic softening behavior like Gr. 91 steel. However, the EPP strain limits method can be modified to be conservative for softening materials by using softened isochronous stress-strain curves in place of the standard curves developed from unsoftened creep experiments. The report provides softened curves derived from inelastic material simulations and factors describing the transformation of unsoftened curves to a softened state. Furthermore, the report outlines a method for deriving these factors directly from creep/fatigue tests. If the material softening saturates, the proposed EPP strain limits method can be further simplified, providing a methodology based on temperature-dependent softening factors that could be implemented in an ASME code case allowing the use of the EPP strain limits method with Gr. 91. Finally, the report demonstrates the conservatism of the modified method when applied to inelastic simulation results and two-bar experiments.
Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John
2018-03-07
DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis is employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied on single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data.
This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
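The two information-theoretic quantities named above, Shannon entropy and the Jensen-Shannon distance, can be sketched for discrete distributions as follows (the inputs are toy methylation-level histograms, not actual WGBS output):

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def js_distance(p, q):
    """Jensen-Shannon distance: the square root of the JS divergence
    JSD(p, q) = H(m) - (H(p) + H(q)) / 2, where m = (p + q) / 2."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    jsd = shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))
    return math.sqrt(jsd)

# Hypothetical methylation-level distributions for a test and a reference sample:
p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
print(round(js_distance(p, q), 3))  # -> 0.604
```

Unlike the Kullback-Leibler divergence, the Jensen-Shannon distance is symmetric and bounded, which is what makes it convenient for comparing a test sample against a reference.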
Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah
2018-02-01
Quantitative structure-activity relationship (QSAR) is an effective computational technique for drug design that relates the chemical structures of compounds to their biological activities. Feature selection is an important step in QSAR-based drug design to select the most relevant descriptors. One of the most popular feature selection methods for classification problems is the Fisher score, whose aim is to minimize the within-class distance and maximize the between-class distance. In this study, the properties of the Fisher criterion were extended for QSAR models to define new distance metrics based on the continuous activity values of compounds with known activities. Then, a semi-supervised feature selection method was proposed based on the combination of the Fisher and Laplacian criteria, which exploits both compounds with known and unknown activities to select the relevant descriptors. To demonstrate the efficiency of the proposed semi-supervised feature selection method in selecting the relevant descriptors, we applied the method and other feature selection methods to three QSAR data sets: serine/threonine-protein kinase PLK3 inhibitors, ROCK inhibitors, and phenol compounds. The results demonstrated that the QSAR models built on the descriptors selected by the proposed semi-supervised method perform better than the other models. This indicates the efficiency of the proposed method in selecting the relevant descriptors using the compounds with known and unknown activities. The results of this study showed that compounds with known and unknown activities can be helpful for improving the performance of the combined Fisher- and Laplacian-based feature selection method.
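The classical Fisher score referred to above ranks each descriptor by its between-class scatter divided by its within-class scatter. A minimal single-feature sketch (illustrative descriptor values only; the paper's semi-supervised Fisher-Laplacian combination is not reproduced here):

```python
def fisher_score(values, labels):
    """Classical Fisher score of one feature: between-class scatter
    over within-class scatter; larger scores mean better separation."""
    classes = sorted(set(labels))
    mean_all = sum(values) / len(values)
    between, within = 0.0, 0.0
    for c in classes:
        vals = [v for v, lab in zip(values, labels) if lab == c]
        n = len(vals)
        mu = sum(vals) / n
        var = sum((v - mu) ** 2 for v in vals) / n
        between += n * (mu - mean_all) ** 2  # class mean vs. global mean
        within += n * var                    # spread inside the class
    return between / within

# Hypothetical descriptor values for two well-separated activity classes:
x = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
y = [0, 0, 0, 1, 1, 1]
print(round(fisher_score(x, y), 1))  # -> 150.0
```

In a full pipeline the score is computed per descriptor and the top-ranked descriptors are retained for model building.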
Forty-five degree backscattering-mode nonlinear absorption imaging in turbid media.
Cui, Liping; Knox, Wayne H
2010-01-01
Two-color nonlinear absorption imaging has been previously demonstrated with endogenous contrast of hemoglobin and melanin in turbid media using transmission-mode detection and a dual-laser technology approach. For clinical applications, it would be generally preferable to use backscattering mode detection and a simpler single-laser technology. We demonstrate that imaging in backscattering mode in turbid media using nonlinear absorption can be obtained with as little as 1-mW average power per beam with a single laser source. Images have been achieved with a detector receiving backscattered light at a 45-deg angle relative to the incoming beams' direction. We obtain images of capillary tube phantoms with resolution as high as 20 microm and penetration depth up to 0.9 mm for a 300-microm tube at SNR approximately 1 in calibrated scattering solutions. Simulation results of the backscattering and detection process using nonimaging optics are demonstrated. A Monte Carlo-based method shows that the nonlinear signal drops exponentially as the depth increases, which agrees well with our experimental results. Simulation also shows that with our current detection method, only 2% of the signal is typically collected with a 5-mm-radius detector.
LDRD Report: Topological Design Optimization of Convolutes in Next Generation Pulsed Power Devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cyr, Eric C.; von Winckel, Gregory John; Kouri, Drew Philip
This LDRD project was developed around the ambitious goal of applying PDE-constrained optimization approaches to design Z-machine components whose performance is governed by electromagnetic and plasma models. This report documents the results of this LDRD project. Our differentiating approach was to use topology optimization methods developed for structural design and extend them for application to electromagnetic systems pertinent to the Z-machine. To achieve this objective, a suite of optimization algorithms was implemented in the ROL library, part of the Trilinos framework. These methods were applied to standalone demonstration problems and the Drekar multi-physics research application. Out of this exploration a new augmented Lagrangian approach to structural design problems was developed. We demonstrate that this approach has favorable mesh-independent performance: both the final design and the algorithmic performance were independent of the size of the mesh. In addition, topology optimization formulations for the design of conducting networks were developed and demonstrated. Of note, this formulation was used to develop a design for the inner magnetically insulated transmission line on the Z-machine. The resulting electromagnetic device is compared with theoretically postulated designs.
[Economic effects of integrated RIS-PACS solution in the university environment].
Kröger, M; Nissen-Meyer, S; Wetekam, V; Reiser, M
1999-04-01
The goal of the current article is to demonstrate how qualitative and monetary effects resulting from an integrated RIS/PACS installation can be evaluated. First of all, the system concept of a RIS/PACS solution for a university hospital is defined and described. Based on this example, a generic method for the evaluation of qualitative and monetary effects as well as associated risks is depicted and demonstrated. To this end, qualitative analyses, investment calculations and risk analysis are employed. The sample analysis of a RIS/PACS solution specially designed for a university hospital demonstrates positive qualitative and monetary effects of the system. Under ideal conditions the payoff time of the investments is reached after 4 years of an assumed 8 years effective life of the system. Furthermore, under conservative assumptions, the risk analysis shows a probability of 0% for realising a negative net present value at the end of the payoff time period. It should be pointed out that the positive result of this sample analysis will not necessarily apply to other clinics or hospitals. However, the same methods may be used for the individual evaluation of the qualitative and monetary effects of a RIS/PACS installation in any clinic.
Methodology, Methods, and Metrics for Testing and Evaluating Augmented Cognition Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greitzer, Frank L.
The augmented cognition research community seeks cognitive neuroscience-based solutions to improve warfighter performance by applying and managing mitigation strategies to reduce workload and improve the throughput and quality of decisions. The focus of augmented cognition mitigation research is to define, demonstrate, and exploit neuroscience and behavioral measures that support inferences about the warfighter’s cognitive state that prescribe the nature and timing of mitigation. A research challenge is to develop valid evaluation methodologies, metrics and measures to assess the impact of augmented cognition mitigations. Two considerations are external validity, which is the extent to which the results apply to operational contexts; and internal validity, which reflects the reliability of performance measures and the conclusions based on analysis of results. The scientific rigor of the research methodology employed in conducting empirical investigations largely affects the validity of the findings. External validity requirements also compel us to demonstrate operational significance of mitigations. Thus it is important to demonstrate effectiveness of mitigations under specific conditions. This chapter reviews some cognitive science and methodological considerations in designing augmented cognition research studies and associated human performance metrics and analysis methods to assess the impact of augmented cognition mitigations.
Cornelissen, M; Salmon, P M; Stanton, N A; McClure, R
2015-01-01
While a safe systems approach has long been acknowledged as the underlying philosophy of contemporary road safety strategies, systemic applications are sparse. This article argues that systems-based methods from the discipline of Ergonomics have a key role to play in road transport design and evaluation. To demonstrate this, the Cognitive Work Analysis framework was used to evaluate two road designs - a traditional Melbourne intersection and a cut-through design for future intersections based on road safety safe systems principles. The results demonstrate that, although the cut-through intersection appears different in layout from the traditional intersection, system constraints are not markedly different. Furthermore, the analyses demonstrated that redistribution of constraints in the cut-through intersection resulted in emergent behaviour that was not anticipated and could prove problematic. Reflecting this limited understanding of emergent behaviour, similar design-induced problems are apparent across both intersections. Specifically, incompatibilities between infrastructure, vehicles and different road users were not dealt with by the proposed design changes. The importance of applying systems methods in the design and evaluation of road transport systems is discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fong, L. S.; Ambrose, R. F.
2017-12-01
Remote sensing is an excellent way to assess the changing condition of streams and wetlands. Several studies have measured large-scale changes in riparian condition indicators, but few have remotely applied multi-metric assessments on a finer scale to measure changes, such as those caused by restoration, in the condition of small riparian areas. We developed an aerial imagery assessment method (AIAM) that combines landscape, hydrology, and vegetation observations into one index describing overall ecological condition of non-confined streams. Verification of AIAM demonstrated that sites in good condition (as assessed on-site by the California Rapid Assessment Method) received high AIAM scores. (AIAM was not verified with poor condition sites.) Spearman rank correlation tests comparing AIAM and the field-based California Rapid Assessment Method (CRAM) results revealed that some components of the two methods were highly correlated. The application of AIAM is illustrated with time-series restoration trajectories of three southern California stream restoration projects aged 15 to 21 years. The trajectories indicate that the projects improved in condition in years following their restoration, with vegetation showing the most dynamic change over time. AIAM restoration trajectories also overlapped to different degrees with CRAM chronosequence restoration performance curves that demonstrate the hypothetical development of high-performing projects. AIAM has high potential as a remote ecological assessment method and effective tool to determine restoration trajectories. Ultimately, this tool could be used to further improve stream and wetland restoration management.
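The Spearman comparison of AIAM and CRAM scores described above can be sketched in a few lines. The site scores below are invented illustration data, not values from the study, and the small pure-Python rank-correlation routine stands in for a statistics-library call.

```python
def rank(values):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical condition scores (0-100) for eight sites -- invented data.
aiam_scores = [62, 71, 55, 80, 48, 77, 66, 59]  # remote (aerial-imagery) scores
cram_scores = [60, 75, 50, 82, 45, 70, 68, 57]  # field (CRAM-style) scores
print(f"Spearman rho = {spearman_rho(aiam_scores, cram_scores):.3f}")  # -> 0.976
```

A rho near 1 would indicate that the remote and field assessments rank the sites the same way, which is the kind of agreement the verification step tests.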
Experimental demonstration of the control of flexible structures
NASA Technical Reports Server (NTRS)
Schaechter, D. B.; Eldred, D. B.
1984-01-01
The Large Space Structure Technology Flexible Beam Experiment employs a pinned-free flexible beam to demonstrate required methods such as dynamic and adaptive control, as well as various control law design approaches and hardware requirements. An attempt is made to define the mechanization difficulties that may arise in flexible structures. Attention is given to analytical work performed in support of the test facility's development, the final design specifications, the synthesis of the control laws, and experimental results obtained.
Measurement of M²-Curve for Asymmetric Beams by Self-Referencing Interferometer Wavefront Sensor.
Du, Yongzhao
2016-11-29
For asymmetric laser beams, the values of the beam quality factors Mx² and My² are inconsistent if one selects a different coordinate system or measures beam quality under different experimental conditions, even when analyzing the same beam. To overcome this non-uniqueness, a new beam quality characterization method named the M²-curve is developed. The M²-curve not only contains the beam quality factors Mx² and My² in the x-direction and y-direction, respectively, but also introduces a curve of Mxα² versus the rotation angle α of the coordinate axis. Moreover, we also present a real-time measurement method to demonstrate the beam propagation factor M²-curve with a modified self-referencing Mach-Zehnder interferometer-based wavefront sensor (henceforth SRI-WFS). The feasibility of the proposed method is demonstrated with theoretical analysis and experiments on multimode beams. The experimental results showed that the proposed measurement method is simple and fast, requiring only a single-shot measurement without movable parts.
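As a rough illustration of the quantity underlying such a curve, the sketch below computes the second-moment beam width along an axis rotated by α for a synthetic elliptical Gaussian; a full Mxα² determination would additionally require widths at several planes along the beam caustic. The grid, widths, and names here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def second_moment_width(intensity, dx, alpha):
    """Second-moment beam width (2*sigma) along an axis rotated by alpha
    (radians). intensity is a 2-D array I(y, x); dx is the pixel pitch.
    This is only a per-plane width; M^2 itself needs the full caustic."""
    ny, nx = intensity.shape
    xg = (np.arange(nx) - nx / 2) * dx
    yg = (np.arange(ny) - ny / 2) * dx
    X, Y = np.meshgrid(xg, yg)
    P = intensity.sum()
    xc = (intensity * X).sum() / P          # centroid
    yc = (intensity * Y).sum() / P
    U = (X - xc) * np.cos(alpha) + (Y - yc) * np.sin(alpha)  # rotated axis
    return 2.0 * np.sqrt((intensity * U**2).sum() / P)

# Elliptical Gaussian, 1/e^2 half-widths 0.4 (x) and 0.2 (y) -- assumed values.
x = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, x)
I = np.exp(-2 * (X / 0.4)**2 - 2 * (Y / 0.2)**2)
for alpha_deg in (0, 45, 90):
    w = second_moment_width(I, x[1] - x[0], np.radians(alpha_deg))
    print(f"alpha = {alpha_deg:3d} deg  ->  width = {w:.3f}")
    # widths trace the asymmetry: ~0.400 at 0 deg down to ~0.200 at 90 deg
```

Plotting this width (or the corresponding Mxα²) against α over a half-turn yields the kind of angle-resolved characterization the M²-curve provides, with the curve collapsing to a constant only for a rotationally symmetric beam.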
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
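A minimal sketch of the damped Newton iteration described above, applied to a toy two-equation system rather than the helicopter trim equations. The fixed damping parameter `lam` is a simplifying assumption; the trim analysis selects it optimally per iteration.

```python
import numpy as np

def damped_newton(f, jac, x0, lam=1.0, tol=1e-10, max_iter=50):
    """Damped Newton iteration x <- x - lam * J(x)^-1 f(x).

    lam in (0, 1] is the damping parameter; lam = 1 recovers the
    ordinary Newton step. Damping trades per-step progress for a
    larger basin of convergence, mitigating divergence problems."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - lam * np.linalg.solve(jac(x), fx)
    return x

# Toy system: x^2 + y^2 = 4 and x = y, with root (sqrt(2), sqrt(2)).
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
root = damped_newton(f, jac, x0=[1.0, 2.0], lam=0.8)
print(root)
```

The condition number of `jac(x)` along the iteration is the quantity the abstract uses to explain convergence behavior; `np.linalg.cond` could be logged inside the loop to monitor it.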
NASA Astrophysics Data System (ADS)
Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui
2018-02-01
An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, pulse-to-pulse alignment is analyzed for large-delay distance measurement. First, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long-distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in an absolute distance measurement of 20 μm with a path length difference of 9 m. The improvement of the standard deviation in the experimental results shows that these approaches have potential for practical distance measurement.
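The envelope-extraction step can be illustrated on synthetic fringes: the magnitudes of the fringe extrema sample the coherence envelope, and a log-space order-2 polynomial fit (a Gaussian fit) locates the envelope center used for the distance readout. The signal, grid, and parameter values below are invented for illustration and do not reproduce the paper's curve-fitting order or data.

```python
import numpy as np

# Synthetic interference fringes: a Gaussian coherence envelope modulating
# a carrier, standing in for pulse-to-pulse cross-correlation fringes.
x = np.linspace(-60.0, 60.0, 2401)      # optical path difference (um) -- assumed
center, sigma = 7.5, 12.0               # envelope peak position and width -- assumed
fringes = np.exp(-((x - center) ** 2) / (2 * sigma**2)) * np.cos(2 * np.pi * x / 1.55)

# Envelope samples: magnitudes of the local extrema of the fringe pattern.
mag = np.abs(fringes)
is_peak = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])
xp, yp = x[1:-1][is_peak], mag[1:-1][is_peak]

# Gaussian envelope fit as a parabola in log-space; the vertex of the
# fitted parabola gives the envelope center.
keep = yp > 1e-3 * yp.max()             # drop near-zero tail points before log
c2, c1, c0 = np.polyfit(xp[keep], np.log(yp[keep]), 2)
fitted_center = -c1 / (2 * c2)
print(f"fitted envelope center: {fitted_center:.3f} (true {center})")
```

The symmetry-based averaging mentioned in the abstract would correspond here to averaging the extracted envelope with its mirror image about the fitted center before locating the peak.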
NASA Astrophysics Data System (ADS)
Yasuda, Jun; Yoshizawa, Shin; Umemura, Shin-ichiro
2016-07-01
Sonodynamic treatment is a method of treating cancer using reactive oxygen species (ROS) generated by cavitation bubbles in combination with a sonosensitizer at a target tissue. In this treatment method, both localized ROS generation and highly efficient ROS generation are important. In this study, a triggered high-intensity focused ultrasound (HIFU) sequence, which consists of a short, extremely high intensity pulse immediately followed by a long, moderate-intensity burst, was employed for the efficient generation of ROS. In the experiments, a solution sealed in a chamber was exposed to the triggered HIFU sequence. The distribution of generated ROS was then observed via the luminol reaction, and the amount of generated ROS was quantified using the potassium iodide (KI) method. As a result, localized ROS generation was demonstrated by light emission from the luminol reaction. Moreover, both the KI method and the luminol emission demonstrated that the triggered HIFU sequence generates ROS with higher efficiency.