Estimation of gene induction enables a relevance-based ranking of gene sets.
Bartholomé, Kilian; Kreutz, Clemens; Timmer, Jens
2009-07-01
In order to handle and interpret the vast amounts of data produced by microarray experiments, the analysis of sets of genes with a common biological functionality has been shown to be advantageous compared to single gene analyses. Some statistical methods have been proposed to analyse the differential gene expression of gene sets in microarray experiments. However, most of these methods either require threshold values to be chosen for the analysis, or they need some reference set for the determination of significance. We present a method that estimates the number of differentially expressed genes in a gene set without requiring a threshold value for significance of genes. The method is self-contained (i.e., it does not require a reference set for comparison). In contrast to other methods which are focused on significance, our approach emphasizes the relevance of the regulation of gene sets. The presented method measures the degree of regulation of a gene set and is a useful tool to compare the induction of different gene sets and place the results of microarray experiments into the biological context. An R-package is available.
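A minimal sketch of the general idea (not the authors' R implementation): estimate the number of regulated genes in a set from its p-value distribution with a simple Storey-style null-proportion estimator, then rank sets by the estimated fraction regulated. The gene-set data and the lambda cutoff below are assumptions for illustration only.

```python
import numpy as np

def estimated_regulated_genes(pvalues, lam=0.5):
    """Estimate how many genes in a set are differentially expressed.

    Uses a simple Storey-style estimate of the null proportion pi0 from the
    p-value distribution; the expected number of regulated genes is then
    m * (1 - pi0). This illustrates the idea only and is not the estimator
    of Bartholome et al.
    """
    p = np.asarray(pvalues)
    m = p.size
    pi0 = min(1.0, np.mean(p > lam) / (1.0 - lam))  # fraction of "null-looking" p-values
    return m * (1.0 - pi0)

# Rank two hypothetical gene sets by the estimated fraction of regulated genes (relevance).
sets = {
    "setA": np.random.uniform(size=50),
    "setB": np.r_[np.random.uniform(0, 0.01, 20), np.random.uniform(size=30)],
}
ranking = sorted(sets, key=lambda s: estimated_regulated_genes(sets[s]) / len(sets[s]), reverse=True)
print(ranking)
```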
Defining Support Requirements During Conceptual Design of Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Morris, W. D.; White, N. H.; Davis, W. T.; Ebeling, C. E.
1995-01-01
Current methods for defining the operational support requirements of new systems are data intensive and require significant design information. Methods are being developed to aid in the analysis process of defining support requirements for new launch vehicles during their conceptual design phase that work with the level of information available during this phase. These methods will provide support assessments based on the vehicle design and the operating scenarios. The results can be used both to define expected support requirements for new launch vehicle designs and to help evaluate the benefits of using new technologies. This paper describes the models, their current status, and provides examples of their use.
A Formal Approach to Requirements-Based Programming
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.
Testing the Difference of Correlated Agreement Coefficients for Statistical Significance
ERIC Educational Resources Information Center
Gwet, Kilem L.
2016-01-01
This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…
A comparative study on different methods of automatic mesh generation of human femurs.
Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A
1998-01-01
The aim of this study was to evaluate comparatively five methods for automating mesh generation (AMG) when used to mesh a human femur. The five AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the element onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful to assess the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur") and the finite element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and it allows a tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.
Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2004-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-19
... National Ambient Air Quality Standards; Indiana PSD; Indiana State Board Requirements AGENCY: Environmental... from Indiana addressing EPA's requirements for the prevention of significant deterioration (PSD... (PSD elements), or EPA-R05-OAR-2012-0988 (state board requirements), by one of the following methods: 1...
Rapid microbiological assay of serum vitamin B12 by electronic counter
Stuart, J.; Sklaroff, S. A.
1966-01-01
A new method of measuring the growth of Lactobacillus leichmannii is reported. Its adoption for the estimation of serum vitamin B12 levels shortens the incubation period required to five hours at 45°C. The method is compared statistically with a standard method of estimation, requiring incubation at 37°C., by duplicate determinations on 106 hospital patients. The significance of the apparently decreased accuracy of the new method at low serum levels is discussed, and a re-appraisal of the optimum growth temperature of Lactobacillus leichmannii suggested. PMID:5904982
Rapid microbiological assay of serum vitamin B 12 by electronic counter.
Stuart, J; Sklaroff, S A
1966-01-01
A new method of measuring the growth of Lactobacillus leichmannii is reported. Its adoption for the estimation of serum vitamin B(12) levels shortens the incubation period required to five hours at 45 degrees C. The method is compared statistically with a standard method of estimation, requiring incubation at 37 degrees C., by duplicate determinations on 106 hospital patients. The significance of the apparently decreased accuracy of the new method at low serum levels is discussed, and a re-appraisal of the optimum growth temperature of Lactobacillus leichmannii suggested.
Complying with US and European complaint handling requirements.
Donawa, M E
1997-09-01
The importance of customer complaints for providing valuable information on the use of medical devices is clearly reflected in United States (US) and European quality system requirements for handling complaints. However, there are significant differences in US and European complaint handling requirements. This article will discuss those differences and methods for ensuring compliance.
NASA Astrophysics Data System (ADS)
Alfieri, Luisa
2015-12-01
Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which lead to a reduction of the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods to be used. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
A novel asynchronous access method with binary interfaces
2008-01-01
Background: Traditionally, synchronous access strategies require users to comply with one or more time constraints in order to communicate intent with a binary human-machine interface (e.g., mechanical, gestural or neural switches). Asynchronous access methods are preferable, but have not been used with binary interfaces in the control of devices that require more than two commands to be successfully operated. Methods: We present the mathematical development and evaluation of a novel asynchronous access method that may be used to translate sporadic activations of binary interfaces into distinct outcomes for the control of devices requiring an arbitrary number of commands to be controlled. With this method, users are required to activate their interfaces only when the device under control behaves erroneously. Then, a recursive algorithm, incorporating contextual assumptions relevant to all possible outcomes, is used to obtain an informed estimate of user intention. We evaluate this method by simulating a control task requiring a series of target commands to be tracked by a model user. Results: When compared to a random selection, the proposed asynchronous access method offers a significant reduction in the number of interface activations required from the user. Conclusion: This novel access method offers a variety of advantages over traditional synchronous access strategies and may be adapted to a wide variety of contexts, with primary relevance to applications involving direct object manipulation. PMID:18959797
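A toy sketch of the idea: the user activates the binary switch only when the device acts wrongly, and a recursive Bayesian update over the candidate commands incorporates that evidence. The error-rate parameters and the four-command set are assumptions, not the published algorithm.

```python
import numpy as np

def update_belief(belief, executed, activated, p_false_alarm=0.05, p_miss=0.1):
    """Recursively update the probability of each candidate command.

    belief    : prior probability over the N possible intended commands
    executed  : index of the command the device just executed
    activated : True if the user pressed the binary switch (signalling an error)
    The likelihood model (false-alarm and miss rates) is illustrative only.
    """
    likelihood = np.empty_like(belief)
    for cmd in range(belief.size):
        correct = (cmd == executed)
        if activated:   # switch pressed -> evidence the executed command was wrong
            likelihood[cmd] = p_false_alarm if correct else (1.0 - p_miss)
        else:           # no press -> evidence the executed command was intended
            likelihood[cmd] = (1.0 - p_false_alarm) if correct else p_miss
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.full(4, 0.25)                                   # four possible commands, uniform prior
belief = update_belief(belief, executed=2, activated=True)  # user flags an error
print(belief.round(3))                                      # mass shifts away from command 2
```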
Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
Methods to Register Models and Input/Output Parameters for Integrated Modeling
Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework’s functionality to bridge the gap between the user’s kno...
Keyboard before Head Tracking Depresses User Success in Remote Camera Control
NASA Astrophysics Data System (ADS)
Zhu, Dingyun; Gedeon, Tom; Taylor, Ken
In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving camera control either to automatic control or requiring the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue, a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and the use of a Pan-Tilt-Zoom (PTZ) camera. Camera control was via either a keyboard or head tracking, using two different sets of head gestures, called “head motion” and “head flicking”, for turning camera motion on/off. Our results show that the head motion control was able to provide a comparable performance to using a keyboard, while head flicking was significantly worse. In addition, the sequence of use of the three control methods is highly significant. It appears that use of the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected supports that the worst (by performance) method was disliked by participants. Surprisingly, use of that worst method as the first control method significantly enhanced performance using the other two control methods.
Comparison between videotape and personalized patient education for anticoagulant therapy.
Stone, S; Holden, A; Knapic, N; Ansell, J
1989-07-01
To assess the effectiveness of videotape patient education, 22 patients were randomized to receive either videotape or personalized teaching for oral anticoagulant (warfarin) therapy. Both groups scored significantly higher on a questionnaire designed to assess knowledge gained after instruction, with no significant difference between the two groups. Videotape instruction required substantially less nursing time. A second questionnaire assessed patient satisfaction with respect to both methods, which were rated equally effective and worthwhile. Videotape teaching is an effective and well-accepted alternative form of patient education requiring significantly less personnel time.
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA
NASA Astrophysics Data System (ADS)
Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.
2018-03-01
Comparison with a reference method is one of the necessary requirements for the validation of non-standard methods. This comparison was made using the experiment planning technique with two-way ANOVA. In the ANOVA, the results obtained using the EDXRF method, to be validated, were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for the comparative study of the elements: molybdenum, niobium, copper, nickel, manganese, chromium and vanadium. All F-tests of the elements indicate that the null hypothesis (H0) has not been rejected. As a result, there is no significant difference between the methods compared. Therefore, according to this study, it is concluded that the EDXRF method satisfies this method-comparison requirement for validation.
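A sketch of such a comparison with statsmodels; the measurement values below are fabricated placeholders, and the factor layout (method x element) is an assumption about how the ANOVA was structured.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical concentration measurements (wt%) for each element by each method.
data = pd.DataFrame({
    "method":  ["EDXRF"] * 7 + ["ASTM_E572"] * 7,
    "element": ["Mo", "Nb", "Cu", "Ni", "Mn", "Cr", "V"] * 2,
    "value":   [0.21, 0.04, 0.15, 8.1, 1.4, 18.2, 0.06,
                0.20, 0.04, 0.16, 8.0, 1.4, 18.1, 0.06],
})

# Two-way ANOVA: does "method" have a significant effect once "element" is accounted for?
model = ols("value ~ C(method) + C(element)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # F-test row for C(method): H0 = no difference between methods
```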
Method of separating boron isotopes
Jensen, R.J.; Thorne, J.M.; Cluff, C.L.
1981-01-23
A method of boron isotope enrichment involving the isotope preferential photolysis of (2-chloroethenyl)-dichloroborane as the feed material. The photolysis can readily be achieved with CO2 laser radiation and using fluences significantly below those required to dissociate BCl3.
Method of separating boron isotopes
Jensen, Reed J.; Thorne, James M.; Cluff, Coran L.; Hayes, John K.
1984-01-01
A method of boron isotope enrichment involving the isotope preferential photolysis of (2-chloroethenyl)dichloroborane as the feed material. The photolysis can readily be achieved with CO2 laser radiation and using fluences significantly below those required to dissociate BCl3.
Tharyan, Prathap; George, Aneesh Thomas; Kirubakaran, Richard; Barnabas, Jabez Paul
2013-01-01
We sought to evaluate if editorial policies and the reporting quality of randomized controlled trials (RCTs) had improved since our 2004-05 survey of 151 RCTs in 65 Indian journals, and to compare reporting quality of protocols in the Clinical Trials Registry-India (CTRI). An observational study of endorsement of Consolidated Standards for the Reporting of Trials (CONSORT) and International Committee of Medical Journal Editors (ICMJE) requirements in the instructions to authors in Indian journals, and compliance with selected requirements in all RCTs published during 2007-08 vs. our previous survey and between all RCT protocols in the CTRI on August 31, 2010 and published RCTs from both surveys. Journal policies endorsing the CONSORT statement (22/67, 33%) and ICMJE requirements (35/67, 52%) remained suboptimal, and only 4 of 13 CONSORT items were reported in more than 50% of the 145 RCTs assessed. Reporting of ethical issues had improved significantly, and that of methods addressing internal validity had not improved. Adequate methods were reported significantly more frequently in 768 protocols in the CTRI, than in the 296 published trials. The CTRI template facilitates the reporting of valid methods in registered trial protocols. The suboptimal compliance with CONSORT and ICMJE requirements in RCTs published in Indian journals reduces credibility in the reliability of their results. Copyright © 2013 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Javaid, Zarrar; Unsworth, Charles P., E-mail: c.unsworth@auckland.ac.nz; Boocock, Mark G.
2016-03-15
Purpose: The aim of this work is to demonstrate a new, user-friendly image processing technique that can provide a “near real-time” 3D reconstruction of the articular cartilage of the human knee from MR images. This would serve as a point-of-care 3D visualization tool which would benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as “contour interpolation (CI)” with radial basis functions (RBFs) which they describe as “CI-RBFs.” The authors also present a spline boundary correction which further enhances volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively, thus significantly improving the speed of reconstruction (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr, providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated how the CI-RBF method matches the volume estimation of a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Furthermore, the performance of the segmentation method used for the extraction of the femoral, tibial, and patellar cartilages is assessed with a Dice similarity coefficient, sensitivity, and specificity measure providing high agreement to manual segmentation. Conclusions: The CI-RBF method provides a fast, accurate, and robust 3D model reconstruction that matches Carr’s RBF method, 3D DOCTOR, and a manual benchmark method in accuracy and significantly improves upon Carr’s RBF method in data requirement and computational speed. In addition, the visualization tool has been designed to quickly segment MR images requiring only four mouse clicks per MR image slice.
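A compact illustration of two ingredients mentioned in the abstract: fitting a smooth surface through sparse contour points with an RBF interpolator, and scoring segmentation agreement with the Dice coefficient. The data and parameters are placeholders; this is not the CI-RBF pipeline itself.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Fit a smooth surface z = f(x, y) through sparse contour samples (placeholder data).
pts = np.random.rand(200, 2)                              # (x, y) of contour samples
z = np.sin(pts[:, 0] * 3) * np.cos(pts[:, 1] * 3)         # stand-in "height" values
surface = RBFInterpolator(pts, z, kernel="thin_plate_spline")
grid = np.stack(np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64)), axis=-1).reshape(-1, 2)
recon = surface(grid)                                     # evaluate the reconstruction on a grid

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

mask_a = recon.reshape(64, 64) > 0
mask_b = np.roll(mask_a, 1, axis=0)        # slightly shifted mask, stand-in for a manual segmentation
print("Dice:", round(dice(mask_a, mask_b), 3))
```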
Archiving and Distributing Mouse Lines by Sperm Cryopreservation, IVF, and Embryo Transfer
Takahashi, Hideko; Liu, Chengyu
2012-01-01
The number of genetically modified mouse lines has been increasing exponentially in the past few decades. In order to safeguard them from accidental loss and genetic drifting, to reduce animal housing cost, and to efficiently distribute them around the world, it is important to cryopreserve these valuable genetic resources. Preimplantation-stage embryos from thousands of mouse lines have been cryopreserved during the past two to three decades. Although reliable, this method requires several hundreds of embryos, which demands a sizable breeding colony, to safely preserve each line. This requirement imposes significant delay and financial burden for the archiving effort. Sperm cryopreservation is now emerging as the leading method for storing and distributing mouse lines, largely due to the recent finding that addition of a reducing agent, monothioglycerol, into the cryoprotectant can significantly increase the in vitro fertilization (IVF) rate in many mouse strains, including the most widely used C57BL/6 strain. This method is quick, inexpensive, and requires only two breeding age male mice, but it still remains tricky and strain-dependent. A small change in experimental conditions can lead to significant variations in the outcome. In this chapter, we describe in detail our sperm cryopreservation, IVF, and oviduct transfer procedures for storing and reviving genetically modified mouse lines. PMID:20691860
Anderson, G F; Han, K C; Miller, R H; Johns, M E
1997-01-01
OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613
P-Value Club: Teaching Significance Level on the Dance Floor
ERIC Educational Resources Information Center
Gray, Jennifer
2010-01-01
Courses: Beginning research methods and statistics courses, as well as advanced communication courses that require reading research articles and completing research projects involving statistics. Objective: Students will understand the difference between significant and nonsignificant statistical results based on p-value.
NASA Astrophysics Data System (ADS)
Hollingsworth, Peter Michael
The drive toward robust systems design, especially with respect to system affordability throughout the system life-cycle, has led to the development of several advanced design methods. While these methods have been extremely successful in satisfying the needs for which they have been developed, they inherently leave a critical area unaddressed. None of them fully considers the effect of requirements on the selection of solution systems. The goal of all of current modern design methodologies is to bring knowledge forward in the design process to the regions where more design freedom is available and design changes cost less. Therefore, it seems reasonable to consider the point in the design process where the greatest restrictions are placed on the final design, the point in which the system level requirements are set. Historically the requirements have been treated as something handed down from above. However, neither the customer nor the solution provider completely understood all of the options that are available in the broader requirements space. If a method were developed that provided the ability to understand the full scope of the requirements space, it would allow for a better comparison of potential solution systems with respect to both the current and potential future requirements. The key to a requirements conscious method is to treat requirements differently from the traditional approach. The method proposed herein is known as Requirements Controlled Design (RCD). By treating the requirements as a set of variables that control the behavior of the system, instead of variables that only define the response of the system, it is possible to determine a-priori what portions of the requirements space that any given system is capable of satisfying. Additionally, it should be possible to identify which systems can satisfy a given set of requirements and the locations where a small change in one or more requirements poses a significant risk to a design program. This thesis puts forth the theory and methodology to enable RCD, and details and validates a specific method called the Modified Strength Pareto Evolutionary Algorithm (MSPEA).
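A small sketch of the Pareto machinery underlying SPEA-style algorithms such as the MSPEA named above: a dominance test and the "strength" of each candidate (how many others it dominates). The objective vectors are placeholders, and the modifications specific to MSPEA are not shown.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization of every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def strengths(population):
    """SPEA-style strength: for each candidate, the count of solutions it dominates."""
    return [sum(dominates(p, q) for q in population if q is not p) for p in population]

# Placeholder objective vectors (e.g., cost, weight) for candidate designs.
pop = [(3.0, 5.0), (2.5, 6.0), (4.0, 4.0), (3.5, 5.5)]
print(strengths(pop))
```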
NASA Technical Reports Server (NTRS)
Roth, Don J.; Kautz, Harold E.; Abel, Phillip B.; Whalen, Mike F.; Hendricks, J. Lynne; Bodis, James R.
2000-01-01
Surface topography, which significantly affects the performance of many industrial components, is normally measured with diamond-tip profilometry over small areas or with optical scattering methods over larger areas. To develop air-coupled surface profilometry, the NASA Glenn Research Center at Lewis Field initiated a Space Act Agreement with Sonix, Inc., through two Glenn programs, the Advanced High Temperature Engine Materials Program (HITEMP) and COMMTECH. The work resulted in quantitative surface topography profiles obtained using only high-frequency, focused ultrasonic pulses in air. The method is nondestructive, noninvasive, and noncontact, and it does not require light-reflective surfaces. Air surface profiling may be desirable when diamond-tip or laser-based methods are impractical, such as over large areas, when a significant depth range is required, or for curved surfaces. When the configuration is optimized, the method is reasonably rapid and all the quantitative analysis facilities are online, including two- and three-dimensional visualization, extreme value filtering (for faulty data), and leveling.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
A Practical Method for Identifying Significant Change Scores
ERIC Educational Resources Information Center
Cascio, Wayne F.; Kurtines, William M.
1977-01-01
A test of significance for identifying individuals who are most influenced by an experimental treatment as measured by pre-post test change score is presented. The technique requires true difference scores, the reliability of obtained differences, and their standard error of measurement. (Author/JKS)
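A sketch of one standard formulation of such a test, a reliable-change style z statistic built from the standard error of measurement of a difference score; this may differ in detail from the authors' procedure, and the example values are placeholders.

```python
import math

def change_score_z(pre, post, sd_pre, reliability):
    """z statistic for an individual pre-post change score.

    The standard error of measurement of a difference is taken as
    SEM_diff = sd_pre * sqrt(2 * (1 - reliability)); scores with |z| > 1.96
    would be flagged as significant change at the 0.05 level.
    """
    sem_diff = sd_pre * math.sqrt(2.0 * (1.0 - reliability))
    return (post - pre) / sem_diff

print(round(change_score_z(pre=42, post=55, sd_pre=10, reliability=0.85), 2))
```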
Adaptive Stress Testing of Airborne Collision Avoidance Systems
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Brat, Guillaume P.; Owen, Michael P.
2015-01-01
This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.
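For contrast with the MCTS approach, a minimal sketch of the direct Monte Carlo baseline the paper compares against: roll out a black-box simulator many times under controlled stochasticity and keep the most likely trajectory that reaches the event. The simulator interface and the toy random-walk simulator are assumptions.

```python
import math
import random

def direct_monte_carlo(simulate, n_rollouts=10_000, horizon=50):
    """Baseline search for the most likely trajectory that reaches the event.

    `simulate(seed, horizon)` is an assumed interface returning
    (reached_event, log_prob, trajectory) for one seeded rollout; only the
    controlled stochasticity (the seed) is varied between rollouts.
    """
    best_log_prob, best_traj = -math.inf, None
    for _ in range(n_rollouts):
        seed = random.getrandbits(32)
        reached, log_prob, traj = simulate(seed, horizon)
        if reached and log_prob > best_log_prob:
            best_log_prob, best_traj = log_prob, traj
    return best_log_prob, best_traj

def toy_simulator(seed, horizon):
    """Stand-in black-box simulator: a random walk whose 'event' is exceeding 3.0."""
    rng = random.Random(seed)
    x, log_prob, traj = 0.0, 0.0, []
    for _ in range(horizon):
        step = rng.gauss(0.0, 1.0)
        log_prob += -0.5 * step * step      # un-normalized Gaussian log-density of the step
        x += step
        traj.append(x)
        if x > 3.0:
            return True, log_prob, traj
    return False, log_prob, traj

print(direct_monte_carlo(toy_simulator, n_rollouts=2000, horizon=20)[0])
```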
ABSTRACT Background and Aims. Waterborne diseases originating from bovine fecal material are a significant public health issue. Ensuring water quality requires the use of methods that can consistently identify pollution across a broad range of management practices. One practi...
Tavares, Suelene B N; Alves de Sousa, Nadja L; Manrique, Edna J C; Pinheiro de Albuquerque, Zair B; Zeferino, Luiz C; Amaral, Rita G
2008-06-25
Rapid prescreening (RPS) is an internal quality-control (IQC) method that is used both to reduce errors in the laboratory and to measure the sensitivity of routine screening (RS). Little direct comparison data are available comparing RPS with other more widely used IQC methods. The authors compared the performance of RPS, 10% random review of negative smears (R-10%), and directed rescreening of negative smears based on clinical risk criteria (RCRC) over 1 year in a community clinic setting. In total, 6,135 smears were evaluated. The sensitivity of RS alone was 71.3%. RPS detected significantly more (132 cases) false-negative (FN) cases than either R-10% (7 cases) or RCRC (32 cases). RPS significantly improved the overall sensitivity of the laboratory (71.3-92.2%; P = .001); neither R-10% nor RCRC significantly changed the sensitivity of RS. RPS was not as specific as the other methods, although nearly 68% of all abnormalities detected by RPS were verified as real. RPS of 100% of smears required the same amount of time as RCRC but required twice as much time as R-10%. The current results demonstrated that RPS is a much more effective IQC method than either R-10% or RCRC. RPS detects significantly more errors and can improve the overall sensitivity of a laboratory with either a modest increase or no increase in overall time spent on IQC. R-10% is an insensitive IQC method, and neither R-10% nor RCRC can significantly improve the overall sensitivity of a laboratory. (c) 2008 American Cancer Society.
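A small arithmetic sketch of how routine-screening sensitivity is re-estimated once an IQC method uncovers false negatives; the counts below are placeholders, not the study's data.

```python
def routine_screening_sensitivity(tp_routine, fn_found_by_iqc):
    """Sensitivity of routine screening = abnormal smears caught at routine
    screening / all abnormal smears known after quality-control review."""
    return tp_routine / (tp_routine + fn_found_by_iqc)

# Placeholder counts: 200 abnormals caught routinely, 80 additional false negatives found by RPS.
print(round(routine_screening_sensitivity(200, 80), 3))   # sensitivity of routine screening alone
```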
A coupling method for a cardiovascular simulation model which includes the Kalman filter.
Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya
2012-01-01
Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multiscale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved for each timestep. Therefore, we proposed a coupling method which decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations for the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
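A toy sketch of the coupling idea: a predictor supplies the initial guess for the nonlinear solve at each timestep, which can cut solver work compared with a cold start. The scalar model problem and the crude constant-velocity predictor (standing in for a Kalman filter) are assumptions, not the authors' cardiovascular model.

```python
import numpy as np
from scipy.optimize import fsolve

def residual(x, t):
    # Stand-in for the coupled circulation/ventricle equations at time t.
    return np.array([x[0] ** 3 + 0.5 * x[0] - np.sin(0.3 * t)])

x_prev, v_prev = np.zeros(1), np.zeros(1)   # crude constant-velocity predictor state
total_nfev = 0
for t in range(1, 50):
    x0 = x_prev + v_prev                    # predicted state used as the initial guess
    sol, info, _, _ = fsolve(residual, x0, args=(t,), full_output=True)
    total_nfev += info["nfev"]              # residual evaluations needed this step
    v_prev, x_prev = sol - x_prev, sol      # update the prediction for the next step
print("total residual evaluations:", total_nfev)
```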
A review of methods for assessment of the rate of gastric emptying in the dog and cat: 1898-2002.
Wyse, C A; McLellan, J; Dickie, A M; Sutton, D G M; Preston, T; Yam, P S
2003-01-01
Gastric emptying is the process by which food is delivered to the small intestine at a rate and in a form that optimizes intestinal absorption of nutrients. The rate of gastric emptying is subject to alteration by physiological, pharmacological, and pathological conditions. Gastric emptying of solids is of greater clinical significance because disordered gastric emptying rarely is detectable in the liquid phase. Imaging techniques have the disadvantage of requiring restraint of the animal and access to expensive equipment. Radiographic methods require administration of test meals that are not similar to food. Scintigraphy is the gold standard method for assessment of gastric emptying but requires administration of a radioisotope. Magnetic resonance imaging has not yet been applied for assessment of gastric emptying in small animals. Ultrasonography is a potentially useful, but subjective, method for assessment of gastric emptying in dogs. Gastric tracer methods require insertion of gastric or intestinal cannulae and are rarely applied outside of the research laboratory. The paracetamol absorption test has been applied for assessment of liquid phase gastric emptying in the dog, but requires IV cannulation. The gastric emptying breath test is a noninvasive method for assessment of gastric emptying that has been applied in dogs and cats. This method can be carried out away from the veterinary hospital, but the effects of physiological and pathological abnormalities on the test are not known. Advances in technology will facilitate the development of reliable methods for assessment of gastric emptying in small animals.
Formal Requirements-Based Programming for Complex Systems
NASA Technical Reports Server (NTRS)
Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis
2005-01-01
Computer science as a field has not yet produced a general method to mechanically transform complex computer system requirements into a provably equivalent implementation. Such a method would be one major step towards dealing with complexity in computing, yet it remains the elusive holy grail of system development. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that such tools and methods leave unfilled is that the formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of complex systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations. While other techniques are available, this method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. We illustrate the application of the method to an example procedure from the Hubble Robotic Servicing Mission currently under study and preliminary formulation at NASA Goddard Space Flight Center.
DOT National Transportation Integrated Search
1981-01-01
This report describes a method for locating historic site information using a computer graphics program. If adopted for use by the Virginia Department of Highways and Transportation, this method should significantly reduce the time now required to de...
Needs Assessment in Education: More Discrepancy than Analysis.
ERIC Educational Resources Information Center
Kominski, Edward S.
Significant discrepancies between ideal and real methods of needs assessment need to be rectified. Essential principles for managing an educational assessment have been set down by recognized educators. Experts' recommendations include such requirements as using a clear definition of need (as opposed to want), precise quantifiable methods, an…
Buschmann, Henrik
2016-01-01
The continuing analysis of plant cell division will require additional protein localization studies. This is greatly aided by GFP technology, but plant transformation and the maintenance of transgenic lines can present a significant technical bottleneck. In this chapter I describe a method for the Agrobacterium-mediated genetic transformation of tobacco BY-2 cells. The method allows for the microscopic analysis of fluorescence-tagged proteins in dividing cells within 2 days of starting a coculture. This transient transformation procedure requires only standard laboratory equipment. It is hoped that this rapid method would aid researchers conducting live-cell localization studies in plant mitosis and cytokinesis.
Mori, Genki; Nonaka, Satoru; Oda, Ichiro; Abe, Seiichiro; Suzuki, Haruhisa; Yoshinaga, Shigetaka; Nakajima, Takeshi; Saito, Yutaka
2015-01-01
Background and study aims: Endoscopic submucosal dissection (ESD) using insulation-tipped knives (IT knives) to treat gastric lesions located on the greater curvature of the gastric body remains technically challenging because of the associated bleeding, control of which can be difficult and time consuming. To eliminate these difficulties, we developed a novel strategy which we have called the “near-side approach method” and assessed its utility. Patients and methods: We reviewed patients who underwent ESD for solitary early gastric cancer located on the greater curvature of the gastric body from January 2003 to September 2014. The technical results of ESD were compared between the group treated with the novel near-side approach method and the group treated with the conventional method. Results: This study included 238 patients with 238 lesions, 118 of which were removed using the near-side approach method and 120 of which were removed using the conventional method. The median procedure time was 92 minutes for the near-side approach method and 120 minutes for the conventional method. The procedure time was significantly shorter in the near-side approach method arm. Although the procedure time required by an experienced endoscopist was not significantly different between the two groups (100 vs. 110 minutes), the near-side approach group showed significantly shorter procedure time for a less-experienced endoscopist (90 vs. 120 minutes). Conclusions: The near-side approach method appears to require less time to complete gastric ESD than the conventional method using IT knives for technically challenging lesions located on the greater curvature of the gastric body, especially if the procedure is performed by less-experienced endoscopists. PMID:26528496
Hierarchical matrices implemented into the boundary integral approaches for gravity field modelling
NASA Astrophysics Data System (ADS)
Čunderlík, Róbert; Vipiana, Francesca
2017-04-01
Boundary integral approaches applied for gravity field modelling have been recently developed to solve the geodetic boundary value problems numerically, or to process satellite observations, e.g. from the GOCE satellite mission. In order to obtain numerical solutions of "cm-level" accuracy, such approaches require a very refined level of discretization or resolution. This leads to enormous memory requirements that need to be reduced. An implementation of Hierarchical Matrices (H-matrices) can significantly reduce the numerical complexity of these approaches. The main idea of the H-matrices is based on an approximation of the entire system matrix that is split into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation. This allows reducing memory requirements significantly while improving the efficiency. The poster presents our preliminary results of implementations of the H-matrices into the existing boundary integral approaches based on the boundary element method or the method of fundamental solutions.
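A minimal sketch of the core idea behind H-matrices: off-diagonal blocks coupling well-separated point clusters are stored in low-rank factorized form (here obtained with a truncated SVD), which cuts the memory for such a block from O(mn) to O(k(m+n)). The block partitioning, kernel, and tolerance are illustrative assumptions.

```python
import numpy as np

def compress_block(block, tol=1e-6):
    """Return a low-rank factorization (U, V) with block ~= U @ V, or None
    if the block is not numerically low-rank enough to be worth compressing."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))           # numerical rank at a relative tolerance
    if k >= min(block.shape) // 2:
        return None                           # keep in standard (dense) representation
    return U[:, :k] * s[:k], Vt[:k, :]        # factorized representation

# Far-field block of a 1/r kernel between two well-separated point clusters (illustrative).
x = np.linspace(0.0, 1.0, 300)[:, None]
y = np.linspace(5.0, 6.0, 300)[None, :]
far_block = 1.0 / np.abs(x - y)
U, V = compress_block(far_block)
print(far_block.size, "->", U.size + V.size, "stored entries")
```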
NASA Technical Reports Server (NTRS)
Hynes, Charles S.; Hardy, Gordon H.; Sherry, Lance
2007-01-01
Volume I of this report presents a new method for synthesizing hybrid systems directly from design requirements, and applies the method to design of a hybrid system for longitudinal control of transport aircraft. The resulting system satisfies general requirements for safety and effectiveness specified a priori, enabling formal validation to be achieved. Volume II contains seven appendices intended to make the report accessible to readers with backgrounds in human factors, flight dynamics and control, and formal logic. Major design goals are (1) system design integrity based on proof of correctness at the design level, (2) significant simplification and cost reduction in system development and certification, and (3) improved operational efficiency, with significant alleviation of human-factors problems encountered by pilots in current transport aircraft. This report provides for the first time a firm technical basis for criteria governing design and certification of avionic systems for transport aircraft. It should be of primary interest to designers of next-generation avionic systems.
System and Method for Multi-Wavelength Optical Signal Detection
NASA Technical Reports Server (NTRS)
McGlone, Thomas D. (Inventor)
2017-01-01
The system and method for multi-wavelength optical signal detection enables the detection of optical signal levels significantly below those processed at the discrete circuit level by the use of mixed-signal processing methods implemented with integrated circuit technologies. The present invention is configured to detect and process small signals, which enables the reduction of the optical power required to stimulate detection networks, and lowers the required laser power to make specific measurements. The present invention provides an adaptation of active pixel networks combined with mixed-signal processing methods to provide an integer representation of the received signal as an output. The present invention also provides multi-wavelength laser detection circuits for use in various systems, such as a differential absorption light detection and ranging system.
Zhang, Guodong; Thau, Eve; Brown, Eric W; Hammack, Thomas S
2013-12-01
The current FDA Bacteriological Analytical Manual (BAM) method for the detection of Salmonella in eggs requires 2 wk to complete. The objective of this project was to improve the BAM method for the detection and isolation of Salmonella in whole shell eggs. A novel protocol, using 1,000 g of liquid eggs for direct preenrichment with 2 L of tryptic soy broth (TSB) followed by enrichment using Rappaport-Vassiliadis and Tetrathionate broths, was compared with the standard BAM method, which requires 96 h room temperature incubation of whole shell egg samples followed by preenrichment in TSB supplemented with FeSO4. Four Salmonella ser. Enteritidis (4 phage types) and one Salmonella ser. Heidelberg isolates were used in the study. Bulk inoculated pooled liquid eggs, weighing 52 or 56 kg (approximately 1,100 eggs) were used in each trial. Twenty 1,000-g test portions were withdrawn from the pooled eggs for both the alternative and the reference methods. Test portions were inoculated with Salmonella at 1 to 5 cfu/1,000 g eggs. Two replicates were performed for each isolate. In the 8 trials conducted with Salmonella ser. Enteritidis, the alternative method was significantly (P < 0.05) more productive than the reference method in 3 trials, and significantly (P < 0.05) less productive than the reference method in 1 trial. There were no significant (P < 0.05) differences between the 2 methods for the other 4 trials. For Salmonella ser. Heidelberg, combined data from 2 trials showed the alternative method was significantly (P < 0.05) more efficient than the BAM method. We have concluded that the alternative method, described herein, has the potential to replace the current BAM culture method for detection and isolation of Salmonella from shell eggs based on the following factors: 1) the alternative method is 4 d shorter than the reference method; 2) it uses regular TSB instead of the more complicated TSB supplemented with FeSO4; and 3) it was equivalent or superior to the reference method in 9 out of 10 trials for the detection of Salmonella in shell eggs.
Costs of measuring leaf area index of corn
NASA Technical Reports Server (NTRS)
Daughtry, C. S. T.; Hollinger, S. E.
1984-01-01
The magnitude of plant-to-plant variability of leaf area of corn plants selected from uniform plots was examined and four representative methods for measuring leaf area index (LAI) were evaluated. The number of plants required and the relative costs for each sampling method were calculated to detect 10, 20, and 50% differences in LAI using 0.05 and 0.01 tests of significance and a 90% probability of success (beta = 0.1). The natural variability of leaf area per corn plant was nearly 10%. Additional variability or experimental error may be introduced by the measurement technique employed and by nonuniformity within the plot. Direct measurement of leaf area with an electronic area meter had the lowest CV, required that the fewest plants be sampled, but required approximately the same amount of time as the leaf area/weight ratio method to detect comparable differences. Indirect methods based on measurements of length and width of leaves required more plants but less total time than the direct method. Unless the coefficients for converting length and width to area are verified frequently, the indirect methods may be biased. When true differences in LAI among treatments exceed 50% of mean, all four methods are equal. The method of choice depends on the resources available, the differences to be detected, and what additional information, such as leaf weight or stalk weight, is also desired.
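A sketch of the kind of sample-size arithmetic behind such a comparison: the number of plants needed per treatment to detect a given relative difference with a two-sample test at significance level alpha and power 1-beta, given the method's coefficient of variation. The exact formula the authors used may differ.

```python
from math import ceil
from scipy.stats import norm

def plants_required(cv, rel_diff, alpha=0.05, power=0.90):
    """Approximate n per treatment for a two-sample comparison of means,
    with variability expressed as a coefficient of variation (cv) and the
    effect as a relative difference between means (rel_diff)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * ((z_a + z_b) * cv / rel_diff) ** 2)

for d in (0.10, 0.20, 0.50):                         # 10%, 20%, 50% differences in LAI
    print(d, plants_required(cv=0.10, rel_diff=d))   # cv of about 10% plant-to-plant variability
```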
A method to measure internal contact angle in opaque systems by magnetic resonance imaging.
Zhu, Weiqin; Tian, Ye; Gao, Xuefeng; Jiang, Lei
2013-07-23
Internal contact angle is an important parameter for internal wettability characterization. However, due to the limitations of optical imaging, the methods available for contact angle measurement are only suitable for transparent or open systems. For most practical situations that require contact angle measurement in opaque or enclosed systems, the traditional methods are not effective. Based upon this requirement, a method suitable for contact angle measurement in nontransparent systems is developed by employing MRI technology. In this article, the method is demonstrated by measuring internal contact angles in opaque cylindrical tubes. It proves that the method also shows great feasibility in transparent situations and opaque capillary systems. By using the method, contact angles in opaque systems could be measured successfully, which is significant for understanding the wetting behaviors in nontransparent systems and calculating interfacial parameters in enclosed systems.
Hanson, Kayla R; Pigott, Armi M; J Linklater, Andrew K
2017-10-15
OBJECTIVE To determine the incidence of blood transfusion, mortality rate, and factors associated with transfusion in dogs and cats undergoing liver lobectomy. DESIGN Retrospective case series. ANIMALS 63 client-owned dogs and 9 client-owned cats that underwent liver lobectomy at a specialty veterinary practice from August 2007 through June 2015. PROCEDURES Medical records were reviewed and data extracted regarding dog and cat signalment, hematologic test results before and after surgery, surgical method, number and identity of lobes removed, concurrent surgical procedures, hemoabdomen detected during surgery, incidence of blood transfusion, and survival to hospital discharge (for calculation of mortality rate). Variables were compared between patients that did and did not require transfusion. RESULTS 11 of 63 (17%) dogs and 4 of 9 cats required a blood transfusion. Mortality rate was 8% for dogs and 22% for cats. Pre- and postoperative PCV and plasma total solids concentration were significantly lower and mortality rate significantly higher in dogs requiring transfusion than in dogs not requiring transfusion. Postoperative PCV was significantly lower in cats requiring transfusion than in cats not requiring transfusion. No significant differences in any other variable were identified between dogs and cats requiring versus not requiring transfusion. CONCLUSIONS AND CLINICAL RELEVANCE Dogs and cats undergoing liver lobectomy had a high requirement for blood transfusion, and a higher requirement for transfusion should be anticipated in dogs with perioperative anemia and cats with postoperative anemia. Veterinarians performing liver lobectomies in dogs and cats should have blood products readily available.
Method for rapidly producing microporous and mesoporous materials
Coronado, Paul R.; Poco, John F.; Hrubesh, Lawrence W.; Hopper, Robert W.
1997-01-01
An improved, rapid process is provided for making microporous and mesoporous materials, including aerogels and pre-ceramics. A gel or gel precursor is confined in a sealed vessel to prevent structural expansion of the gel during the heating process. This confinement allows the gelation and drying processes to be greatly accelerated, and significantly reduces the time required to produce a dried aerogel compared to conventional methods. Drying may be performed either by subcritical drying with a pressurized fluid to expel the liquid from the gel pores or by supercritical drying. The rates of heating and decompression are significantly higher than for conventional methods.
Justification of Estimates for Fiscal Year 1984 Submitted to Congress.
1983-01-01
sponsoring different aspects related to unique manufacturing methods than those pursued by DARPA, and duplication of effort is prevented by direct...weapons systems. Rapid and economical methods of satisfying these requirements must significantly precede weapons systems developments to prevent... methods for obtaining accurate and efficient geodetic measurements. Also, a major advanced sensor/G&G data collection capability is being undertaken by DNA
Individual snag detection using neighborhood attribute filtered airborne lidar data
Brian M. Wing; Martin W. Ritchie; Kevin Boston; Warren B. Cohen; Michael J. Olsen
2015-01-01
The ability to estimate and monitor standing dead trees (snags) has been difficult due to their irregular and sparse distribution, often requiring intensive sampling methods to obtain statistically significant estimates. This study presents a new method for estimating and monitoring snags using neighborhood attribute filtered airborne discrete-return lidar data. The...
Current methods for screening, testing and monitoring endocrine-disrupting chemicals (EDCs) rely substantially upon moderate- to long-term assays that can, in some instances, require significant numbers of animals. Recent developments in the areas of in vitro testing...
A neural network method to correct bidirectional effects in water-leaving radiance
NASA Astrophysics Data System (ADS)
Fan, Yongzhen; Li, Wei; Voss, Kenneth J.; Gatebe, Charles K.; Stamnes, Knut
2017-02-01
The standard method to convert the measured water-leaving radiances from the observation direction to the nadir direction developed by Morel and coworkers requires knowledge of the chlorophyll concentration (CHL). Also, the standard method was developed for open ocean water, which makes it unsuitable for turbid coastal waters. We introduce a neural network method to convert the water-leaving radiance (or the corresponding remote sensing reflectance) from the observation direction to the nadir direction. This method does not require any prior knowledge of the water constituents or the inherent optical properties (IOPs). This method is fast, accurate and can be easily adapted to different remote sensing instruments. Validation using NuRADS measurements in different types of water shows that this method is suitable for both open ocean and coastal waters. In open ocean or chlorophyll-dominated waters, our neural network method produces corrections similar to those of the standard method. In turbid coastal waters, especially sediment-dominated waters, a significant improvement was obtained compared to the standard method.
Strategic Leadership: A Model for Promoting, Sustaining, and Advancing Institutional Significance
ERIC Educational Resources Information Center
Scott, Kenneth E.; Johnson, Mimi
2011-01-01
This article presents the methods, materials, and manpower required to create a strategic leadership program for promoting, sustaining, and advancing institutional significance. The functionality of the program is based on the Original Case Study Design (OCSD) methodology, in which participants are given actual college issues to investigate from a…
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly surpasses that of previous still-image coding standards such as JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with only a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods in which the complexity depends upon the coded sequence.
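The core idea, evaluating only the most probable intra prediction modes in the rate-distortion loop, can be illustrated with a generic mode-pruning sketch; the probabilities and mode count below are placeholders rather than the paper's belief-propagation estimates.

    def select_candidate_modes(mode_probs, keep=4):
        """Return the indices of the most probable intra prediction modes.

        mode_probs -- estimated probability for each available mode
                      (in the paper these come from belief propagation;
                      here they are simply given as input)
        keep       -- number of modes passed on to full RD optimization
        """
        order = sorted(range(len(mode_probs)), key=lambda m: mode_probs[m], reverse=True)
        return order[:keep]

    def best_mode(block, candidates, rd_cost):
        """Exhaustive RD search restricted to the pruned candidate set."""
        return min(candidates, key=lambda m: rd_cost(block, m))

    # Hypothetical 9-mode example (e.g. 4x4 intra prediction):
    probs = [0.30, 0.22, 0.15, 0.10, 0.08, 0.06, 0.04, 0.03, 0.02]
    print(select_candidate_modes(probs, keep=4))   # -> [0, 1, 2, 3]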
NASA Technical Reports Server (NTRS)
Jeong, Myeong-Jae; Hsu, N. Christina; Kwiatkowska, Ewa J.; Franz, Bryan A.; Meister, Gerhard; Salustro, Clare E.
2012-01-01
The retrieval of aerosol properties from spaceborne sensors requires highly accurate and precise radiometric measurements, thus placing stringent requirements on sensor calibration and characterization. For the Terra/Moderate Resolution Imaging Spectroradiometer (MODIS), the characteristics of the detectors of certain bands, particularly band 8 [(B8); 412 nm], have changed significantly over time, leading to increased calibration uncertainty. In this paper, we explore the possibility of utilizing a cross-calibration method developed for characterizing the Terra/MODIS detectors in the ocean bands by the National Aeronautics and Space Administration Ocean Biology Processing Group to improve aerosol retrieval over bright land surfaces. We found that the Terra/MODIS B8 reflectance corrected using the cross-calibration method resulted in significant improvements for the retrieved aerosol optical thickness when compared with that from the Multi-angle Imaging Spectroradiometer, Aqua/MODIS, and the Aerosol Robotic Network. The method reported in this paper is implemented for the operational processing of the Terra/MODIS Deep Blue aerosol products.
Herzog, Bastian; Lemmer, Hilde; Horn, Harald; Müller, Elisabeth
2014-02-22
Evaluation of the biodegradation potential of xenobiotics, shown here for benzotriazoles (corrosion inhibitors) and sulfamethoxazole (SMX, a sulfonamide antibiotic) by microbial communities and/or pure cultures, normally requires time-intensive and costly LC/GC methods that are, in the case of laboratory setups, not always needed. Because high concentrations are used to apply a high selective pressure on the microbial communities/pure cultures in laboratory setups, a simple UV-absorbance measurement (UV-AM) was developed and validated for screening a large number of setups, requiring almost no preparation and significantly less time and money compared to LC/GC methods. This rapid and easy-to-use method was evaluated by comparing its measured values to LC-UV and GC-MS/MS results. Furthermore, its application for monitoring and screening unknown activated sludge communities (ASC) and mixed pure cultures was tested and shown to detect biodegradation of benzotriazole (BTri), 4- and 5-tolyltriazole (4-TTri, 5-TTri) as well as SMX. In laboratory setups, xenobiotic concentrations above 1.0 mg L(-1) could be detected without any enrichment or preparation after optimization of the method. As UV-AM does not require much preparatory work and can be conducted in 96- or even 384-well plate formats, the number of possible parallel setups and the screening efficiency were significantly increased, while analytical and laboratory costs were reduced to a minimum.
2014-01-01
Background Evaluation of the biodegradation potential of xenobiotics, shown here for benzotriazoles (corrosion inhibitors) and sulfamethoxazole (SMX, a sulfonamide antibiotic) by microbial communities and/or pure cultures, normally requires time-intensive and costly LC/GC methods that are, in the case of laboratory setups, not always needed. Results Because high concentrations are used to apply a high selective pressure on the microbial communities/pure cultures in laboratory setups, a simple UV-absorbance measurement (UV-AM) was developed and validated for screening a large number of setups, requiring almost no preparation and significantly less time and money compared to LC/GC methods. This rapid and easy-to-use method was evaluated by comparing its measured values to LC-UV and GC-MS/MS results. Furthermore, its application for monitoring and screening unknown activated sludge communities (ASC) and mixed pure cultures was tested and shown to detect biodegradation of benzotriazole (BTri), 4- and 5-tolyltriazole (4-TTri, 5-TTri) as well as SMX. Conclusions In laboratory setups, xenobiotic concentrations above 1.0 mg L-1 could be detected without any enrichment or preparation after optimization of the method. As UV-AM does not require much preparatory work and can be conducted in 96- or even 384-well plate formats, the number of possible parallel setups and the screening efficiency were significantly increased, while analytical and laboratory costs were reduced to a minimum. PMID:24558966
Correlation energy extrapolation by many-body expansion
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...
2017-01-09
Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.
Correlation energy extrapolation by many-body expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus
Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.
Computational Issues in Damping Identification for Large Scale Problems
NASA Technical Reports Server (NTRS)
Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.
1997-01-01
Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
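As a generic illustration of the least-squares style of damping identification mentioned above (not the specific algorithms benchmarked in the paper), the sketch below fits a damping matrix C to response data of a small mass-spring system by solving M q'' + C q' + K q = f for C in a least-squares sense; all matrices and signals are synthetic.

    import numpy as np

    def identify_damping(M, K, q, qd, qdd, f):
        """Least-squares estimate of the damping matrix C from response data.

        M, K       -- known mass and stiffness matrices (n x n)
        q, qd, qdd -- displacement, velocity, acceleration histories (T x n)
        f          -- applied force history (T x n)
        Solves  C qd(t) = f(t) - M qdd(t) - K q(t)  in a least-squares sense.
        """
        residual = f - qdd @ M.T - q @ K.T        # T x n
        C_T, *_ = np.linalg.lstsq(qd, residual, rcond=None)
        return C_T.T

    # Tiny synthetic check (values illustrative only)
    rng = np.random.default_rng(0)
    n, T = 3, 400
    M = np.eye(n)
    K = np.diag([4.0, 5.0, 6.0])
    C_true = 0.1 * K + 0.05 * M                   # proportional damping
    q = rng.normal(size=(T, n))
    qd = rng.normal(size=(T, n))
    qdd = rng.normal(size=(T, n))
    f = qdd @ M.T + qd @ C_true.T + q @ K.T
    print(np.allclose(identify_damping(M, K, q, qd, qdd, f), C_true))  # True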
Survey and Method for Determination of Trajectory Predictor Requirements
NASA Technical Reports Server (NTRS)
Rentas, Tamika L.; Green, Steven M.; Cate, Karen Tung
2009-01-01
A survey of air-traffic-management researchers, representing a broad range of automation applications, was conducted to document trajectory-predictor requirements for future decision-support systems. Results indicated that the researchers were unable to articulate a basic set of trajectory-prediction requirements for their automation concepts. Survey responses showed the need to establish a process to help developers determine the trajectory-predictor-performance requirements for their concepts. Two methods for determining trajectory-predictor requirements are introduced. A fast-time simulation method is discussed that captures the sensitivity of a concept to the performance of its trajectory-prediction capability. A characterization method is proposed to provide quicker, yet less precise results, based on analysis and simulation to characterize the trajectory-prediction errors associated with key modeling options for a specific concept. Concept developers can then identify the relative sizes of errors associated with key modeling options, and qualitatively determine which options lead to significant errors. The characterization method is demonstrated for a case study involving future airport surface traffic management automation. Of the top four sources of error, results indicated that the error associated with accelerations to and from turn speeds was unacceptable, the error associated with the turn path model was acceptable, and the error associated with taxi-speed estimation was of concern and needed a higher-fidelity concept simulation to obtain a more precise result.
Wilson, Kate E; Marouga, Rita; Prime, John E; Pashby, D Paul; Orange, Paul R; Crosier, Steven; Keith, Alexander B; Lathe, Richard; Mullins, John; Estibeiro, Peter; Bergling, Helene; Hawkins, Edward; Morris, Christopher M
2005-10-01
Comparative proteomic methods are rapidly being applied to many different biological systems including complex tissues. One pitfall of these methods is that in some cases, such as oncology and neuroscience, tissue complexity requires isolation of specific cell types and sample is limited. Laser microdissection (LMD) is commonly used for obtaining such samples for proteomic studies. We have combined LMD with sensitive thiol-reactive saturation dye labelling of protein samples and 2-D DIGE to identify protein changes in a test system, the isolated CA1 pyramidal neurone layer of a transgenic (Tg) rat carrying a human amyloid precursor protein transgene. Saturation dye labelling proved to be extremely sensitive with a spot map of over 5,000 proteins being readily produced from 5 µg total protein, with over 100 proteins being significantly altered at p < 0.0005. Of the proteins identified, all showed coherent changes associated with transgene expression. It was, however, difficult to identify significantly different proteins using PMF and MALDI-TOF on gels containing less than 500 µg total protein. The use of saturation dye labelling of limiting samples will therefore require the use of highly sensitive MS techniques to identify the significantly altered proteins isolated using methods such as LMD.
NASA Astrophysics Data System (ADS)
Salatino, Maria
2017-06-01
In the current submm and mm cosmology experiments the focal planes are populated by kilopixel transition edge sensors (TESes). Varying incoming power load requires frequent rebiasing of the TESes through standard current-voltage (IV) acquisition. The time required to perform IVs on such large arrays and the resulting transient heating of the bath reduces the sky observation time. We explore a bias step method that significantly reduces the time required for the rebiasing process. This exploits the detectors' responses to the injection of a small square wave signal on top of the dc bias current and knowledge of the shape of the detector transition R(T,I). This method has been tested on two detector arrays of the Atacama Cosmology Telescope (ACT). In this paper, we focus on the first step of the method, the estimate of the TES %Rn.
The Misgav Ladach method for cesarean section compared to the Pfannenstiel method.
Darj, E; Nordström, M L
1999-01-01
The aim of the study was to evaluate the outcome of two different methods of cesarean section (CS). The study was designed as a prospective, randomized, controlled trial. All CS were performed at the University Hospital in Uppsala, Sweden. Fifty women admitted to hospital for a first elective CS were consecutively included in the study. They were randomly allocated to two groups. One group was operated on by the Misgav Ladach method for CS and the other group by the Pfannenstiel method. All operations were performed by the same surgeon. The main outcome measures were duration of operation, amount of bleeding, analgesics required, scar appearance and length of hospitalization. Operating time was significantly different between the two methods, with an average of 12.5 minutes with the Misgav Ladach method and 26 minutes with the Pfannenstiel method (p<0.001). The amount of blood loss differed significantly, with 448 ml and 608 ml respectively (p=0.017). Significantly fewer analgesic injections and tablets (p=0.004) were needed after the Misgav Ladach method. The Misgav Ladach method of CS has advantages over the Pfannenstiel method by being significantly quicker to perform, with a reduced amount of bleeding and diminished postoperative pain. The women were satisfied with the appearance of their scars. In this study no negative effects of the new operation technique were discovered.
Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang
2017-01-01
A rapid, efficient, and economical method for the isolation and purification of islets has been pursued by numerous islet-related researchers. In this study, we compared the advantages and disadvantages of our developed patented method with those of commonly used conventional methods (Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economical method for isolating and purifying murine islet cell clumps. This method overcomes some of the shortcomings of conventional methods, showing a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and should be widely generalized. PMID:28207765
Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang; Baifeng, Li
2017-01-01
A rapid, efficient, and economical method for the isolation and purification of islets has been pursued by numerous islet-related researchers. In this study, we compared the advantages and disadvantages of our developed patented method with those of commonly used conventional methods (Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economical method for isolating and purifying murine islet cell clumps. This method overcomes some of the shortcomings of conventional methods, showing a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and should be widely generalized.
Raman, E Prabhu; Lakkaraju, Sirish Kaushik; Denny, Rajiah Aldrin; MacKerell, Alexander D
2017-06-05
Accurate and rapid estimation of relative binding affinities of ligand-protein complexes is a requirement of computational methods for their effective use in rational ligand design. Of the approaches commonly used, free energy perturbation (FEP) methods are considered one of the most accurate, although they require significant computational resources. Accordingly, it is desirable to have alternative methods of similar accuracy but greater computational efficiency to facilitate ligand design. In the present study relative free energies of binding are estimated for one or two non-hydrogen atom changes in compounds targeting the proteins ACK1 and p38 MAP kinase using three methods. The methods include standard FEP, single-step free energy perturbation (SSFEP) and the site-identification by ligand competitive saturation (SILCS) ligand grid free energy (LGFE) approach. Results show the SSFEP and SILCS LGFE methods to be competitive with or better than the FEP results for the studied systems, with SILCS LGFE giving the best agreement with experimental results. This is supported by additional comparisons with published FEP data on p38 MAP kinase inhibitors. While both the SSFEP and SILCS LGFE approaches require a significant upfront computational investment, they offer a 1000-fold computational savings over FEP for calculating the relative affinities of ligand modifications once those pre-computations are complete. An illustrative example of the potential application of these methods in the context of screening large numbers of transformations is presented. Thus, the SSFEP and SILCS LGFE approaches represent viable alternatives for actively driving ligand design during drug discovery and development. © 2016 Wiley Periodicals, Inc.
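Single-step free energy perturbation reduces, in its simplest form, to the Zwanzig exponential-averaging formula; the sketch below evaluates that estimator from a set of precomputed energy differences (synthetic numbers, purely to illustrate the arithmetic, not the SSFEP or SILCS implementations).

    import numpy as np

    def zwanzig_free_energy(delta_u, kT=0.593):
        """One-sided (Zwanzig) free energy estimate, in the same units as kT.

        delta_u -- energy differences U_B(x) - U_A(x) evaluated on configurations
                   sampled from state A (kcal/mol here; kT ~ 0.593 kcal/mol near
                   room temperature)
        """
        delta_u = np.asarray(delta_u, dtype=float)
        # log-sum-exp form of -kT * ln <exp(-dU/kT)> for numerical stability
        return -kT * (np.logaddexp.reduce(-delta_u / kT) - np.log(delta_u.size))

    # Synthetic example: energy differences drawn around 1 kcal/mol
    rng = np.random.default_rng(1)
    samples = rng.normal(loc=1.0, scale=0.3, size=5000)
    print(round(zwanzig_free_energy(samples), 3))   # near 1.0 - sigma^2/(2 kT)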
Pressure balance cross-calibration method using a pressure transducer as transfer standard
Olson, D; Driver, R. G.; Yang, Y
2016-01-01
Piston gauges or pressure balances are widely used to realize the SI unit of pressure, the pascal, and to calibrate pressure sensing devices. However, their calibration is time-consuming and requires considerable technical expertise. In this paper, we propose an alternate method of performing a piston gauge cross calibration that incorporates a pressure transducer as an immediate in-situ transfer standard. For a sufficiently linear transducer, the requirement to exactly balance the weights on the two pressure gauges under consideration is greatly relaxed. Our results indicate that this method can be employed without a significant increase in measurement uncertainty. Indeed, in the test case explored here, our results agreed with the traditional method within standard uncertainty, which was less than 6 parts per million. PMID:28303167
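A minimal sketch of the cross-calibration arithmetic described above, assuming a linear transducer: readings taken with the transducer connected to the reference piston gauge define a linear response, which is then inverted to assign pressures generated by the gauge under test. Names and numbers are illustrative, not the authors' data.

    import numpy as np

    def fit_linear_response(readings, ref_pressures):
        """Fit reading = a * pressure + b for the transfer-standard transducer."""
        a, b = np.polyfit(ref_pressures, readings, 1)
        return a, b

    def pressure_from_reading(reading, a, b):
        """Invert the fitted response to get pressure from a transducer reading."""
        return (reading - b) / a

    # Illustrative numbers (kPa and arbitrary transducer units)
    ref_p = np.array([20.0, 40.0, 60.0, 80.0, 100.0])    # reference piston gauge
    readings_ref = 0.501 * ref_p + 0.03                   # transducer on reference
    a, b = fit_linear_response(readings_ref, ref_p)

    # Same transducer now reads the gauge under test at nominally similar loads
    readings_test = np.array([10.05, 20.10, 30.08, 40.12, 50.11])
    print(pressure_from_reading(readings_test, a, b))      # assigned pressures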
Integrals for IBS and beam cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burov, A.; /Fermilab
Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in the stochastic cooling, wake fields and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied for the dispersion integral. Some methodical aspects of the IBS analysis are discussed.
Integrals for IBS and Beam Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burov, A.
Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in the stochastic cooling, wake fields and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied for the dispersion integral. Some methodical aspects of the IBS analysis are discussed.
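The abstract does not reproduce its fast-integration scheme, but the dispersion integrals it mentions are Hilbert transforms, which for sampled data are commonly evaluated through the FFT; the sketch below shows that standard approach (scipy's analytic-signal routine), not the author's original method.

    import numpy as np
    from scipy.signal import hilbert

    # Sampled function whose Hilbert transform is known analytically:
    # H[cos](t) = sin(t), so the imaginary part of the analytic signal of cos
    # should reproduce sin.
    t = np.linspace(0, 20 * np.pi, 4096, endpoint=False)
    signal = np.cos(t)

    analytic = hilbert(signal)          # FFT-based analytic signal
    h = np.imag(analytic)               # Hilbert transform of the input

    print(np.max(np.abs(h - np.sin(t))) < 1e-2)   # True (up to end effects)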
Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.
Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J
2015-06-15
Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four random located plots of 0.16 m(2) with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444)L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance component = 6.2), rather than due to pasture (variance component = 0.55) or season (variance component = 0.15). Using the observed distribution of L3, the required sample size (i.e. number of plots per pasture) for sampling a pasture through random plots with a particular precision was simulated. A higher relative precision was acquired when estimating PLC on pastures with a high larval contamination and a low level of aggregation compared to pastures with a low larval contamination when the same sample size was applied. In the future, herbage sampling through random plots across pasture (method 2) seems a promising method to develop further as no significant difference in counts between the methods was found and this method was less time consuming. Copyright © 2015 Elsevier B.V. All rights reserved.
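As a generic illustration of the kind of sample-size simulation mentioned above (aggregated larval counts are often modelled as negative binomial; the contamination level, aggregation parameter and precision target below are placeholders, not the study's fitted values), the sketch estimates how many random plots are needed for the sample mean to fall within a chosen relative precision of the true mean with a given probability.

    import numpy as np

    def plots_needed(mean_l3, k, rel_precision=0.5, prob=0.9,
                     max_plots=200, reps=2000, seed=0):
        """Smallest number of plots whose mean is within rel_precision of the
        true mean with probability >= prob, assuming negative-binomial counts
        (mean mean_l3, aggregation parameter k)."""
        rng = np.random.default_rng(seed)
        p = k / (k + mean_l3)                      # numpy's NB parameterisation
        for n in range(2, max_plots + 1):
            means = rng.negative_binomial(k, p, size=(reps, n)).mean(axis=1)
            ok = np.abs(means - mean_l3) <= rel_precision * mean_l3
            if ok.mean() >= prob:
                return n
        return None

    # Placeholder values: 300 L3 per plot on average, moderate aggregation
    print(plots_needed(mean_l3=300, k=1.0))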
Gamma-ray Full Spectrum Analysis for Environmental Radioactivity by HPGe Detector
NASA Astrophysics Data System (ADS)
Jeong, Meeyoung; Lee, Kyeong Beom; Kim, Kyeong Ja; Lee, Min-Kie; Han, Ju-Bong
2014-12-01
Odyssey, one of NASA's Mars exploration missions, and SELENE (Kaguya), a Japanese lunar orbiting spacecraft, carry Gamma-Ray Spectrometer (GRS) payloads for analyzing radioactive chemical elements of the atmosphere and the surface. Today, gamma-ray spectroscopy with a High-Purity Germanium (HPGe) detector is widely used for activity measurements of natural radionuclides contained in the soil of the Earth. The energy spectra obtained by HPGe detectors have generally been analyzed by means of the Window Analysis (WA) method, in which activity concentrations are determined from the net counts in an energy window around individual peaks. An alternative method, the so-called Full Spectrum Analysis (FSA) method, uses counts not only from full-absorption peaks but also from the contributions of Compton scattering of gamma-rays. Consequently, while the WA method takes a substantial time to obtain a statistically significant result, the FSA method requires a much shorter time to reach the same level of statistical significance. This study presents validation results for the FSA method. We compared the activity concentrations of 40K, 232Th and 238U in soil measured by the WA method and the FSA method, respectively. The gamma-ray spectra of reference materials (RGU, RGTh and KCl) and soil samples were measured by a 120% HPGe detector with a cosmic muon veto detector. From the comparison of activity concentrations between the FSA and the WA, we conclude that the FSA method is validated against the WA method. This study implies that the FSA method can be used in a harsh measurement environment, such as gamma-ray measurement on the Moon, in which the required level of statistical significance must be reached in a much shorter data acquisition time than the WA method allows.
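For contrast with the FSA approach, the window-analysis bookkeeping is simple enough to show directly; the sketch below converts a background-subtracted net peak area into a specific activity using the usual efficiency/emission-probability relation (all numbers are placeholders, not the study's calibration).

    def specific_activity(net_counts, efficiency, emission_prob, live_time_s, mass_kg):
        """Window-analysis specific activity in Bq/kg.

        net_counts    -- background-subtracted counts in the energy window
        efficiency    -- full-energy peak efficiency at the line energy
        emission_prob -- gamma emission probability of the line
        live_time_s   -- live acquisition time in seconds
        mass_kg       -- sample mass in kilograms
        """
        return net_counts / (efficiency * emission_prob * live_time_s * mass_kg)

    # Placeholder example for a 40K-like line (efficiency and counts invented)
    print(round(specific_activity(net_counts=5200, efficiency=0.012,
                                  emission_prob=0.107, live_time_s=86400,
                                  mass_kg=0.45), 1))   # Bq/kg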
Method to Identify Deep Cases Based on Relationships between Nouns, Verbs, and Particles
ERIC Educational Resources Information Center
Ide, Daisuke; Kimura, Masaomi
2016-01-01
Deep cases representing the significant meaning of nouns in sentences play a crucial role in semantic analysis. However, a case tends to be manually identified because it requires understanding the meaning and relationships of words. To address this problem, we propose a method to predict deep cases by analyzing the relationship between nouns,…
Method for rapidly producing microporous and mesoporous materials
Coronado, P.R.; Poco, J.F.; Hrubesh, L.W.; Hopper, R.W.
1997-11-11
An improved, rapid process is provided for making microporous and mesoporous materials, including aerogels and pre-ceramics. A gel or gel precursor is confined in a sealed vessel to prevent structural expansion of the gel during the heating process. This confinement allows the gelation and drying processes to be greatly accelerated, and significantly reduces the time required to produce a dried aerogel compared to conventional methods. Drying may be performed either by subcritical drying with a pressurized fluid to expel the liquid from the gel pores or by supercritical drying. The rates of heating and decompression are significantly higher than for conventional methods. 3 figs.
[Efficacy of the keyword mnemonic method in adults].
Campos, Alfredo; Pérez-Fabello, María José; Camino, Estefanía
2010-11-01
Two experiments were used to assess the efficacy of the keyword mnemonic method in adults. In Experiment 1, immediate and delayed recall (at a one-day interval) were assessed by comparing the results obtained by a group of adults using the keyword mnemonic method in contrast to a group using the repetition method. The mean age of the sample under study was 59.35 years. Subjects were required to learn a list of 16 words translated from Latin into Spanish. Participants who used keyword mnemonics that had been devised by other experimental participants with the same characteristics obtained significantly higher immediate and delayed recall scores than participants in the repetition method. In Experiment 2, other participants had to learn a list of 24 Latin words translated into Spanish by using the keyword mnemonic method reinforced with pictures. Immediate and delayed recall were significantly greater in the keyword mnemonic method group than in the repetition method group.
Statistical Attitude Determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2010-01-01
All spacecraft require attitude determination at some level of accuracy. This can be a very coarse requirement of tens of degrees, in order to point solar arrays at the sun, or a very fine requirement in the milliarcsecond range, as required by the Hubble Space Telescope. A toolbox of attitude determination methods, applicable across this wide range, has been developed over the years. There have been many advances in the thirty years since the publication of Reference, but the fundamentals remain the same. One significant change is that onboard attitude determination has largely superseded ground-based attitude determination, due to the greatly increased power of onboard computers. The availability of relatively inexpensive radiation-hardened microprocessors has led to the development of "smart" sensors, with autonomous star trackers being the first spacecraft application. Another new development is attitude determination using interferometry of radio signals from the Global Positioning System (GPS) constellation. This article reviews both the classic material and these newer developments at approximately the level of Reference, with emphasis on methods suitable for use onboard a spacecraft. We discuss both "single frame" methods that are based on measurements taken at a single point in time, and sequential methods that use information about spacecraft dynamics to combine the information from a time series of measurements.
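One of the classic "single frame" methods such reviews cover is the TRIAD algorithm, which builds an attitude matrix from two vector observations; a minimal NumPy sketch follows (the sun and magnetic-field directions are invented purely for illustration).

    import numpy as np

    def triad(b1, b2, r1, r2):
        """TRIAD attitude matrix A such that b ~= A r.

        b1, b2 -- two measured direction vectors in the spacecraft body frame
        r1, r2 -- the same directions expressed in the reference frame
        The first pair (b1/r1) should come from the more accurate sensor.
        """
        def unit(v):
            v = np.asarray(v, dtype=float)
            return v / np.linalg.norm(v)

        t1b = unit(b1)
        t2b = unit(np.cross(b1, b2))
        t3b = np.cross(t1b, t2b)
        t1r = unit(r1)
        t2r = unit(np.cross(r1, r2))
        t3r = np.cross(t1r, t2r)
        return np.column_stack([t1b, t2b, t3b]) @ np.column_stack([t1r, t2r, t3r]).T

    # Invented example: recover a known rotation from two noiseless observations
    angle = np.deg2rad(30.0)
    A_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    r_sun, r_mag = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.3])
    A_est = triad(A_true @ r_sun, A_true @ r_mag, r_sun, r_mag)
    print(np.allclose(A_est, A_true))   # True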
Wu, Jian; Huang, Su-Qin; Chen, Qing-Lian
2013-01-01
Purpose The purpose of this study was to investigate the influence of chronic virus-related liver disease severity on propofol requirements. Materials and Methods In this study, 48 male patients with chronic hepatitis B infection were divided into three groups according to Child-Turcotte-Pugh classification of liver function (groups A, B, and C with mild, moderate and severe liver disease, respectively). After intubation, propofol concentration was adjusted by ±0.3 µg/mL increments to maintain bispectral index in the range of 40-60. Target propofol concentrations at anesthesia initiation, pre-intubation and pre-incision were recorded. Results The initial concentration used in group C was significantly lower than that used in group A or B (p<0.05), whereas no difference was observed between groups A and B. At pre-intubation, the actual required concentration of propofol increased significantly (3.2 µg/mL) in group A (p<0.05), which led to significant differences between the groups (p<0.05). At pre-incision, the requirements for propofol decreased significantly in both groups A and B (3.0 µg/mL and 2.7 µg/mL, respectively) compared with those at pre-intubation (p<0.05), and were significantly different for all three groups (p<0.05), with group C demonstrating the lowest requirement (2.2 µg/mL). The required concentrations of propofol at pre-incision were similar to those at induction. Conclusion In this study, propofol requirements administered by target-controlled infusion to maintain similar depths of hypnosis were shown to depend on the severity of chronic virus-related liver dysfunction. In other words, patients with the most severe liver dysfunction required the least amount of propofol. PMID:23225825
Comparing data collected by computerized and written surveys for adolescence health research.
Wu, Ying; Newfield, Susan A
2007-01-01
This study assessed whether data-collection formats, computerized versus paper-and-pencil, affect response patterns and descriptive statistics for adolescent health assessment surveys. Youth were assessed as part of a health risk reduction program. Baseline data from 1131 youth were analyzed. Participants completed the questionnaire either by computer (n = 390) or by paper-and-pencil (n = 741). The rate of returned surveys meeting inclusion requirements was 90.6% and did not differ by methods. However, the computerized method resulted in significantly less incompleteness but more identical responses. Multiple regression indicated that the survey methods did not contribute to problematic responses. The two survey methods yielded similar scale internal reliability and descriptive statistics for behavioral and psychological outcomes, although the computerized method elicited higher reports of some risk items such as carrying a knife, beating up a person, selling drugs, and delivering drugs. Overall, the survey method did not produce a significant difference in outcomes. This provides support for program personnel selecting survey methods based on study goals with confidence that the method of administration will not have a significant impact on the outcome.
RADIANCE PROCESS EVALUATION FOR PARTICLE REMOVAL
The microelectronics industry (wafer, flat panel displays, photomasks, and storage media) is transitioning to higher device densities and larger substrate formats. These changes will challenge standard cleaning methods and will require significant increases to the fabricator inf...
Formal methods demonstration project for space applications
NASA Technical Reports Server (NTRS)
Divito, Ben L.
1995-01-01
The Space Shuttle program is cooperating in a pilot project to apply formal methods to live requirements analysis activities. As one of the larger ongoing shuttle Change Requests (CR's), the Global Positioning System (GPS) CR involves a significant upgrade to the Shuttle's navigation capability. Shuttles are to be outfitted with GPS receivers and the primary avionics software will be enhanced to accept GPS-provided positions and integrate them into navigation calculations. Prior to implementing the CR, requirements analysts at Loral Space Information Systems, the Shuttle software contractor, must scrutinize the CR to identify and resolve any requirements issues. We describe an ongoing task of the Formal Methods Demonstration Project for Space Applications whose goal is to find an effective way to use formal methods in the GPS CR requirements analysis phase. This phase is currently under way and a small team from NASA Langley, ViGYAN Inc. and Loral is now engaged in this task. Background on the GPS CR is provided and an overview of the hardware/software architecture is presented. We outline the approach being taken to formalize the requirements, only a subset of which is being attempted. The approach features the use of the PVS specification language to model 'principal functions', which are major units of Shuttle software. Conventional state machine techniques form the basis of our approach. Given this background, we present interim results based on a snapshot of work in progress. Samples of requirements specifications rendered in PVS are offered as illustration. We walk through a specification sketch for the principal function known as GPS Receiver State processing. Results to date are summarized and feedback from Loral requirements analysts is highlighted. Preliminary data are shown comparing issues detected by the formal methods team versus those detected using existing requirements analysis methods. We conclude by discussing our plan to complete the remaining activities of this task.
Variable Structure PID Control to Prevent Integrator Windup
NASA Technical Reports Server (NTRS)
Hall, C. E.; Hodel, A. S.; Hung, J. Y.
1999-01-01
PID controllers are frequently used to control systems requiring zero steady-state error while maintaining requirements for settling time and robustness (gain/phase margins). PID controllers suffer significant loss of performance due to short-term integrator wind-up when used in systems with actuator saturation. We examine several existing and proposed methods for the prevention of integrator wind-up in both continuous and discrete time implementations.
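As a baseline for the wind-up problem discussed above, the sketch shows a discrete PID step with simple conditional integration (clamping) when the actuator saturates; this illustrates conventional anti-windup rather than the variable-structure scheme the paper proposes, and the gains and limits are arbitrary.

    def pid_step(state, error, dt, kp, ki, kd, u_min, u_max):
        """One step of a discrete PID controller with integrator clamping.

        state -- dict holding the running integral and previous error
        Returns the saturated actuator command.
        """
        derivative = (error - state["prev_error"]) / dt
        integral_candidate = state["integral"] + error * dt
        u_unsat = kp * error + ki * integral_candidate + kd * derivative
        u = min(max(u_unsat, u_min), u_max)
        # Anti-windup: accept the new integral only if the output is not
        # saturated, or if the error would drive the output back in range.
        if u == u_unsat or (u_unsat > u_max and error < 0) or (u_unsat < u_min and error > 0):
            state["integral"] = integral_candidate
        state["prev_error"] = error
        return u

    state = {"integral": 0.0, "prev_error": 0.0}
    print(pid_step(state, error=2.0, dt=0.01, kp=1.0, ki=5.0, kd=0.0,
                   u_min=-1.0, u_max=1.0))   # saturates at 1.0, integral held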
Power Generation by Harvesting Ambient Energy with a Micro-Electromagnetic Generator
2009-03-01
more applicable at the micro scale are also being investigated including piezoelectric and electrostatics. Solar energy harvesting is a proven method. It...with IC circuitry. 6.2.7 Piezoelectric Research. In Chapter 2, energy harvesting through the use of piezoelectric materials was briefly discussed. A... piezoelectric harvesters require minimal movement for power generation, whereas an electromagnet generator generally requires significant mechanical motion in
Parveen, Seema; Kaur, Simleen; David, Selwyn A Wilson; Kenney, James L; McCormick, William M; Gupta, Rajesh K
2011-10-19
Most biological products, including vaccines, administered by the parenteral route are required to be tested for sterility at the final container and also at various stages during manufacture. The sterility testing method described in the Code of Federal Regulations (21 CFR 610.12) and the United States Pharmacopoeia (USP, Chapter <71>) is based on the observation of turbidity in liquid culture media due to growth of potential contaminants. We evaluated rapid microbiological methods (RMM) based on detection of growth 1) by adenosine triphosphate (ATP) bioluminescence technology (Rapid Milliflex(®) Detection System [RMDS]), and 2) by CO(2) monitoring technologies (BacT/Alert and the BACTEC systems), as alternate sterility methods. Microorganisms representing Gram negative, Gram positive, aerobic, anaerobic, spore forming, slow growing bacteria, yeast, and fungi were prepared in aliquots of Fluid A or a biological matrix (including inactivated influenza vaccines) to contain approximately 0.1, 1, 10 and 100 colony forming units (CFU) in an inoculum of 10 ml. These preparations were inoculated to the specific media required for the various methods: 1) fluid thioglycollate medium (FTM) and tryptic soy broth (TSB) of the compendial sterility method (both membrane filtration and direct inoculation); 2) tryptic soy agar (TSA), Sabouraud dextrose agar (SDA) and Schaedler blood agar (SBA) of the RMDS; 3) iAST and iNST media of the BacT/Alert system and 4) Standard 10 Aerobic/F and Standard Anaerobic/F media of the BACTEC system. RMDS was significantly more sensitive in detecting various microorganisms at 0.1CFU than the compendial methods (p<0.05), whereas the compendial membrane filtration method was significantly more sensitive than the BACTEC and BacT/Alert methods (p<0.05). RMDS detected all microorganisms significantly faster than the compendial method (p<0.05). BacT/Alert and BACTEC methods detected most microorganisms significantly faster than the compendial method (p<0.05), but took almost the same time to detect the slow growing microorganism P. acnes, compared to the compendial method. RMDS using SBA detected all test microorganisms in the presence of a matrix containing preservative 0.01% thimerosal, whereas the BacT/Alert and BACTEC systems did not consistently detect all the test microorganisms in the presence of 0.01% thimerosal. RMDS was compatible with inactivated influenza vaccines and aluminum phosphate or aluminum hydroxide adjuvants at up to 8 mg/ml without any interference in bioluminescence. RMDS was shown to be acceptable as an alternate sterility method taking 5 days as compared to the 14 days required of the compendial method. Isolation of microorganisms from the RMDS was accomplished by re-incubation of membranes with fresh SBA medium and microbial identification was confirmed using the MicroSEQ Identification System. BacT/Alert and BACTEC systems may be applicable as alternate methods to the compendial direct inoculation sterility method for products that do not contain preservatives or anti-microbial agents. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Lobb, Dan
2017-11-01
One of the most significant problems for space-based spectro-radiometer systems observing Earth from space in the solar spectral band (UV through short-wave IR) is achieving the required absolute radiometric accuracy. Classical methods, for example using one or more sun-illuminated diffusers as reflectance standards, do not generally provide a means of monitoring degradation of the in-flight reference after pre-flight characterisation. Ratioing methods have been proposed that provide monitoring of degradation of solar attenuators in flight, thus in principle allowing much higher confidence in absolute response calibration. Two example methods are described. It is shown that such systems can be designed with relatively small size and without significant additions to the complexity of flight hardware.
NASA Astrophysics Data System (ADS)
Kuschenerus, Mieke; Cullen, Robert
2016-08-01
To ensure the reliability and precision of wave height estimates for future satellite altimetry missions such as Sentinel 6, reliable parameter retrieval algorithms that can extract significant wave heights up to 20 m have to be established, and the retrieval methods need to be validated extensively on a wide range of possible significant wave heights. Although current missions require wave height retrievals up to 20 m, there is little evidence of systematic validation of parameter retrieval methods for sea states with wave heights above 10 m. This paper provides a definition of a set of simulated sea states with significant wave heights up to 20 m that allow simulation of radar altimeter response echoes for extreme sea states in SAR and low-resolution mode. The simulated radar responses are used to derive significant wave height estimates, which can be compared with the initial models, allowing the precision of the applied parameter retrieval methods to be estimated. Thus we establish a validation method for significant wave height retrieval for sea states causing high significant wave heights, to allow improved understanding and planning of future satellite altimetry mission validation.
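A much-simplified version of such a simulate-and-retrieve loop can be written around the error-function leading edge of a conventional (Brown-type) ocean return, whose rise time combines the point-target response width with a term proportional to significant wave height; the sketch below generates idealized low-resolution-mode waveforms and retrieves SWH by least-squares fitting. It is a toy model with assumed constants, not the Sentinel 6 processing.

    import numpy as np
    from scipy.special import erf
    from scipy.optimize import curve_fit

    C = 299792458.0                  # speed of light, m/s
    SIGMA_P = 0.513 * 3.125e-9       # assumed point-target response width, s

    def waveform(t, t0, swh):
        """Idealized Brown-type leading edge for significant wave height swh (m)."""
        sigma_s = swh / (2.0 * C)                       # sea-surface roughness term, s
        sigma_c = np.sqrt(SIGMA_P**2 + sigma_s**2)      # composite rise time
        return 0.5 * (1.0 + erf((t - t0) / (np.sqrt(2.0) * sigma_c)))

    t = np.arange(-64, 64) * 3.125e-9                   # illustrative gate spacing
    true_swh = 15.0
    rng = np.random.default_rng(2)
    measured = waveform(t, 0.0, true_swh) + rng.normal(0, 0.02, t.size)

    popt, _ = curve_fit(waveform, t, measured, p0=[0.0, 5.0])
    print(round(popt[1], 2))                            # retrieved SWH, close to 15 m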
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan
2016-04-28
The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.
Method and apparatus for measuring irradiated fuel profiles
Lee, David M.
1982-01-01
A new apparatus is used to substantially instantaneously obtain a profile of an object, for example a spent fuel assembly, which profile (when normalized) has unexpectedly been found to be substantially identical to the normalized profile of the burnup monitor Cs-137 obtained with a germanium detector. That profile can be used without normalization in a new method of identifying and monitoring in order to determine for example whether any of the fuel has been removed. Alternatively, two other new methods involve calibrating that profile so as to obtain a determination of fuel burnup (which is important for complying with safeguards requirements, for utilizing fuel to an optimal extent, and for storing spent fuel in a minimal amount of space). Using either of these two methods of determining burnup, one can reduce the required measurement time significantly (by more than an order of magnitude) over existing methods, yet retain equal or only slightly reduced accuracy.
Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost
2016-01-01
The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167
Schmitz, C; Ansmann, L; Ernstmann, N
2015-07-01
Introduction: The importance of breast cancer patients (BPs) being supplied with sufficient information is well known. This study investigated the unfulfilled psychosocial information requirements of multimorbid BPs. Methods: This study records the unfulfilled psychosocial information requirements of 4166 patients, who were treated at one of the fifty breast centres in North Rhine Westphalia. The Cologne patient questionnaire for breast cancer 2.0 included in the postal survey following hospital stays records the information requirements using an adapted version of the "Cancer patient information needs" scale. Through a univariate analysis using the χ 2 test, it was investigated whether multimorbid BPs had significantly different psychosocial information requirements than BPs without further concomitant illnesses. Results: In general, it transpired that BPs had relatively low unfulfilled information requirements regarding work (20.7 %), everyday life (26.8 %), illness (27.4 %) and treatment (35.7 %), though such requirements were higher when it came to health-related behaviour (54.2 %). Multimorbid BPs had significantly lower unfulfilled information requirements regarding work and significantly larger ones regarding treatment in comparison to BPs without concomitant illnesses. Renal diseases and concomitant mental illnesses were associated with particularly high information requirements (p < 0.05). Conclusion: The results of our study should clarify the complexity and heterogeneity of information requirements of breast cancer patients in oncological care and should help to design the supply of information to be more patient-oriented.
Schmitz, C.; Ansmann, L.; Ernstmann, N.
2015-01-01
Introduction: The importance of breast cancer patients (BPs) being supplied with sufficient information is well known. This study investigated the unfulfilled psychosocial information requirements of multimorbid BPs. Methods: This study records the unfulfilled psychosocial information requirements of 4166 patients, who were treated at one of the fifty breast centres in North Rhine Westphalia. The Cologne patient questionnaire for breast cancer 2.0 included in the postal survey following hospital stays records the information requirements using an adapted version of the “Cancer patient information needs” scale. Through a univariate analysis using the χ2 test, it was investigated whether multimorbid BPs had significantly different psychosocial information requirements than BPs without further concomitant illnesses. Results: In general, it transpired that BPs had relatively low unfulfilled information requirements regarding work (20.7 %), everyday life (26.8 %), illness (27.4 %) and treatment (35.7 %), though such requirements were higher when it came to health-related behaviour (54.2 %). Multimorbid BPs had significantly lower unfulfilled information requirements regarding work and significantly larger ones regarding treatment in comparison to BPs without concomitant illnesses. Renal diseases and concomitant mental illnesses were associated with particularly high information requirements (p < 0.05). Conclusion: The results of our study should clarify the complexity and heterogeneity of information requirements of breast cancer patients in oncological care and should help to design the supply of information to be more patient-oriented. PMID:26257407
Song, Young Kyoung; Hong, Sang Hee; Jang, Mi; Han, Gi Myung; Rani, Manviri; Lee, Jongmyoung; Shim, Won Joon
2015-04-15
The analysis of microplastics in various environmental samples requires the identification of microplastics from natural materials. The identification technique lacks a standardized protocol. Herein, stereomicroscope and Fourier transform infrared spectroscope (FT-IR) identification methods for microplastics (<1mm) were compared using the same samples from the sea surface microlayer (SML) and beach sand. Fragmented microplastics were significantly (p<0.05) underestimated and fiber was significantly overestimated using the stereomicroscope both in the SML and beach samples. The total abundance by FT-IR was higher than by microscope both in the SML and beach samples, but they were not significantly (p>0.05) different. Depending on the number of samples and the microplastic size range of interest, the appropriate identification method should be determined; selecting a suitable identification method for microplastics is crucial for evaluating microplastic pollution. Copyright © 2015 Elsevier Ltd. All rights reserved.
Besalú, Emili
2016-01-01
The Superposing Significant Interaction Rules (SSIR) method is described. It is a general combinatorial and symbolic procedure able to rank compounds belonging to combinatorial analogue series. The procedure generates structure-activity relationship (SAR) models and also serves as an inverse SAR tool. The method is fast and can deal with large databases. SSIR operates from statistical significances calculated from the available library of compounds and according to the previously attached molecular labels of interest or non-interest. The required symbolic codification allows dealing with almost any combinatorial data set, even in a confidential manner, if desired. The application example categorizes molecules as binding or non-binding, and consensus ranking SAR models are generated from training and two distinct cross-validation methods: leave-one-out and balanced leave-two-out (BL2O), the latter being suited for the treatment of binary properties. PMID:27240346
Requirements for transportation of fast pyrolysis bio-oil in Finland
NASA Astrophysics Data System (ADS)
Karhunen, Antti; Laihanen, Mika; Ranta, Tapio
2016-11-01
The purpose of this paper is to discuss the requirements and challenges of transporting pyrolysis oil in Finland. Pyrolysis oil is a new type of renewable liquid fuel that can be utilised in applications such as heat and electricity production, but it has never been transported on a large scale in Finland. Possible options are transport by road, rail and waterway. The most significant transportation requirements arise from the acidity and high density of pyrolysis oil, which impose constraints on materials and transport equipment. The study described here shows that regular domestic transportation of pyrolysis oil is most reasonably operated with tank trucks. Rail-based transport may have potential for fixed domestic routes, and transport by water could be utilised for exporting. All transportation methods have limitations and advantages relative to each other. Ultimately, the locations of the production site and end users will determine the most suitable transport method.
Kinde, Hailu; Goodluck, Helen A; Pitesky, Maurice; Friend, Tom D; Campbell, James A; Hill, Ashley E
2015-12-01
Single swabs (cultured individually) are currently used in the Food and Drug Administration (FDA) official method for sampling the environment of commercial laying hens for the detection of Salmonella enterica ssp. serovar Enteritidis (Salmonella Enteritidis). The FDA has also granted provisional acceptance of the National Poultry Improvement Plan's (NPIP) Salmonella isolation and identification methodology for samples taken from table-egg layer flock environments. The NPIP method, as with the FDA method, requires single-swab culturing for the environmental sampling of laying houses for Salmonella Enteritidis. The FDA culture protocol requires a multistep culture enrichment broth and is more labor intensive than the NPIP culture protocol, which requires a single enrichment broth. The main objective of this study was to compare the FDA single-swab culturing protocol with the NPIP culturing protocol using a four-swab pool scheme. Single- and multi-laboratory testing of replicate manure drag swab sets (n = 525 and 672, respectively) collected from a Salmonella Enteritidis-free commercial poultry flock was performed by artificially contaminating swabs with either Salmonella Enteritidis phage type 4, 8, or 13a at one of two inoculation levels: low, x¯ = 2.5 CFU (range 2.5-2.7), or medium, x¯ = 10.0 CFU (range 7.5-12). For each replicate, testing was conducted on a single swab (inoculated), a set of two swabs (one inoculated and one uninoculated), and a set of four swabs (one inoculated and three uninoculated), using either the FDA or the NPIP culture method. For swabs inoculated with phage type 8, the NPIP method was more efficient (P < 0.05) than the reference method for all swab sets at both inoculation levels. The single swabs in the NPIP method were significantly (P < 0.05) better than four-swab pools in detecting Salmonella Enteritidis at the lower inoculation level. In the collaborative study (n = 13 labs) using swabs inoculated with Salmonella Enteritidis phage type 13a, there was no significant difference (P > 0.05) between the FDA method (single swabs) and the pooled NPIP method (four-swab pools). The study concludes that the pooled NPIP method is not significantly different from the FDA method for the detection of Salmonella Enteritidis in drag swabs in commercial poultry laying houses. Consequently, based on the FDA's Salmonella Enteritidis rule for equivalency of different methods, the pooled NPIP method should be considered equivalent. Furthermore, the pooled NPIP method was more efficient and cost effective.
Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V
2003-12-15
Reducing water to hydrogen gas with zinc or uranium metal for determining the D/H ratio is both tedious and time consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method, to that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained using the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to that of similar methods. The data demonstrated that the Zn-reduction method could be replaced by the Pt-equilibration method when TEE was estimated using the "multi-point" technique. Furthermore, D equilibration time was significantly reduced.
Technical parameters for specifying imagery requirements
NASA Technical Reports Server (NTRS)
Coan, Paul P.; Dunnette, Sheri J.
1994-01-01
Providing visual information acquired from remote events to various operators, researchers, and practitioners has become progressively more important as the application of special skills in alien or hazardous situations increases. To provide an understanding of the technical parameters required to specify imagery, we have identified, defined, and discussed seven salient characteristics of images: spatial resolution, linearity, luminance resolution, spectral discrimination, temporal discrimination, edge definition, and signal-to-noise ratio. We then describe a generalized imaging system and identify how various parts of the system affect the image data. To emphasize the different applications of imagery, we have contrasted the common television system with the significant parameters of a televisual imaging system for technical applications. Finally, we have established a method by which the required visual information can be specified by describing certain technical parameters which are directly related to the information content of the imagery. This method requires the user to complete a form listing all pertinent data requirements for the imagery.
Wahab, Bashirat A; Adebowale, Abdul-Rasaq A; Sanni, Silifat A; Sobukola, Olajide P; Obadina, Adewale O; Kajihausa, Olatundun E; Adegunwa, Mojisola O; Sanni, Lateef O; Tomlins, Keith
2016-01-01
The study investigated the functional properties of HQYF (high-quality yam flour) from tubers of four dioscorea species. The tubers were processed into HQYF using two pretreatments (potassium metabisulphite: 0.28%, 15 min; blanching: 70°C, 15 min) and drying methods (cabinet: 60°C, 48 h; sun drying: 3 days). Significant differences (P < 0.05) were observed in pasting characteristics of flours among the four species. The drying method significantly affected only the peak viscosity. The interactive effect of species, pretreatment, and drying methods on the functional properties was significant (P < 0.05) except for emulsification capacity, angle of repose, and least gelation concentration. The significant variation observed in most of the functional properties of the HQYF could contribute significantly to breeding programs of the yam species for diverse food applications. The pastes of flour from Dioscorea dumetorum pretreated with potassium metabisulphite and dried under a cabinet dryer were stable compared to other samples, hence will have better applications in products requiring lower retrogradation during freeze/thaw cycles.
Identification of significant features by the Global Mean Rank test.
Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph
2014-01-01
With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while favorably scaling with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data is accounted for without the need of imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similar or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
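The abstract above describes the ingredients of the test (per-replicate ranks, a mean-rank statistic, internal FDR control, tolerance of missing values) but not its exact formulas. For illustration only, the following Python sketch implements a mean-rank-style test under explicit assumptions: scaled ranks treated as approximately uniform under the null, a normal approximation for their mean, and Benjamini-Hochberg control of the false discovery rate. It is not the published MeanRank implementation.

```python
import numpy as np
from scipy import stats

def mean_rank_test(log_ratios):
    """Rank-based one-sample test for regulated features.

    log_ratios: (n_features, n_replicates) array, e.g. log2 fold changes;
    NaN entries (missing values) are ignored rather than imputed.
    Returns mean ranks, approximate two-sided p-values and BH q-values.
    """
    n_feat, n_rep = log_ratios.shape
    ranks = np.full(log_ratios.shape, np.nan)
    for j in range(n_rep):
        col = log_ratios[:, j]
        ok = ~np.isnan(col)
        # rank features within each replicate and scale to (0, 1)
        ranks[ok, j] = (stats.rankdata(col[ok]) - 0.5) / ok.sum()

    n_obs = np.sum(~np.isnan(ranks), axis=1)      # replicates observed per feature
    mean_rank = np.nanmean(ranks, axis=1)

    # Under the null each scaled rank is roughly Uniform(0,1), so the mean of
    # n_obs of them is approximately Normal(0.5, 1/(12*n_obs)).
    z = (mean_rank - 0.5) * np.sqrt(12.0 * n_obs)
    p = 2.0 * stats.norm.sf(np.abs(z))            # two-sided: up- or down-regulation

    # Benjamini-Hochberg FDR control
    order = np.argsort(p)
    scaled = p[order] * n_feat / np.arange(1, n_feat + 1)
    q = np.empty_like(p)
    q[order] = np.minimum.accumulate(scaled[::-1])[::-1]
    return mean_rank, p, np.clip(q, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=(1000, 4))
    data[:50] += 2.0                               # 50 truly up-regulated features
    data[rng.random(data.shape) < 0.1] = np.nan    # roughly 10% missing values
    _, p, q = mean_rank_test(data)
    print("features called at q < 0.05:", int((q < 0.05).sum()))
```

The permutation-free normal approximation keeps the sketch short; the published test differs in how it calibrates significance and estimates the false discovery rate.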
Progress in Development of Methods in Bone Densitometry
NASA Technical Reports Server (NTRS)
Whedon, G. D.; Neumann, William F.; Jenkins, Dale W.
1966-01-01
The effects of weightlessness and decreased activity on the astronaut's musculoskeletal system during prolonged space flight missions are of concern to NASA. This problem was anticipated from the knowledge that human subjects lose significant quantities of calcium from the skeleton during periods of bedrest, immobilization, and water immersion. An accurate method of measuring changes in the mineral content of the skeleton is required not only in the space program but also in the biological, medical, and dental fields for mineral metabolism studies and for studying various pathological conditions of the skeleton and teeth. This is a difficult measurement requiring the coordinated efforts of physiologists, biophysicists, radiologists, and clinicians. The densitometry methods reported in this conference which have been used or are being developed include X-ray, beta-excited X-rays, radioisotopes, sonic vibration, and neutron activation analysis. Studies in the Gemini, Biosatellite, and Apollo flights use the X-ray bone densitometry method, which requires taking X-rays before and after the flights. An in-flight method of bone densitometry would be valuable, and the use of radioisotope sources has been suggested. Many advances in bone densitometry have been made in the last five years, and the urgency of the requirement makes this working conference timely and valuable. In such a rapidly developing field, with investigators working independently in a variety of scientific disciplines, a working conference is of great value in exchanging information and ideas, critically evaluating approaches and methods, and pointing out new research pathways.
Hills, Rebecca A.; Allwes, Deborah; Rasmussen, Lisa
2013-01-01
Objectives Meningitis and bacteremia due to Neisseria meningitidis are rare but potentially deadly diseases that can be prevented with immunization. Beginning in 2008, Arizona school immunization requirements were amended to include immunization of children aged 11 years or older with meningococcal vaccine before entering the sixth grade. We describe patterns in meningococcal vaccine uptake surrounding these school-entry requirement changes in Arizona. Methods We used immunization records from the Arizona State Immunization Information System (ASIIS) to compare immunization rates in 11- and 12-year-olds. We used principal component analysis and hierarchical cluster analysis to identify and analyze demographic variables reported by the 2010 U.S. Census. Results Adolescent meningococcal immunization rates in Arizona increased after implementation of statewide school-entry immunization requirements. The increase in meningococcal vaccination rates among 11- and 12-year-olds from 2007 to 2008 was statistically significant (p<0.0001). All demographic groups had significantly higher odds of on-schedule vaccination after the school-entry requirement change (odds ratio range = 5.57 to 12.81, p<0.0001). County demographic factors that were associated with lower odds of on-schedule vaccination included higher poverty, more children younger than 18 years of age, fewer high school graduates, and a higher proportion of Native Americans. Conclusions This analysis suggests that implementation of school immunization requirements resulted in increased meningococcal vaccination rates in Arizona, with degree of response varying by demographic profile. ASIIS was useful for assessing changes in immunization rates over time. Further study is required to identify methods to control for population overestimates in registry data. PMID:23277658
47 CFR 76.54 - Significantly viewed signals; method to be followed for special showings.
Code of Federal Regulations, 2010 CFR
2010-10-01
Behavior of drilled shafts with high-strength reinforcement and casing.
DOT National Transportation Integrated Search
2015-09-01
Drilled shafts provide significant geotechnical resistance for support of highway bridges, and are used throughout the states of Oregon and Washington to meet their structural foundation requirements. Due to changes in construction methods and poor...
Emerging technologies in medical applications of minimum volume vitrification
Zhang, Xiaohui; Catalano, Paolo N; Gurkan, Umut Atakan; Khimji, Imran; Demirci, Utkan
2011-01-01
Cell/tissue biopreservation has broad public health and socio-economic impact affecting millions of lives. Cryopreservation technologies provide an efficient way to preserve cells and tissues targeting the clinic for applications including reproductive medicine and organ transplantation. Among these technologies, vitrification has displayed significant improvement in post-thaw cell viability and function by eliminating harmful effects of ice crystal formation compared to the traditional slow freezing methods. However, high cryoprotectant agent concentrations are required, which induces toxicity and osmotic stress to cells and tissues. It has been shown that vitrification using small sample volumes (i.e., <1 μl) significantly increases cooling rates and hence reduces the required cryoprotectant agent levels. Recently, emerging nano- and micro-scale technologies have shown potential to manipulate picoliter to nanoliter sample sizes. Therefore, the synergistic integration of nanoscale technologies with cryogenics has the potential to improve biopreservation methods. PMID:21955080
Wynn, J.C.; Roseboom, E.H.
1987-01-01
Evaluation of potential high-level nuclear waste repository sites is an area where geophysical capabilities and limitations may significantly impact a major governmental program. Since there is concern that extensive exploratory drilling might degrade most potential disposal sites, geophysical methods become crucial as the only nondestructive means to examine large volumes of rock in three dimensions. Characterization of potential sites requires geophysicists to alter their usual mode of thinking: no longer are anomalies being sought, as in mineral exploration, but rather their absence. Thus the size of features that might go undetected by a particular method takes on new significance. Legal and regulatory considerations that stem from this different outlook, most notably the requirements of quality assurance (necessary for any data used in support of a repository license application), are forcing changes in the manner in which geophysicists collect and document their data. -Authors
Methods of Phase and Power Control in Magnetron Transmitters for Superconducting Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazadevich, G.; Johnson, R.; Neubauer, M.
Various methods of phase and power control in magnetron RF sources of superconducting accelerators intended for ADS-class projects were recently developed and studied with conventional 2.45 GHz, 1 kW, CW magnetrons operating in pulsed and CW regimes. Magnetron transmitters excited by a resonant (injection-locking) phase-modulated signal can provide phase and power control with the rates required for precise stabilization of phase and amplitude of the accelerating field in Superconducting RF (SRF) cavities of the intensity-frontier accelerators. An innovative technique that can significantly increase the magnetron transmitter efficiency at the wide-range power control required for superconducting accelerators was developed and verified with the 2.45 GHz magnetrons operating in CW and pulsed regimes. High-efficiency magnetron transmitters of this type can significantly reduce the capital and operation costs of the ADS-class accelerator projects.
Hui, Catherine; Joughin, Elaine; Nettel-Aguirre, Alberto; Goldstein, Simon; Harder, James; Kiefer, Gerhard; Parsons, David; Brauer, Carmen; Howard, Jason
2014-01-01
Background The Ponseti method of congenital idiopathic clubfoot correction has traditionally specified plaster of Paris (POP) as the cast material of choice; however, there are negative aspects to using POP. We sought to determine the influence of cast material (POP v. semirigid fibreglass [SRF]) on clubfoot correction using the Ponseti method. Methods Patients were randomized to POP or SRF before undergoing the Ponseti method. The primary outcome measure was the number of casts required for clubfoot correction. Secondary outcome measures included the number of casts by severity, ease of cast removal, need for Achilles tenotomy, brace compliance, deformity relapse, need for repeat casting and need for ancillary surgical procedures. Results We enrolled 30 patients: 12 randomized to POP and 18 to SRF. There was no difference in the number of casts required for clubfoot correction between the groups (p = 0.13). According to parents, removal of POP was more difficult (p < 0.001), more time consuming (p < 0.001) and required more than 1 method (p < 0.001). At a final follow-up of 30.8 months, the mean times to deformity relapse requiring repeat casting, surgery or both were 18.7 and 16.4 months for the SRF and POP groups, respectively. Conclusion There was no significant difference in the number of casts required for correction of clubfoot between the 2 materials, but SRF resulted in a more favourable parental experience, which cannot be ignored as it may have a positive impact on psychological well-being despite the increased cost associated. PMID:25078929
Formulation of a dynamic analysis method for a generic family of hoop-mast antenna systems
NASA Technical Reports Server (NTRS)
Gabriele, A.; Loewy, R.
1981-01-01
Analytical studies of mast-cable-hoop-membrane type antennas were conducted using a transfer matrix numerical analysis approach. This method, by virtue of its specialization and the inherently easy compartmentalization of the formulation and numerical procedures, can be significantly more efficient in computer time required and in the time needed to review and interpret the results.
Quantifying the Thermal Fatigue of CPV Modules
NASA Astrophysics Data System (ADS)
Bosco, Nick; Kurtz, Sarah
2010-10-01
A method is presented to quantify thermal fatigue in the CPV die-attach from meteorological data. A comparative study between cities demonstrates a significant difference in the accumulated damage. These differences are most sensitive to the number of larger-ΔT thermal cycles experienced at a location. High-frequency data (sampling interval < 1 min) may be required to employ this method most accurately.
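The abstract does not give the damage model or its coefficients, so the following Python sketch is only a schematic of the kind of calculation described: one thermal cycle per day, with damage per cycle taken proportional to ΔT raised to a hypothetical Coffin-Manson-style exponent. All numbers and the synthetic temperature series are invented for illustration.

```python
import numpy as np

def accumulated_damage(cell_temps, samples_per_day, exponent=4.0):
    """Relative thermal-fatigue damage from a die-attach temperature series.

    cell_temps: 1-D array of die temperatures (deg C), regularly sampled.
    Each day contributes one cycle of amplitude dT = daily max - daily min,
    and damage per cycle is taken proportional to dT**exponent
    (a Coffin-Manson-style law; the exponent here is purely illustrative).
    """
    n_days = len(cell_temps) // samples_per_day
    days = np.asarray(cell_temps[: n_days * samples_per_day]).reshape(n_days, samples_per_day)
    delta_t = days.max(axis=1) - days.min(axis=1)
    return np.sum(delta_t ** exponent)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    minutes_per_day = 24 * 60
    t = np.arange(365 * minutes_per_day)
    # synthetic "meteorological" series: diurnal swing plus noise, two climates
    hot_city = 45 + 25 * np.sin(2 * np.pi * t / minutes_per_day) + rng.normal(0, 2, t.size)
    mild_city = 30 + 12 * np.sin(2 * np.pi * t / minutes_per_day) + rng.normal(0, 2, t.size)
    d_hot = accumulated_damage(hot_city, minutes_per_day)
    d_mild = accumulated_damage(mild_city, minutes_per_day)
    print("relative damage, hot / mild climate: %.1f" % (d_hot / d_mild))
```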
Liu, Liehua; Cheng, Shiming; Lu, Rui; Zhou, Qiang
2016-01-01
Aim. This report introduces extrapedicular infiltration anesthesia as an improved method of local anesthesia for unipedicular percutaneous vertebroplasty or percutaneous kyphoplasty. Method. From March 2015 to March 2016, 44 patients (11 males and 33 females) with osteoporotic vertebral compression fractures, with a mean age of 71.4 ± 8.8 years (range: 60 to 89), received percutaneous vertebroplasty or percutaneous kyphoplasty. Twenty-four patients were managed with conventional local infiltration anesthesia (CLIA) and 20 patients with both CLIA and extrapedicular infiltration anesthesia (EPIA). Patients evaluated intraoperative pain by means of the visual analogue score and were monitored during the procedure for additional sedative analgesia needs and for adverse nerve root effects. Results. The VAS of the CLIA + EPIA and CLIA groups was 2.5 ± 0.7 and 4.3 ± 1.0, respectively, a significant difference (P = 0.001). In the CLIA group, 1 patient required additional sedative analgesia, whereas in the CLIA + EPIA group, no patients required it. In the two groups, no adverse nerve root effects were noted. Summary. Extrapedicular infiltration anesthesia provided good local anesthetic effects without significant complications. This method deserves further consideration for use in unipedicular percutaneous vertebroplasty and percutaneous kyphoplasty.
Optimization of Typological Requirements for Low-Cost Detached Houses
NASA Astrophysics Data System (ADS)
Kuráň, Jozef
2017-09-01
The presented paper deals with an analysis of the legislative, hygienic, functional and operational requirements for the design of detached houses and individual dwellings in terms of typological requirements. The article also presents a sociological survey about the preferences and subjective requirements of relevant public group segments with respect to living in a detached house or an individual dwelling. The aim of the paper is to define the possibilities for optimizing typological requirements. The optimization methods are based on principles already applied to contemporary detached house preferences and trends. The main idea is to reduce the amount of floor space, thus lowering construction and operating costs. The goal is to design an optimized floor plan while preserving the hygienic criteria for individual residential dwellings. Applying these optimization methods yields a so-called rationalized and conditioned floor plan for an individual dwelling that can be compared quantitatively to a reference model. The main sources for this research are the legislative and normative requirements in the field of house construction in Slovakia, the Czech Republic and abroad.
NASA Technical Reports Server (NTRS)
Schuster, David M.
2008-01-01
Over the past three years, the National Aeronautics and Space Administration (NASA) has initiated design, development, and testing of a new human-rated space exploration system under the Constellation Program. Initial designs within the Constellation Program are scheduled to replace the present Space Shuttle, which is slated for retirement within the next three years. The development of vehicles for the Constellation system has encountered several unsteady aerodynamics challenges that have bearing on more traditional unsteady aerodynamic and aeroelastic analysis. This paper focuses on the synergy between the present NASA challenges and the ongoing challenges that have historically been the subject of research and method development. There are specific similarities in the flows required to be analyzed for the space exploration problems and those required for some of the more nonlinear unsteady aerodynamic and aeroelastic problems encountered on aircraft. The aggressive schedule, significant technical challenge, and high-priority status of the exploration system development is forcing engineers to implement existing tools and techniques in a design and application environment that is significantly stretching the capability of their methods. While these methods afford the users with the ability to rapidly turn around designs and analyses, their aggressive implementation comes at a price. The relative immaturity of the techniques for specific flow problems and the inexperience with their broad application to them, particularly on manned spacecraft flight system, has resulted in the implementation of an extensive wind tunnel and flight test program to reduce uncertainty and improve the experience base in the application of these methods. This provides a unique opportunity for unsteady aerodynamics and aeroelastic method developers to test and evaluate new analysis techniques on problems with high potential for acquisition of test and even flight data against which they can be evaluated. However, researchers may be required to alter the geometries typically used in their analyses, the types of flows analyzed, and even the techniques by which computational tools are verified and validated. This paper discusses these issues and provides some perspective on the potential for new and innovative approaches to the development of methods to attack problems in nonlinear unsteady aerodynamics.
Griffiths, Nia W; Wyatt, Mark F; Kean, Suzanna D; Graham, Andrew E; Stein, Bridget K; Brenton, A Gareth
2010-06-15
A method for the accurate mass measurement of positive radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. Initial use of a conjugated oligomeric calibration material was rejected in favour of a series of meso-tetraalkyl/tetraalkylaryl-functionalised porphyrins, from which the two calibrants required for a particular accurate mass measurement were chosen. While all measurements of monoisotopic species were within +/-5 ppm, and the method was rigorously validated using chemometrics, mean values of five measurements were used for extra confidence in the generation of potential elemental formulae. Potential difficulties encountered when measuring compounds containing multi-isotopic elements are discussed, where the monoisotopic peak is no longer the lowest mass peak, and a simple mass-correction solution can be applied. The method requires no significant expertise to implement, but care and attention is required to obtain valid measurements. The method is operationally simple and will prove useful to the analytical chemistry community. Copyright (c) 2010 John Wiley & Sons, Ltd.
A novel heat engine for magnetizing superconductors
NASA Astrophysics Data System (ADS)
Coombs, T. A.; Hong, Z.; Zhu, X.; Krabbes, G.
2008-03-01
The potential of bulk melt-processed YBCO single domains to trap significant magnetic fields (Tomita and Murakami 2003 Nature 421 517-20; Fuchs et al 2000 Appl. Phys. Lett. 76 2107-9) at cryogenic temperatures makes them particularly attractive for a variety of engineering applications including superconducting magnets, magnetic bearings and motors (Coombs et al 1999 IEEE Trans. Appl. Supercond. 9 968-71; Coombs et al 2005 IEEE Trans. Appl. Supercond. 15 2312-5). It has already been shown that large fields can be obtained in single domain samples at 77 K. A range of possible applications exist in the design of high power density electric motors (Jiang et al 2006 Supercond. Sci. Technol. 19 1164-8). Before such devices can be created a major problem needs to be overcome. Even though all of these devices use a superconductor in the role of a permanent magnet and even though the superconductor can trap potentially huge magnetic fields (greater than 10 T) the problem is how to induce the magnetic fields. There are four possible known methods: (1) cooling in field; (2) zero field cooling, followed by slowly applied field; (3) pulse magnetization; (4) flux pumping. Any of these methods could be used to magnetize the superconductor and this may be done either in situ or ex situ. Ideally the superconductors are magnetized in situ. There are several reasons for this: first, if the superconductors should become demagnetized through (i) flux creep, (ii) repeatedly applied perpendicular fields (Vanderbemden et al 2007 Phys. Rev. B 75 (17)) or (iii) loss of cooling then they may be re-magnetized without the need to disassemble the machine; secondly, there are difficulties with handling very strongly magnetized material at cryogenic temperatures when assembling the machine; thirdly, ex situ methods would require the machine to be assembled both cold and pre-magnetized and would offer significant design difficulties. Until room temperature superconductors can be prepared, the most efficient design of machine will therefore be one in which an in situ magnetizing fixture is included. The first three methods all require a solenoid which can be switched on and off. In the first method an applied magnetic field is required equal to the required magnetic field, whilst the second and third approaches require fields at least two times greater. The final method, however, offers significant advantages since it achieves the final required field by repeated applications of a small field and can utilize a permanent magnet (Coombs 2007 British Patent GB2431519 granted 2007-09-26). If we wish to pulse a field using, say, a 10 T magnet to magnetize a 30 mm × 10 mm sample then we can work out how big the solenoid needs to be. If it were possible to wind an appropriate coil using YBCO tape then, assuming an Ic of 70 A and a thickness of 100 µm, we would have 100 turns and 7000 A turns. This would produce a B field of approximately 7000/(20 × 10⁻³) × 4π × 10⁻⁷ ≈ 0.4 T. To produce 10 T would require pulsing to 1400 A! An alternative calculation would be to assume a Jc of say 5 × 10⁸ A m⁻² and a coil 1 cm² in cross section. The field would then be 5 × 10⁸ × 10⁻² × (2 × 4π × 10⁻⁷) ≈ 10 T. Clearly if the magnetization fixture is not to occupy more room than the puck itself then a very high activation current would be required and either constraint makes in situ magnetization a very difficult proposition.
What is required for in situ magnetization is a magnetization method in which a relatively small field of the order of millitesla repeatedly applied is used to magnetize the superconductor. This paper describes a novel method for achieving this.
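The two order-of-magnitude estimates in the passage above can be checked with a few lines of Python; the snippet below simply reproduces the text's assumptions (100 turns of tape at Ic = 70 A over a 20 mm length, and a bulk-like coil with Jc = 5 × 10⁸ A m⁻² over a 1 cm dimension, including the factor of 2 used in the text).

```python
import math

mu0 = 4 * math.pi * 1e-7          # vacuum permeability, T*m/A

# Estimate 1: 100-turn YBCO-tape coil, Ic = 70 A, over a 20 mm length
n_turns, i_c, length = 100, 70.0, 20e-3
b_tape = mu0 * n_turns * i_c / length
print("tape-wound coil: B ~ %.2f T" % b_tape)        # about 0.44 T, i.e. ~0.4 T

# Estimate 2: bulk-like coil, Jc = 5e8 A/m^2, 1 cm dimension,
# with the factor of 2 used in the text
j_c, thickness = 5e8, 1e-2
b_bulk = 2 * mu0 * j_c * thickness
print("1 cm cross-section: B ~ %.0f T" % b_bulk)     # about 13 T, i.e. order 10 T
```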
NASA Astrophysics Data System (ADS)
Delgado, Carlos; Cátedra, Manuel Felipe
2018-05-01
This work presents a technique that allows a very noticeable relaxation of the computational requirements for full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results, requiring a fraction of the resources that the conventional analysis would utilize.
Using neural networks to represent potential surfaces as sums of products.
Manzhos, Sergei; Carrington, Tucker
2006-11-21
By using exponential activation functions with a neural network (NN) method we show that it is possible to fit potentials to a sum-of-products form. The sum-of-products form is desirable because it reduces the cost of doing the quadratures required for quantum dynamics calculations. It also greatly facilitates the use of the multiconfiguration time-dependent Hartree method. Unlike the potfit product representation algorithm, the new NN approach does not require using a grid of points. It also produces sum-of-products potentials with fewer terms. As the number of dimensions is increased, we expect the advantages of the exponential NN idea to become more significant.
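As a toy illustration of why exponential activations give a sum-of-products form, the Python sketch below fits V(x) = Σₙ cₙ exp(wₙ·x + bₙ), in which each term automatically factorizes into a product of one-dimensional exponentials. The network size, training scheme and test potential are invented; this is not the authors' code.

```python
import numpy as np

# Model: V(x) = sum_n c_n * exp(w_n . x + b_n)
#             = sum_n c_n*exp(b_n) * prod_d exp(w_nd * x_d)
# i.e. each term is a product of one-dimensional functions (sum-of-products form).

def model(x, w, b, c):
    return np.exp(x @ w.T + b) @ c                 # values at each sample point

def fit(x, v, n_terms=20, lr=1e-2, n_steps=10000, seed=0):
    """Fit the exponential-activation model by plain gradient descent on MSE."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0, 0.3, (n_terms, x.shape[1]))
    b = rng.normal(0, 0.3, n_terms)
    c = rng.normal(0, 0.3, n_terms)
    for _ in range(n_steps):
        h = np.exp(x @ w.T + b)                    # hidden activations (points, terms)
        resid = h @ c - v
        gc = h.T @ resid / len(v)                  # gradients of 0.5*mean(resid^2)
        gb = ((h * c).T @ resid) / len(v)
        gw = (((h * c).T * resid) @ x) / len(v)
        c -= lr * gc
        b -= lr * gb
        w -= lr * gw
    return w, b, c

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # toy 3-D "potential" sampled at scattered (non-grid) points
    x = rng.uniform(-1, 1, (2000, 3))
    v = np.cos(2 * x[:, 0]) * np.exp(-x[:, 1] ** 2) + 0.5 * x[:, 2] ** 2
    w, b, c = fit(x, v)
    rmse = np.sqrt(np.mean((model(x, w, b, c) - v) ** 2))
    print("fit RMSE: %.3f" % rmse)
```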
NASA Technical Reports Server (NTRS)
Wilt, T. E.
1995-01-01
The Generalized Method of Cells (GMC), a micromechanics based constitutive model, is implemented into the finite element code MARC using the user subroutine HYPELA. Comparisons in terms of transverse deformation response, micro stress and strain distributions, and required CPU time are presented for GMC and finite element models of fiber/matrix unit cell. GMC is shown to provide comparable predictions of the composite behavior and requires significantly less CPU time as compared to a finite element analysis of the unit cell. Details as to the organization of the HYPELA code are provided with the actual HYPELA code included in the appendix.
Profitable capitation requires accurate costing.
West, D A; Hicks, L L; Balas, E A; West, T D
1996-01-01
In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages.
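As a purely hypothetical illustration of the difference between aggregate (ratio-of-cost-to-treatment) costing and a simple activity-based allocation, the short Python example below spreads the same overhead pool two ways; all figures are invented and do not come from the article.

```python
# Hypothetical example contrasting aggregate (ratio-of-cost-to-treatment)
# costing with a simple activity-based allocation. All figures are invented.

total_overhead = 500_000.0            # nursing salaries plus general overhead
treatments = {
    # treatment: (count per year, nursing minutes each, supplies cost each)
    "routine_treatment": (4000, 30, 40.0),
    "complex_treatment": (1000, 90, 95.0),
}

n_total = sum(n for n, _, _ in treatments.values())
total_minutes = sum(n * mins for n, mins, _ in treatments.values())

for name, (n, mins, supplies) in treatments.items():
    aggregate = supplies + total_overhead / n_total            # same overhead per treatment
    abc = supplies + total_overhead * (mins / total_minutes)   # overhead follows nursing time
    print(f"{name:>18s}: aggregate ${aggregate:7.2f}  activity-based ${abc:7.2f}")
```

With these invented numbers the aggregate method charges every treatment the same overhead share, while the activity-based allocation shifts overhead toward the treatments that actually consume more nursing time.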
A multiresolution halftoning algorithm for progressive display
NASA Astrophysics Data System (ADS)
Mukherjee, Mithun; Sharma, Gaurav
2005-01-01
We describe and implement an algorithmic framework for memory efficient, 'on-the-fly' halftoning in a progressive transmission environment. Instead of a conventional approach which repeatedly recalls the continuous tone image from memory and subsequently halftones it for display, the proposed method achieves significant memory efficiency by storing only the halftoned image and updating it in response to additional information received through progressive transmission. Thus the method requires only a single frame-buffer of bits for storage of the displayed binary image and no additional storage is required for the contone data. The additional image data received through progressive transmission is accommodated through in-place updates of the buffer. The method is thus particularly advantageous for high resolution bi-level displays where it can result in significant savings in memory. The proposed framework is implemented using a suitable multi-resolution, multi-level modification of error diffusion that is motivated by the presence of a single binary frame-buffer. Aggregates of individual display bits constitute the multiple output levels at a given resolution. This creates a natural progression of increasing resolution with decreasing bit-depth.
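The paper's multiresolution, multi-level scheme is not reproduced here, but the base technique it modifies, binary error diffusion, is easy to show. The Python sketch below is standard single-resolution Floyd-Steinberg error diffusion; the test image is illustrative only.

```python
import numpy as np

def floyd_steinberg(img):
    """Standard binary error diffusion (the base technique the proposed
    multiresolution scheme builds on). img: 2-D float array in [0, 1].
    Returns a 0/1 halftone of the same size."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # diffuse the quantization error to unprocessed neighbours
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                f[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                f[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                f[y + 1, x + 1] += err * 1 / 16
    return out

if __name__ == "__main__":
    # smooth test ramp; the halftone's local density should track the grey level
    ramp = np.tile(np.linspace(0, 1, 256), (64, 1))
    ht = floyd_steinberg(ramp)
    print("input mean %.3f, halftone mean %.3f" % (ramp.mean(), ht.mean()))
```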
A simple, less invasive stripper micropipetter-based technique for day 3 embryo biopsy.
Cedillo, Luciano; Ocampo-Bárcenas, Azucena; Maldonado, Israel; Valdez-Morales, Francisco J; Camargo, Felipe; López-Bayghen, Esther
2016-01-01
Preimplantation genetic screening (PGS) is an important procedure for in vitro fertilization (IVF). A key step of PGS, blastomere removal, is fraught with technical issues. The aim of this study was to compare a simpler procedure based on the Stripper Micropipetter, named S-biopsy, to the conventional aspiration method. On Day 3, 368 high-quality embryos (>7 cells on Day 3 with <10% fragmentation) were collected from 38 women. For each patient, the embryos were divided equally between the conventional method (n = 188) and the S-biopsy method (n = 180). The conventional method was performed using a standardized protocol. For the S-biopsy method, a laser was used to remove a significantly smaller portion of the zona pellucida. Afterwards, the complete embryo was aspirated with a Stripper Micropipetter, forcing the removal of the blastomere. Selected blastomeres underwent PGS using CGH microarrays. Embryo integrity and blastocyst formation were assessed on Day 5. Differences between groups were assessed by either the Mann-Whitney test or the Fisher exact test. Both methods resulted in the removal of only one blastomere. The S-biopsy and conventional methods did not differ in terms of affecting embryo integrity (95.0% vs. 95.7%) or blastocyst formation (72.7% vs. 70.7%). PGS analysis indicated that aneuploidy rates were similar between the two methods (63.1% vs. 65.2%). However, the time required to perform the S-biopsy method (179.2 ± 17.5 s) was significantly shorter (5-fold) than that for the conventional method. The S-biopsy method is comparable to the conventional method used to remove a blastomere for PGS but requires less time. Furthermore, due to the simplicity of the S-biopsy technique, this method is better suited to IVF laboratories.
Luebker, Stephen A; Wojtkiewicz, Melinda; Koepsell, Scott A
2015-11-01
Formalin-fixed paraffin-embedded (FFPE) tissue is a rich source of clinically relevant material that can yield important translational biomarker discovery using proteomic analysis. Protocols for analyzing FFPE tissue by LC-MS/MS exist, but standardization of procedures and critical analysis of data quality is limited. This study compared and characterized data obtained from FFPE tissue using two methods: a urea in-solution digestion method (UISD) versus a commercially available Qproteome FFPE Tissue Kit method (Qkit). Each method was performed independently three times on serial sections of homogenous FFPE tissue to minimize pre-analytical variations and analyzed with three technical replicates by LC-MS/MS. Data were evaluated for reproducibility and physiochemical distribution, which highlighted differences in the ability of each method to identify proteins of different molecular weights and isoelectric points. Each method replicate resulted in a significant number of new protein identifications, and both methods identified significantly more proteins using three technical replicates as compared to only two. UISD was cheaper, required less time, and introduced significant protein modifications as compared to the Qkit method, which provided more precise and higher protein yields. These data highlight significant variability among method replicates and type of method used, despite minimizing pre-analytical variability. Utilization of only one method or too few replicates (both method and technical) may limit the subset of proteomic information obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhuang, H; Savage, E M
2008-10-01
Quality assessment results of cooked meat can be significantly affected by sample preparation with different cooking techniques. A combi oven is a relatively new cooking technique in the U.S. market. However, there was a lack of published data about its effect on quality measurements of chicken meat. Broiler breast fillets deboned at 24-h postmortem were cooked with one of the 3 methods to the core temperature of 80 degrees C. Cooking methods were evaluated based on cooking operation requirements, sensory profiles, Warner-Bratzler (WB) shear and cooking loss. Our results show that the average cooking time for the combi oven was 17 min compared with 31 min for the commercial oven method and 16 min for the hot water method. The combi oven did not result in a significant difference in the WB shear force values, although the cooking loss of the combi oven samples was significantly lower than the commercial oven and hot water samples. Sensory profiles of the combi oven samples did not significantly differ from those of the commercial oven and hot water samples. These results demonstrate that combi oven cooking did not significantly affect sensory profiles and WB shear force measurements of chicken breast muscle compared to the other 2 cooking methods. The combi oven method appears to be an acceptable alternative for preparing chicken breast fillets in a quality assessment.
Treating nailbiting: a comparative analysis of mild aversion and competing response therapies.
Silber, K P; Haynes, C E
1992-01-01
This study compared two methods of treating nail-biting. One method involved the use of a mild aversive stimulus, in which subjects painted a bitter substance on their nails; the other required subjects to perform a competing response whenever they had the urge to bite or found themselves biting their nails. Both methods included self-monitoring of the behaviour, and a third group of subjects performed self-monitoring alone as a control condition. The study lasted four weeks. Twenty-one subjects, seven per group, participated. Both methods resulted in significant improvements in nail length, with the competing response method showing the most beneficial effect. There was no significant improvement for the control group. The competing response condition also yielded significant improvements along other dimensions, such as degree of skin damage and subjects' own ratings of their control over their habit. These improvements were not seen for the other two conditions. The benefits of this abridged version of Azrin and Nunn's (Behaviour Research and Therapy, 11, 619-628, 1973) habit reversal method, in terms of treatment success, use of therapist time and client satisfaction, are discussed.
Thakur, Nikhil A; Crisco, Joseph J; Moore, Douglas C; Froehlich, John A; Limbird, Richard S; Bliss, James M
2010-02-01
This study proposes a novel method for reattachment of the trochanteric slide osteotomy. The strength of this new fixation system was compared to established configurations. Fifteen sawbone femurs were used. Our configuration used cables above and below the lesser trochanter with a third cable around the shaft of the femur while passing the loose ends through the inferior hole of the cable grip. Displacement of the trochanter was measured with increasing load. Force required for catastrophic failure was also measured. The 3-cable construct resulted in significantly less displacement with increasing load and required a larger force to cause failure (1 cm and 2 cm). We theorize that our configuration produces a biomechanically stronger construct than previously used methods. 2010 Elsevier Inc. All rights reserved.
A New Calibration Method for Commercial RGB-D Sensors.
Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu
2017-05-24
Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter‑level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges.
Bicarbonate trigger for inducing lipid accumulation in algal systems
Gardner, Robert; Peyton, Brent; Cooksey, Keith E.
2015-08-04
The present invention provides bicarbonate containing and/or bicarbonate-producing compositions and methods to induce lipid accumulation in an algae growth system, wherein the algae growth system is under light-dark cycling condition. By adding said compositions at a specific growth stage, said methods lead to much higher lipid accumulation and/or significantly reduced total time required for accumulating lipid in the algae growth system.
Solving Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1987-01-01
The SIVA/DIVA package (initial-value ordinary differential equation solution via a variable-order Adams method) is a collection of subroutines for the solution of nonstiff ordinary differential equations. There are versions for single-precision and double-precision arithmetic. It requires fewer evaluations of derivatives than other variable-order Adams predictor/corrector methods. An option for direct integration of second-order equations makes integration of trajectory problems significantly more efficient. Written in FORTRAN 77.
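SIVA/DIVA itself is a variable-order, variable-step FORTRAN package, so the sketch below only illustrates the underlying Adams predictor-corrector (PECE) pattern at fixed order and fixed step size, in Python, on an invented test problem.

```python
import numpy as np

def abm2(f, t0, y0, h, n_steps):
    """Two-step Adams-Bashforth predictor + Adams-Moulton (trapezoidal)
    corrector with a fixed step size (PECE). The real SIVA/DIVA package
    varies both order and step size and adds error control."""
    t = t0
    y = np.asarray(y0, dtype=float)
    f_old = f(t, y)
    # bootstrap the two-step method with a single Heun (RK2) step
    y1 = y + 0.5 * h * (f_old + f(t + h, y + h * f_old))
    ts, ys = [t, t + h], [y.copy(), y1.copy()]
    t, y = t + h, y1
    f_new = f(t, y)
    for _ in range(n_steps - 1):
        y_pred = y + h * (1.5 * f_new - 0.5 * f_old)   # predict (AB2)
        f_pred = f(t + h, y_pred)                      # evaluate
        y = y + 0.5 * h * (f_new + f_pred)             # correct (AM2)
        t += h
        f_old, f_new = f_new, f(t, y)                  # evaluate at the corrected point
        ts.append(t)
        ys.append(y.copy())
    return np.array(ts), np.array(ys)

if __name__ == "__main__":
    # harmonic oscillator y'' = -y written as a first-order system
    f = lambda t, y: np.array([y[1], -y[0]])
    ts, ys = abm2(f, 0.0, [1.0, 0.0], 0.01, 1000)
    print("max error vs cos(t): %.2e" % np.max(np.abs(ys[:, 0] - np.cos(ts))))
```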
Ethanol production from lignocellulose
Ingram, Lonnie O.; Wood, Brent E.
2001-01-01
This invention presents a method of improving enzymatic degradation of lignocellulose, as in the production of ethanol from lignocellulosic material, through the use of ultrasonic treatment. The invention shows that ultrasonic treatment reduces cellulase requirements by 1/3 to 1/2. With the cost of enzymes being a major problem in the cost-effective production of ethanol from lignocellulosic material, this invention presents a significant improvement over presently available methods.
A comparison of VRML and animation of rotation for teaching 3-dimensional crystal lattice structures
NASA Astrophysics Data System (ADS)
Sauls, Barbara Lynn
Chemistry students often have difficulty visualizing abstract concepts of molecules and atoms, which may lead to misconceptions. The three-dimensionality of these structures presents a challenge to educators. Typical methods of teaching include text with two-dimensional graphics and structural models. Improved methods that allow visualization of 3D structures may improve learning of these concepts. This research compared the use of Virtual Reality Modeling Language (VRML) and animation of rotation for teaching three-dimensional structures. VRML allows full control of objects by altering angle, size and rotation, and provides the ability to zoom into and through objects. Animations may only be stopped, restarted and replayed. A web-based lesson teaching basic concepts of crystals, which requires comprehension of their three-dimensional structure, was given to 100 freshman chemistry students. Students were stratified by gender and then randomly assigned to one of two lessons, which were identical except for the multimedia method used to show the lattices and unit cells. One method required exploration of the structures using VRML; the other provided animations of the same structures rotating. The students worked through an examination as the lesson progressed. A Welch t′ test was used to compare differences between groups. No significant difference in mean achievement was found between the two methods, between genders, or within gender. There was no significant difference in mean total SAT between the animation and VRML groups. Total time on task showed no significant difference, nor did enjoyment of the lesson. Students, however, spent 14% less time maneuvering the VRML structures than viewing the animations of rotation. Neither method proved superior for presenting three-dimensional information. Because the students spent less time maneuvering the VRML structures with no difference in mean score, the use of VRML may be more efficient. The investigator noted some manipulation difficulties using VRML to rotate structures. Some students had difficulty obtaining the correct angle required to properly interpret spatial relationships. This led to frustration and caused some students to quit trying before they could answer questions fully. Even though there were some difficulties, outcomes were not affected. Higher scores, however, may have been achieved had the students been proficient in VRML maneuvering.
A theoretically based determination of bowen-ratio fetch requirements
Stannard, D.I.
1997-01-01
Determination of fetch requirements for accurate Bowen-ratio measurements of latent- and sensible-heat fluxes is more involved than for eddy-correlation measurements because Bowen-ratio sensors are located at two heights, rather than just one. A simple solution to the diffusion equation is used to derive an expression for Bowen-ratio fetch requirements, downwind of a step change in surface fluxes. These requirements are then compared to eddy-correlation fetch requirements based on the same diffusion equation solution. When the eddy-correlation and upper Bowen-ratio sensor heights are equal, and the available energy upwind and downwind of the step change is constant, the Bowen-ratio method requires less fetch than does eddy correlation. Differences in fetch requirements between the two methods are greatest over relatively smooth surfaces. Bowen-ratio fetch can be reduced significantly by lowering the lower sensor, as well as the upper sensor. The Bowen-ratio fetch model was tested using data from a field experiment where multiple Bowen-ratio systems were deployed simultaneously at various fetches and heights above a field of bermudagrass. Initial comparisons were poor, but improved greatly when the model was modified (and operated numerically) to account for the large roughness of the upwind cotton field.
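The fetch expression itself is not reproduced here, but the flux calculation that the fetch requirements protect is the standard Bowen-ratio energy-balance partitioning, shown below in Python with invented mid-day values.

```python
# Standard Bowen-ratio energy-balance partitioning (the flux calculation the
# fetch analysis supports); the fetch model itself is not reproduced here.

def bowen_ratio_fluxes(dT, de, rn, g, gamma=0.066):
    """dT, de: air temperature (deg C) and vapour pressure (kPa) differences
    between the upper and lower sensors; rn, g: net radiation and soil heat
    flux (W/m^2); gamma: psychrometric constant (kPa per deg C).
    Returns (sensible heat H, latent heat LE) in W/m^2."""
    beta = gamma * dT / de             # Bowen ratio
    le = (rn - g) / (1.0 + beta)       # latent heat flux
    h = beta * le                      # sensible heat flux
    return h, le

if __name__ == "__main__":
    # hypothetical mid-day values over a well-watered grass surface
    h, le = bowen_ratio_fluxes(dT=-0.4, de=-0.25, rn=550.0, g=60.0)
    print("H = %.0f W/m2, LE = %.0f W/m2" % (h, le))
```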
The sensitivity of an hydroponic lettuce root elongation bioassay to metals, phenol and wastewaters.
Park, Jihae; Yoon, Jeong-Hyun; Depuydt, Stephen; Oh, Jung-Woo; Jo, Youn-Min; Kim, Kyungtae; Brown, Murray T; Han, Taejun
2016-04-01
The root elongation bioassay is one of the most straightforward test methods used for environmental monitoring in terms of simplicity, rapidity and economy, since it merely requires filter paper, distilled water and Petri dishes. However, filter paper as a support material is known to be problematic as it can reduce the sensitivity of the test. The newly developed hydroponic method reported here differs from the conventional root elongation method (US EPA filter paper method) in that no support material is used and the exposure time is shorter (48 h in this test versus 120 h in the US EPA test). For metals, the hydroponic test method was 3.3 (for Hg) to 57 (for Cu) times more sensitive than the US EPA method, with the rank orders of sensitivity, estimated from EC50 values, being Cu≥Cd>Ni≥Zn≥Hg for the former and Hg≥Cu≥Ni≥Cd≥Zn for the latter method. For phenol, the results did not differ significantly; EC50 values were 124 mg L⁻¹ and 108-180 mg L⁻¹ for the hydroponic and filter paper methods, respectively. Lettuce was less sensitive than daphnids to wastewaters, but the root elongation response appears to be wastewater-specific and is especially sensitive for detecting the presence of fluorine. The new hydroponic test thus provides many practical advantages, especially in terms of cost- and time-effectiveness, requiring only a well plate, a small volume of distilled water and a short exposure period; furthermore, no specialist expertise is required. The method is simpler than the conventional EPA technique in not using filter paper, which can influence the sensitivity of the test. Additionally, plant seeds have a long shelf-life and require little or no maintenance. Copyright © 2015 Elsevier Inc. All rights reserved.
Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay
NASA Astrophysics Data System (ADS)
Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.
1997-02-01
A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.
Storelli, L; Pagani, E; Rocca, M A; Horsfield, M A; Gallo, A; Bisecco, A; Battaglini, M; De Stefano, N; Vrenken, H; Thomas, D L; Mancini, L; Ropele, S; Enzinger, C; Preziosa, P; Filippi, M
2016-07-21
The automatic segmentation of MS lesions could reduce time required for image processing together with inter- and intraoperator variability for research and clinical trials. A multicenter validation of a proposed semiautomatic method for hyperintense MS lesion segmentation on dual-echo MR imaging is presented. The classification technique used is based on a region-growing approach starting from manual lesion identification by an expert observer with a final segmentation-refinement step. The method was validated in a cohort of 52 patients with relapsing-remitting MS, with dual-echo images acquired in 6 different European centers. We found a mathematic expression that made the optimization of the method independent of the need for a training dataset. The automatic segmentation was in good agreement with the manual segmentation (dice similarity coefficient = 0.62 and root mean square error = 2 mL). Assessment of the segmentation errors showed no significant differences in algorithm performance between the different MR scanner manufacturers (P > .05). The method proved to be robust, and no center-specific training of the algorithm was required, offering the possibility for application in a clinical setting. Adoption of the method should lead to improved reliability and less operator time required for image analysis in research and clinical trials in MS. © 2016 American Society of Neuroradiology.
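The validated pipeline (dual-echo inputs, expert seeding, a training-free threshold expression and a refinement step) is not reproduced here; the Python sketch below only illustrates generic intensity-based region growing from a manually chosen seed on a synthetic 2-D image, with an arbitrary tolerance.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a manually identified seed, accepting 4-connected
    neighbours whose intensity stays within `tol` of the mean intensity of the
    region grown so far. A generic sketch, not the published algorithm
    (which adds a final segmentation-refinement step)."""
    mask = np.zeros(image.shape, dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx]
                    and abs(image[ny, nx] - total / count) <= tol):
                mask[ny, nx] = True
                total += float(image[ny, nx])
                count += 1
                queue.append((ny, nx))
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(100, 5, (64, 64))         # "normal-appearing" tissue
    img[20:30, 20:30] += 60                    # synthetic hyperintense lesion
    lesion = region_grow(img, seed=(25, 25), tol=30)
    print("segmented pixels:", int(lesion.sum()))   # roughly 100 expected
```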
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, Billy C.
The measurement of the radiation characteristics of an antenna on a near-field range requires that the antenna under test be located very close to the near-field probe. Although the direct coupling is utilized for characterizing the near field, this close proximity also presents the opportunity for significant undesired interactions (for example, reflections) to occur between the antenna and the near-field probe. When uncompensated, these additional interactions will introduce error into the measurement, increasing the uncertainty in the final gain pattern obtained through the near-field-to-far-field transformation. Quantifying this gain-uncertainty contribution requires quantifying the various additional interactions. A method incorporating spatial-frequency analysis is described which allows the dominant interaction contributions to be easily identified and quantified. In addition to identifying the additional antenna-to-probe interactions, the method also allows identification and quantification of interactions with other nearby objects within the measurement room. Because the method is a spatial-frequency method, wide-bandwidth data is not required, and it can be applied even when data is available at only a single temporal frequency. This feature ensures that the method can be applied to narrow-band antennas, where a similar time-domain analysis would not be possible.
Iterative wave-front reconstruction in the Fourier domain.
Bond, Charlotte Z; Correia, Carlos M; Sauvage, Jean-François; Neichel, Benoit; Fusco, Thierry
2017-05-15
The use of Fourier methods in wave-front reconstruction can significantly reduce the computation time for large telescopes with a high number of degrees of freedom. However, Fourier algorithms for discrete data require a rectangular data set which conform to specific boundary requirements, whereas wave-front sensor data is typically defined over a circular domain (the telescope pupil). Here we present an iterative Gerchberg routine modified for the purposes of discrete wave-front reconstruction which adapts the measurement data (wave-front sensor slopes) for Fourier analysis, fulfilling the requirements of the fast Fourier transform (FFT) and providing accurate reconstruction. The routine is used in the adaptation step only and can be coupled to any other Wiener-like or least-squares method. We compare simulations using this method with previous Fourier methods and show an increase in performance in terms of Strehl ratio and a reduction in noise propagation for a 40×40 SPHERE-like adaptive optics system. For closed loop operation with minimal iterations the Gerchberg method provides an improvement in Strehl, from 95.4% to 96.9% in K-band. This corresponds to ~ 40 nm improvement in rms, and avoids the high spatial frequency errors present in other methods, providing an increase in contrast towards the edge of the correctable band.
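As a conceptual illustration of the adaptation step, the Python sketch below extends slope data outside a circular pupil by a Gerchberg-type iteration so that a periodic FFT inverse-gradient filter can be applied. The filter form, grid size and test aberration are invented for illustration; this is not the SPHERE pipeline.

```python
import numpy as np

def _grad_operators(n):
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="xy")   # kx varies along axis 1 (x)
    return 2j * np.pi * kx, 2j * np.pi * ky

def fourier_reconstruct(sx, sy):
    """Least-squares wavefront from slope maps on a full rectangular grid,
    via the Fourier-domain inverse-gradient filter (piston removed)."""
    dx_op, dy_op = _grad_operators(sx.shape[0])
    denom = np.abs(dx_op) ** 2 + np.abs(dy_op) ** 2
    denom[0, 0] = 1.0                           # avoid division by zero (piston)
    W = (np.conj(dx_op) * np.fft.fft2(sx) + np.conj(dy_op) * np.fft.fft2(sy)) / denom
    W[0, 0] = 0.0
    return np.fft.ifft2(W).real

def gerchberg_reconstruct(sx_meas, sy_meas, pupil, n_iter=10):
    """Iteratively extrapolate slopes outside the pupil so the periodic FFT
    reconstruction becomes consistent with the in-pupil measurements."""
    dx_op, dy_op = _grad_operators(sx_meas.shape[0])
    sx = np.where(pupil, sx_meas, 0.0)
    sy = np.where(pupil, sy_meas, 0.0)
    for _ in range(n_iter):
        W = np.fft.fft2(fourier_reconstruct(sx, sy))
        gx = np.fft.ifft2(dx_op * W).real       # slopes implied by current estimate
        gy = np.fft.ifft2(dy_op * W).real
        sx = np.where(pupil, sx_meas, gx)       # keep measurements in the pupil,
        sy = np.where(pupil, sy_meas, gy)       # extrapolated slopes outside
    return fourier_reconstruct(sx, sy)

if __name__ == "__main__":
    n = 64
    yy, xx = np.mgrid[0:n, 0:n]
    pupil = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (0.45 * n) ** 2
    # synthetic aberration and its analytic slopes (arbitrary units)
    w_true = np.sin(2 * np.pi * xx / n) * np.cos(2 * np.pi * yy / n)
    sx = 2 * np.pi / n * np.cos(2 * np.pi * xx / n) * np.cos(2 * np.pi * yy / n)
    sy = -2 * np.pi / n * np.sin(2 * np.pi * xx / n) * np.sin(2 * np.pi * yy / n)
    w_hat = gerchberg_reconstruct(sx, sy, pupil)
    err = (w_hat - w_true)[pupil]
    print("in-pupil rms error: %.3e" % np.sqrt(np.mean((err - err.mean()) ** 2)))
```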
Methods for Prediction of High-Speed Reacting Flows in Aerospace Propulsion
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
2014-01-01
Research to develop high-speed airbreathing aerospace propulsion systems was underway in the late 1950s. A major part of the effort involved the supersonic combustion ramjet, or scramjet, engine. Work had also begun to develop computational techniques for solving the equations governing the flow through a scramjet engine. However, scramjet technology and the computational methods to assist in its evolution would remain apart for another decade. The principal barrier was that the computational methods needed for engine evolution lacked the computer technology required for solving the discrete equations resulting from the numerical methods. Even today, computer resources remain a major pacing item in overcoming this barrier. Significant advances have been made over the past 35 years, however, in modeling the supersonic chemically reacting flow in a scramjet combustor. To see how scramjet development and the required computational tools finally merged, we briefly trace the evolution of the technology in both areas.
Simpson, John; Raith, Andrea; Rouse, Paul; Ehrgott, Matthias
2017-10-09
Purpose The operations research method of data envelopment analysis (DEA) shows promise for assessing radiotherapy treatment plan quality. The purpose of this paper is to consider the technical requirements for using DEA for plan assessment. Design/methodology/approach In total, 41 prostate treatment plans were retrospectively analysed using the DEA method. The authors investigate the impact of DEA weight restrictions with reference to the ability to differentiate plan performance at a level of clinical significance. Patient geometry influences plan quality and the authors compare differing approaches for managing patient geometry within the DEA method. Findings The input-oriented DEA method is the method of choice when performing plan analysis using the key undesirable plan metrics as the DEA inputs. When considering multiple inputs, it is necessary to constrain the DEA input weights in order to identify potential plan improvements at a level of clinical significance. All tested approaches for the consideration of patient geometry yielded consistent results. Research limitations/implications This work is based on prostate plans and individual recommendations would therefore need to be validated for other treatment sites. Notwithstanding, the method that requires both optimised DEA weights according to clinical significance and appropriate accounting for patient geometric factors is universally applicable. Practical implications DEA can potentially be used during treatment plan development to guide the planning process or alternatively used retrospectively for treatment plan quality audit. Social implications DEA is independent of the planning system platform and therefore has the potential to be used for multi-institutional quality audit. Originality/value To the authors' knowledge, this is the first published examination of the optimal approach in the use of DEA for radiotherapy treatment plan assessment.
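For orientation, the sketch below sets up a plain input-oriented CCR DEA efficiency calculation as a linear program, treating undesirable plan metrics as inputs and target coverage as an output. The clinically motivated weight restrictions and patient-geometry handling discussed in the abstract are omitted, and all numbers and variable names are illustrative assumptions rather than the authors' data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, o):
    """Efficiency of plan (DMU) o under a basic input-oriented CCR DEA model.

    X: (n_plans, n_inputs) undesirable plan metrics (lower is better)
    Y: (n_plans, n_outputs) desirable metrics (higher is better)
    Returns theta in (0, 1]; theta < 1 flags a dominated, improvable plan.
    Sketch only: no weight restrictions or geometry adjustment.
    """
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                          # sum_j lam_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                          # sum_j lam_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[0]

# toy example: 4 plans, 2 dose inputs, 1 coverage output (made-up values)
X = np.array([[60.0, 20.0], [55.0, 25.0], [70.0, 18.0], [58.0, 22.0]])
Y = np.array([[0.95], [0.96], [0.94], [0.95]])
print([round(dea_input_oriented(X, Y, o), 3) for o in range(len(X))])
```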
Joint Contracture Orthosis (JCO)
NASA Technical Reports Server (NTRS)
Lunsford, Thomas R.; Parsons, Ken; Krouskop, Thomas; McGee, Kevin
1997-01-01
The purpose of this project was to develop an advanced orthosis which is effective in reducing upper and lower limb contractures in significantly less time than currently required with conventional methods. The team that developed the JCO consisted of an engineer, orthotist, therapist, and physician.
NASA Technical Reports Server (NTRS)
Martos, Borja; Kiszely, Paul; Foster, John V.
2011-01-01
As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeed with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-sigma error bounds with significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included as well as recommendations for piloting technique.
Hazut, Koren; Romem, Pnina; Malkin, Smadar; Livshiz-Riven, Ilana
2016-12-01
The purpose of this study was to compare the predictive validity, economic efficiency, and faculty staff satisfaction of a computerized test versus a personal interview as admission methods for graduate nursing studies. A mixed method study was designed, including cross-sectional and retrospective cohorts, interviews, and cost analysis. One hundred and thirty-four students in the Master of Nursing program participated. The success of students in required core courses was similar in both admission method groups. The personal interview method was found to be a significant predictor of success, with cognitive variables the only significant contributors to the model. Higher satisfaction levels were reported with the computerized test compared with the personal interview method. The cost of the personal interview method, in annual hourly work, was 2.28 times higher than the computerized test. These findings may promote discussion regarding the cost benefit of the personal interview as an admission method for advanced academic studies in healthcare professions. © 2016 John Wiley & Sons Australia, Ltd.
Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel
2017-01-01
Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of a few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method, in which the lengths between pairs of targets measured from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607
Portable brine evaporator unit, process, and system
Hart, Paul John; Miller, Bruce G.; Wincek, Ronald T.; Decker, Glenn E.; Johnson, David K.
2009-04-07
The present invention discloses a comprehensive, efficient, and cost effective portable evaporator unit, method, and system for the treatment of brine. The evaporator unit, method, and system require a pretreatment process that removes heavy metals, crude oil, and other contaminants in preparation for the evaporator unit. The pretreatment and the evaporator unit, method, and system process metals and brine at the site where they are generated (the well site). This saves significant money for producers, who can avoid present and future increases in transportation costs.
Assessment of Occupational Health and Safety for a Gas Meter Manufacturing Plant
NASA Astrophysics Data System (ADS)
Korkmaz, Ece; Iskender, Gulen; Germirli Babuna, Fatos
2016-10-01
This study investigates occupational health and safety for a gas meter manufacturing plant. The risk assessment and management study is applied to the plastic injection and mounting departments of the factory through the quantitative Fine Kinney method, and the effect of adopting the 5S workplace organization procedure on the risk assessment is examined. The risk assessment reveals that there are 17 risks involved: 14 grouped in the high risk class (immediate improvement as the required action), 2 in the significant risk class (measures to be taken as the required action), and one in the possible risk class (monitoring as the required action). Among the 14 high risks, 4 can be reduced by 83 % and regrouped under the possible class when 5S is applied. One significant risk is lowered by 78 % and reclassified as a possible risk due to the application of 5S. Following reductions of either 67 % or 50 %, the remaining 7 high risks are converted to members of the significant risk group after 5S implementation.
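For readers unfamiliar with the Fine Kinney method, the sketch below computes the risk score as the product of probability, exposure, and consequence and assigns a risk class. The class boundaries and the example factor values are illustrative assumptions; the abstract does not give the plant's actual scoring inputs.

```python
def fine_kinney_score(probability, exposure, consequence):
    """Fine Kinney risk score R = P * E * C."""
    return probability * exposure * consequence

def risk_class(score):
    # Class boundaries are illustrative assumptions, not the study's thresholds.
    if score > 200:
        return "high (immediate improvement required)"
    if score > 70:
        return "significant (measures to be taken)"
    return "possible (monitoring)"

# example: a hypothetical risk before and after a 5S-style 83 % reduction
before = fine_kinney_score(probability=6, exposure=6, consequence=7)  # 252
after = before * (1 - 0.83)                                           # ~43
print(before, risk_class(before))
print(round(after, 1), risk_class(after))
```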
Hand-eye calibration for rigid laparoscopes using an invariant point.
Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2016-06-01
Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
Analytics-Driven Lossless Data Compression for Rapid In-situ Indexing, Storing, and Querying
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, John; Arkatkar, Isha; Lakshminarasimhan, Sriram
2013-01-01
The analysis of scientific simulations is highly data-intensive and is becoming an increasingly important challenge. Peta-scale data sets require the use of light-weight query-driven analysis methods, as opposed to heavy-weight schemes that optimize for speed at the expense of size. This paper is an attempt in the direction of query processing over losslessly compressed scientific data. We propose a co-designed double-precision compression and indexing methodology for range queries by performing unique-value-based binning on the most significant bytes of double precision data (sign, exponent, and most significant mantissa bits), and inverting the resulting metadata to produce an inverted index over a reduced data representation. Without the inverted index, our method matches or improves compression ratios over both general-purpose and floating-point compression utilities. The inverted index is light-weight, and the overall storage requirement for both reduced column and index is less than 135%, whereas existing DBMS technologies can require 200-400%. As a proof-of-concept, we evaluate univariate range queries that additionally return column values, a critical component of data analytics, against state-of-the-art bitmap indexing technology, showing multi-fold query performance improvements.
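The sketch below illustrates the core idea of binning on the most significant bits of double-precision values and inverting the bins into an index that answers range queries; it is a toy illustration, not the authors' implementation. The 16-bit bin width and the assumption of non-negative data (for which IEEE-754 bit patterns are ordered like the values) are choices made here for simplicity.

```python
import numpy as np
from collections import defaultdict

KEEP_BITS = 16  # sign + 11-bit exponent + 4 leading mantissa bits (illustrative choice)

def build_msb_index(values):
    """Bin rows by the most significant bits of their float64 representation
    and invert the result (bin -> row ids). Sketch of the idea only."""
    bins = values.view(np.uint64) >> np.uint64(64 - KEEP_BITS)
    index = defaultdict(list)
    for row, b in enumerate(bins):
        index[int(b)].append(row)
    return index

def range_query(values, index, lo, hi):
    """Return rows with lo <= value <= hi. Assumes non-negative data."""
    lo_bin, hi_bin = (np.array([lo, hi], dtype=np.float64).view(np.uint64)
                      >> np.uint64(64 - KEEP_BITS))
    candidates = [r for b, rows in index.items() if lo_bin <= b <= hi_bin for r in rows]
    return [r for r in candidates if lo <= values[r] <= hi]  # resolve within-bin values

data = np.random.rand(10_000)
index = build_msb_index(data)
hits = range_query(data, index, 0.25, 0.26)
```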
NASA Astrophysics Data System (ADS)
Eliot, Michael H.
Students with learning disabilities (SWLDs) need to attain academic rigor to graduate from high school and college, as well as achieve success in life. Constructivist theories suggest that guided inquiry may provide the impetus for their success, yet little research has been done to support this premise. This study was designed to fill that gap. This quasi-experimental study compared didactic and guided inquiry-based teaching of science concepts to secondary SWLDs in SDC science classes. The study examined 38 students in four classes at two diverse, urban high schools. Participants were taught two science concepts using both teaching methods and posttested after each using paper-and-pencil tests and performance tasks. Data were compared to determine increases in conceptual understanding by teaching method, order of teaching method, and exposure to one or both teaching methods. A survey examined participants' perceived self-efficacy under each method. Also, a qualitative comparison of the two test formats examined their appropriate use with SWLDs. Results showed significantly higher scores after the guided inquiry method on the concept of volume, suggesting that guided inquiry does improve conceptual understanding over didactic instruction in some cases. Didactic teaching followed by guided inquiry resulted in higher scores than the reverse order, indicating that SWLDs may require direct instruction in basic facts and procedures related to a topic prior to engaging in guided inquiry. Also, application of both teaching methods resulted in significantly higher scores than a single method on the concept of density, suggesting that SWLDs may require the more in-depth instruction afforded by using both methods. No differences in perceived self-efficacy were shown. Qualitative analysis of both assessments and participants' behaviors during testing supports the use of performance tasks over paper-and-pencil tests with SWLDs. Implications for education include the use of guided inquiry to increase SWLDs' conceptual understanding and process skills, while improving motivation and participation through hands-on learning. In addition, teachers may use performance tasks to better assess students' thought processes, problem solving skills, and conceptual understanding. However, constructivist teaching methods require extra training, pedagogical skills, subject matter knowledge, physical resources, and support from all stakeholders.
Analysis of high-throughput biological data using their rank values.
Dembélé, Doulaye
2018-01-01
High-throughput biological technologies are routinely used to generate gene expression profiling or cytogenetics data. To achieve high performance, methods available in the literature have become more specialized and often require high computational resources. Here, we propose a new versatile method based on the data-ordering rank values. We use linear algebra and the Perron-Frobenius theorem, and also extend a method presented earlier for searching differentially expressed genes to the detection of recurrent copy number aberrations. A result derived from the proposed method is a one-sample Student's t-test based on rank values. The proposed method is, to our knowledge, the only one that applies to both gene expression profiling and cytogenetics data sets. This new method is fast, deterministic, and requires a low computational load. Probabilities are associated with genes to allow a statistically significant subset selection in the data set. Stability scores are also introduced as quality parameters. The performance and comparative analyses were carried out using real data sets. The proposed method can be accessed through an R package available from the CRAN (Comprehensive R Archive Network) website: https://cran.r-project.org/web/packages/fcros .
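The sketch below shows a generic rank-value test in the spirit of the abstract: expression differences are ranked for every pair of samples, and each gene's mean rank fraction is compared with 0.5 using a one-sample t-test. It is not the fcros algorithm itself; the pairing scheme and the 0.5 reference value are simplifying assumptions made for illustration.

```python
import numpy as np
from scipy import stats

def rank_based_t_test(expr_a, expr_b):
    """Illustrative rank-value test (not the fcros algorithm).
    expr_a, expr_b: (n_genes, n_samples) arrays for two conditions."""
    n_genes = expr_a.shape[0]
    fractions = []
    for i in range(expr_a.shape[1]):
        for j in range(expr_b.shape[1]):
            diff = expr_b[:, j] - expr_a[:, i]
            ranks = stats.rankdata(diff)        # 1 .. n_genes
            fractions.append(ranks / n_genes)
    fractions = np.array(fractions)             # (n_pairs, n_genes)
    t_stat, p_val = stats.ttest_1samp(fractions, 0.5, axis=0)
    return t_stat, p_val

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 4))
b = rng.normal(size=(500, 4))
b[:10] += 2.0                                   # 10 synthetic "up-regulated" genes
t, p = rank_based_t_test(a, b)
print(np.argsort(p)[:10])                       # genes with the smallest p-values
```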
Sound field separation with sound pressure and particle velocity measurements.
Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin
2012-12-01
In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.
CATALYTIC ENZYME-BASED METHODS FOR WATER TREATMENT AND WATER DISTRIBUTION SYSTEM DECONTAMINATION
Current chemistry-based decontaminants for chemical or biological warfare agents and related toxic materials are caustic and have the potential for causing material and environmental damage. In addition, most are bulk liquids that require significant logistics and storage capabil...
ANALYTICAL METHODS FOR WATER DISINFECTION BY-PRODUCTS IN FOODS AND BEVERAGES
The determination of exposure to drinking water disinfection byproducts (DBPs) requires an understanding of how drinking waters come into contact with the human through multiple pathways. The most significant pathway is the ingestion of drinking water. However, ingestion can oc...
Goldstein, S J; Hensley, C A; Armenta, C E; Peters, R J
1997-03-01
Recent developments in extraction chromatography have simplified the separation of americium from complex matrices in preparation for alpha-spectroscopy relative to traditional methods. Here we present results of procedures developed/adapted for water, air, and bioassay samples with less than 1 g of inorganic residue. Prior analytical methods required the use of a complex, multistage procedure for separation of americium from these matrices. The newer, simplified procedure requires only a single 2 mL extraction chromatographic separation for isolation of Am and lanthanides from other components of the sample. This method has been implemented on an extensive variety of "real" environmental and bioassay samples from the Los Alamos area, and consistently reliable and accurate results with appropriate detection limits have been obtained. The new method increases analytical throughput by a factor of approximately 2 and decreases environmental hazards from acid and mixed-waste generation relative to the prior technique. Analytical accuracy, reproducibility, and reliability are also significantly improved over the more complex and laborious method used previously.
Cooley, R.L.; Hill, M.C.
1992-01-01
Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.
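As a point of reference for the methods compared above, the sketch below implements a plain damped Gauss-Newton iteration for a nonlinear least-squares fit. The full-Newton (MGN/FN) and quasi-Newton (MGN/QN) Hessian corrections studied in the paper are omitted; the damping factor and the example model are assumptions made for illustration.

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, max_iter=50, tol=1e-8, damping=1.0):
    """Minimal damped Gauss-Newton iteration for sum-of-squares minimisation.
    Sketch of the basic scheme only; no full-Newton or quasi-Newton correction."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residual(p)
        J = jacobian(p)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # solve J dp ~= -r
        p = p + damping * step
        if np.linalg.norm(step) < tol * (1 + np.linalg.norm(p)):
            break
    return p

# example: fit y = a * exp(b * x) to noisy synthetic data
x = np.linspace(0, 1, 30)
rng = np.random.default_rng(1)
y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.normal(size=x.size)
res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)])
print(gauss_newton(res, jac, [1.0, -1.0]))
```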
Advanced Testing Method for Ground Thermal Conductivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiaobing; Clemenzi, Rick; Liu, Su
A new method is developed that can quickly and more accurately determine the effective ground thermal conductivity (GTC) based on thermal response test (TRT) results. Ground thermal conductivity is an important parameter for sizing ground heat exchangers (GHEXs) used by geothermal heat pump systems. The conventional GTC test method usually requires a TRT for 48 hours with a very stable electric power supply throughout the entire test. In contrast, the new method reduces the required test time by 40%–60% or more, and it can determine GTC even with an unstable or intermittent power supply. Consequently, it can significantly reduce the cost of GTC testing and increase its use, which will enable optimal design of geothermal heat pump systems. Further, this new method provides more information about the thermal properties of the GHEX and the ground than previous techniques. It can verify the installation quality of GHEXs and has the potential, if developed, to characterize the heterogeneous thermal properties of the ground formation surrounding the GHEXs.
Robust binarization of degraded document images using heuristics
NASA Astrophysics Data System (ADS)
Parker, Jon; Frieder, Ophir; Frieder, Gideon
2013-12-01
Historically significant documents are often discovered with defects that make them difficult to read and analyze. This fact is particularly troublesome if the defects prevent software from performing an automated analysis. Image enhancement methods are used to remove or minimize document defects, improve software performance, and generally make images more legible. We describe an automated, image enhancement method that is input page independent and requires no training data. The approach applies to color or greyscale images with hand written script, typewritten text, images, and mixtures thereof. We evaluated the image enhancement method against the test images provided by the 2011 Document Image Binarization Contest (DIBCO). Our method outperforms all 2011 DIBCO entrants in terms of average F1 measure - doing so with a significantly lower variance than top contest entrants. The capability of the proposed method is also illustrated using select images from a collection of historic documents stored at Yad Vashem Holocaust Memorial in Israel.
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
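The sketch below illustrates the kind of significance test described above: the success of a set of alarm windows on the real catalog is compared with its success on many simulated catalogs, and the significance level is the fraction of simulations predicted at least as well. The Poisson catalog generator shown is the memoryless baseline; per the abstract, substituting a clustered model lowers the apparent significance. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def success_count(event_times, alarm_windows):
    """Number of alarms that capture at least one event."""
    return sum(any(lo <= t < hi for t in event_times) for lo, hi in alarm_windows)

def significance(observed_events, alarm_windows, simulate_catalog, n_sim=2000):
    """Fraction of simulated catalogs that the alarms predict at least as well
    as the real catalog. The choice of simulate_catalog (Poisson vs. clustered)
    drives the result, which is the point made in the abstract."""
    obs = success_count(observed_events, alarm_windows)
    sims = [success_count(simulate_catalog(), alarm_windows) for _ in range(n_sim)]
    return float(np.mean([s >= obs for s in sims]))

# toy setup: 10 events in 1000 days, 5-day alarms built near 4 real events
events = np.sort(rng.uniform(0, 1000, size=10))
alarms = [(t - 2, t + 3) for t in events[:4]]
poisson_catalog = lambda: rng.uniform(0, 1000, size=10)  # memoryless (no clustering)
print(significance(events, alarms, poisson_catalog))
```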
Extended polarization in 3rd order SCC-DFTB from chemical potential equalization
Kaminski, Steve; Giese, Timothy J.; Gaus, Michael; York, Darrin M.; Elstner, Marcus
2012-01-01
In this work we augment the approximate density functional method SCC-DFTB (DFTB3) with the chemical potential equalization (CPE) approach in order to improve the performance for molecular electronic polarizabilities. The CPE method, originally implemented for NDDO-type methods by Giese and York, has been shown to significantly improve the response properties of minimal basis methods, and has been applied to SCC-DFTB recently. CPE makes it possible to overcome this inherent limitation of minimal basis methods by supplying an additional response density. The systematic underestimation is thereby corrected quantitatively without the need to extend the atomic orbital basis, i.e. without increasing the overall computational cost significantly. In particular, the dependence of the polarizability on the molecular charge state was significantly improved by the CPE extension of DFTB3. The empirical parameters introduced by the CPE approach were optimized for 172 organic molecules in order to match the results from density functional theory (DFT) methods using large basis sets. However, the first order derivatives of molecular polarizabilities, as e.g. required to compute Raman activities, are not improved by the current CPE implementation, i.e. Raman spectra are not improved. PMID:22894819
Han, Xu; Suo, Shiteng; Sun, Yawen; Zu, Jinyan; Qu, Jianxun; Zhou, Yan; Chen, Zengai; Xu, Jianrong
2017-03-01
To compare four methods of region-of-interest (ROI) placement for apparent diffusion coefficient (ADC) measurements in distinguishing low-grade gliomas (LGGs) from high-grade gliomas (HGGs). Two independent readers measured ADC parameters using four ROI methods (single-slice [single-round, five-round, and freehand] and whole-volume) on 43 patients (20 LGGs, 23 HGGs) who had undergone 3.0 Tesla diffusion-weighted imaging, and the time required for each method of ADC measurement was recorded. Intraclass correlation coefficients (ICCs) were used to assess interobserver variability of ADC measurements. Mean and minimum ADC values and the time required were compared using paired Student's t-tests. All ADC parameters (mean/minimum ADC values of the three single-slice methods; mean/minimum/standard deviation/skewness/kurtosis/10th and 25th percentiles/median/maximum of the whole-volume method) were correlated with tumor grade (low versus high) by unpaired Student's t-tests. Discriminative ability was determined by receiver operating characteristic curves. All ADC measurements except the minimum, skewness, and kurtosis of the whole-volume ROI differed significantly between LGGs and HGGs (all P < 0.05). The mean ADC value of the single-round ROI had the highest effect size (0.72) and the greatest area under the curve (0.872). The three single-slice methods had good to excellent ICCs (0.67-0.89) and the whole-volume method fair to excellent ICCs (0.32-0.96). Minimum ADC values differed significantly between whole-volume and single-round ROI (P = 0.003) and between whole-volume and five-round ROI (P = 0.001). The whole-volume method took significantly longer than all single-slice methods (all P < 0.001). ADC measurements are influenced by ROI determination methods. Whole-volume histogram analysis did not yield better results than single-slice methods and took longer. The mean ADC value derived from a single-round ROI is the most optimal parameter for differentiating LGGs from HGGs. J. Magn. Reson. Imaging 2017;45:722-730. © 2016 International Society for Magnetic Resonance in Medicine.
Grajek, Zbysław W; Dadan, Jacek; Ładny, Jerzy R; Opolski, Marcin
2015-01-01
The need to obtain successful surgical hemostasis had a significant impact on the development of electrosurgery. Innovative technical solutions necessitate the continuous training of surgeons in the use of more modern technologies. The diversity of solutions is also associated with the need to adapt the methods for obtaining hemostasis to the type of operation. Each time, the introduction of new technologies requires a critical evaluation of the results of surgical treatment. The most important measure of quality in thyroid surgery is the presence of chronic complications, such as recurrent laryngeal nerve palsy and parathyroid insufficiency. Transient disorders also have a significant impact on the patient's comfort and quality of life. The report is preliminary in nature and requires further investigation. The aim of the study was to evaluate the effect of three methods for obtaining hemostasis on the occurrence of hypoparathyroidism, recurrent laryngeal nerve palsy, bleeding, and surgical site infection after thyroid surgery. A retrospective analysis included patients who underwent thyroidectomy (n=654). Three methods of hemostasis were used. The first group (n=339) had blood vessels tied off. In the second (n=192) bipolar electrocoagulation was used, and in the third (n=123) bipolar electrocoagulation with an integrated cutting mechanism. Transient hypoparathyroidism was found in 1.4% of patients in the first group, 8.3% in the second, and 27.6% in the third. Chronic hypoparathyroidism was found in 0.29% in the first group, 0% in the second group, and 2.4% in the third group. Statistically significant differences were found in the incidence of transient hypoparathyroidism, with higher incidences in the groups where bipolar electrosurgery was used.
Yang, Y; Kapalavavi, B; Gujjar, L; Hadrous, S; Marple, R; Gamsky, C
2012-10-01
Several high-temperature liquid chromatography (HTLC) and subcritical water chromatography (SBWC) methods have been successfully developed in this study for the separation and analysis of preservatives contained in Olay skincare creams. Efficient separation and quantitative analysis of preservatives have been achieved on four commercially available ZirChrom and Waters XBridge columns at temperatures ranging from 100 to 200°C. The quantification results obtained by both the HTLC and SBWC methods developed for preservative analysis are accurate and reproducible. A large number of replicate HTLC and SBWC runs also indicate no significant system build-up or interference for skincare cream analysis. Compared with traditional HPLC separation carried out at ambient temperature, the HTLC methods can save up to 90% of the methanol required in the HPLC mobile phase. However, the SBWC methods developed in this project completely eliminated the use of toxic organic solvents required in the HPLC mobile phase, thus saving a significant amount of money and making the environment greener. Although both homemade and commercial systems can accomplish SBWC separations, the SBWC methods using the commercial system for preservative analysis are recommended for industrial applications because they can be directly applied in industrial plant settings. © 2012 The Authors ICS © 2012 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Influence of Installation Errors On the Output Data of the Piezoelectric Vibrations Transducers
NASA Astrophysics Data System (ADS)
Kozuch, Barbara; Chelmecki, Jaroslaw; Tatara, Tadeusz
2017-10-01
The paper examines the influence of installation errors of piezoelectric vibration transducers on the output data. PCB Piezotronics piezoelectric accelerometers were used to perform calibrations by comparison. The measurements were performed with a TMS 9155 Calibration Workstation version 5.4.0 at frequencies in the range of 5 Hz to 2000 Hz. Accelerometers were fixed on the calibration station in a so-called back-to-back configuration in accordance with the applicable international standard, ISO 16063-21: Methods for the calibration of vibration and shock transducers - Part 21: Vibration calibration by comparison to a reference transducer. The first accelerometer was calibrated by suitable methods with traceability to a primary reference transducer. Each subsequent calibration was performed after changing one setting relative to the original calibration. The alterations represented negligence and failures with respect to the above-mentioned standard and operating guidelines, e.g. the sensor was not tightened or the appropriate coupling substance was not applied. The mounting method specified in the standard's requirements was also modified. Different kinds of wax, light oil, grease and other assembly methods were used. The aim of the study was to verify the significance of the standard's requirements and to estimate their validity. The authors also wanted to highlight the most significant calibration errors. Moreover, the relationship between the various appropriate mounting methods was demonstrated.
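In a comparison calibration of this kind, the unknown sensitivity follows from the reference sensitivity and the ratio of the two output voltages at each frequency, and mounting errors typically show up as frequency-dependent deviations. The sketch below illustrates that arithmetic; it is not the workstation's algorithm, and all numerical values are purely illustrative.

```python
import numpy as np

def calibrate_by_comparison(freqs_hz, v_dut, v_ref, s_ref):
    """Back-to-back comparison calibration in sketch form:
    with both transducers sensing the same motion,
    S_dut(f) = S_ref(f) * V_dut(f) / V_ref(f)."""
    s_dut = s_ref * v_dut / v_ref
    deviation_pct = 100.0 * (s_dut / np.median(s_dut) - 1.0)  # flatness over frequency
    return s_dut, deviation_pct

freqs = np.array([5.0, 50.0, 500.0, 2000.0])      # Hz
s_ref = np.full_like(freqs, 10.0)                 # mV/(m/s^2), reference transducer
v_ref = np.array([49.8, 50.1, 50.0, 49.5])        # mV, reference readings
v_dut = np.array([101.0, 100.3, 99.6, 95.0])      # mV, sensor under test
s_dut, dev = calibrate_by_comparison(freqs, v_dut, v_ref, s_ref)
print(np.round(s_dut, 2), np.round(dev, 1))       # a poor mount tends to show at high f
```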
Robust diffraction correction method for high-frequency ultrasonic tissue characterization
NASA Astrophysics Data System (ADS)
Raju, Balasundar
2004-05-01
The computation of quantitative ultrasonic parameters such as the attenuation or backscatter coefficient requires compensation for diffraction effects. In this work a simple and accurate diffraction correction method for skin characterization requiring only a single focal zone is developed. The advantage of this method is that the transducer need not be mechanically repositioned to collect data from several focal zones, thereby reducing the time of imaging and preventing motion artifacts. Data were first collected under controlled conditions from skin of volunteers using a high-frequency system (center frequency=33 MHz, BW=28 MHz) at 19 focal zones through axial translation. Using these data, mean backscatter power spectra were computed as a function of the distance between the transducer and the tissue, which then served as empirical diffraction correction curves for subsequent data. The method was demonstrated on patients patch-tested for contact dermatitis. The computed attenuation coefficient slope was significantly (p<0.05) lower at the affected site (0.13+/-0.02 dB/mm/MHz) compared to nearby normal skin (0.2+/-0.05 dB/mm/MHz). The mean backscatter level was also significantly lower at the affected site (6.7+/-2.1 in arbitrary units) compared to normal skin (11.3+/-3.2). These results show diffraction corrected ultrasonic parameters can differentiate normal from affected skin tissues.
Heuristic Modeling for TRMM Lifetime Predictions
NASA Technical Reports Server (NTRS)
Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.
1996-01-01
Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude constrained, Earth orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use by a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
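The sketch below mirrors the structure of the spreadsheet model in code: a small look-up table of maneuvers per month, interpolated in solar flux and ballistic coefficient, feeds a simple rocket-equation engine model. Every number in the table and every engine parameter is a placeholder for illustration, not a mission value.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative look-up table: maneuvers per month vs. solar flux index (F10.7)
# and ballistic coefficient. The real table would come from short runs of a
# traditional mission-analysis tool; these numbers are made up.
flux = np.array([70.0, 130.0, 200.0])            # F10.7
bc = np.array([50.0, 100.0])                     # kg/m^2
maneuvers_per_month = np.array([[1.0, 0.6],
                                [2.5, 1.5],
                                [5.0, 3.0]])
freq = RegularGridInterpolator((flux, bc), maneuvers_per_month)

def fuel_per_month(f107, ballistic_coeff, dv_per_maneuver, mass, isp=220.0, g0=9.81):
    """Monthly fuel use from the maneuver-frequency table and a simple
    rocket-equation engine model (a stand-in for the paper's engine model)."""
    n = float(freq([[f107, ballistic_coeff]])[0])
    dm = mass * (1.0 - np.exp(-dv_per_maneuver / (isp * g0)))  # kg per maneuver
    return n * dm

print(round(fuel_per_month(150.0, 80.0, dv_per_maneuver=0.5, mass=3500.0), 3), "kg/month")
```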
Determining significant material properties: A discovery approach
NASA Technical Reports Server (NTRS)
Karplus, Alan K.
1992-01-01
The following is a laboratory experiment designed to further understanding of materials science. The experiment itself can be informative for persons of any age past elementary school, and even for some in elementary school. The preparation of the plastic samples is readily accomplished by persons with reasonable dexterity in the cutting of paper designs. The completion of the statistical Design of Experiments, which uses Yates' Method, requires basic math (addition and subtraction). Interpretive work requires plotting of data and making observations. Knowledge of statistical methods would be helpful. The purpose of this experiment is to acquaint students with the seven classes of recyclable plastics, and provide hands-on learning about the response of these plastics to mechanical tensile loading.
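Yates' Method reduces the analysis of a full 2^k factorial experiment to repeated pairwise sums and differences of the responses taken in standard order. The sketch below carries out that bookkeeping; the 2^2 example responses are invented solely to show the arithmetic.

```python
def yates(responses):
    """Yates' method for a full 2^k factorial design.

    `responses` must be in standard (Yates) order. After k passes of pairwise
    sums and differences, entry 0 holds the grand total; the remaining entries
    divided by 2**(k-1) are the estimated main effects and interactions.
    """
    n = len(responses)
    k = n.bit_length() - 1
    assert 2 ** k == n, "needs 2^k responses"
    col = list(responses)
    for _ in range(k):
        sums = [col[i] + col[i + 1] for i in range(0, n, 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, n, 2)]
        col = sums + diffs
    mean = col[0] / n
    effects = [c / 2 ** (k - 1) for c in col[1:]]
    return mean, effects

# 2^2 example (factors A, B) with responses in standard order: (1), a, b, ab
mean, effects = yates([20.0, 30.0, 25.0, 45.0])
print(mean, effects)   # effects listed as [A, B, AB]
```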
Basic materials and structures aspects for hypersonic transport vehicles (HTV)
NASA Astrophysics Data System (ADS)
Steinheil, E.; Uhse, W.
A Mach 5 transport design is used to illustrate structural concepts and criteria for materials selections and also key technologies that must be followed in the areas of computational methods, materials and construction methods. Aside from the primary criteria of low weight, low costs, and conceivable risks, a number of additional requirements must be met, including stiffness and strength, corrosion resistance, durability, and a construction adequate for inspection, maintenance and repair. Current aircraft construction requirements are significantly extended for hypersonic vehicles. Additional consideration is given to long-duration temperature resistance of the airframe structure, the integration of large-volume cryogenic fuel tanks, computational tools, structural design, polymer matrix composites, and advanced manufacturing technologies.
Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-11-01
Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
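The per-patient comparisons above rely on the Wilcoxon signed rank test for paired measurements. The sketch below shows how such a paired comparison is run; the timing values are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired per-patient comparison of two registration modes; values are
# illustrative placeholders, only the test procedure is the point.
time_positioning = np.array([11, 9, 14, 12, 10, 16, 8, 13])   # seconds, method A
time_sweeping = np.array([32, 28, 35, 30, 33, 38, 27, 31])    # seconds, method B

stat, p = wilcoxon(time_positioning, time_sweeping)
print(f"Wilcoxon W = {stat}, p = {p:.4f}")
```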
Belci, D; Kos, M; Zoricić, D; Kuharić, L; Slivar, A; Begić-Razem, E; Grdinić, I
2007-06-01
The aim of this study was to evaluate the advantages of the Misgav Ladach surgical technique compared to traditional cesarean section. A prospective randomized trial of 111 women undergoing cesarean section was carried out in the Pula General Hospital. Forty-nine operations were performed using the Pfannenstiel method of cesarean section, 55 by the Misgav Ladach method and 7 by lower midline laparotomy. Compared with the Pfannenstiel method, the Misgav Ladach method showed a significantly shorter delivery/extraction and operative time (P=0.0009), significantly lower incision pain on the second postoperative day (P=0.021), a quicker time to standing up and walking (P=0.013), significantly fewer analgesic injections and a shorter duration of analgesia (P=0.0009), and an earlier return of normal bowel function (P=0.001). The Misgav Ladach method of cesarean section has advantages over the Pfannenstiel method insofar as it is significantly quicker to perform, with diminished postoperative pain and less use of postoperative analgesics. The recovery of physiologic function is faster. No differences were found in intraoperative bleeding, maternal morbidity, scar appearance, uterus postoperative involution and the assessment of the inflammation response to the operative technique.
Maneuver Planning for Conjunction Risk Mitigation with Ground-track Control Requirements
NASA Technical Reports Server (NTRS)
McKinley, David
2008-01-01
The planning of conjunction Risk Mitigation Maneuvers (RMM) in the presence of ground-track control requirements is analyzed. Past RMM planning efforts on the Aqua, Aura, and Terra spacecraft have demonstrated that only small maneuvers are available when ground-track control requirements are maintained. Assuming small maneuvers, analytical expressions for the effect of a given maneuver on conjunction geometry are derived. The analytical expressions are used to generate a large trade space for initial RMM design. This trade space represents a significant improvement in initial maneuver planning over existing methods that employ high fidelity maneuver models and propagation.
Reinstein, A; Bayou, M E
1994-10-01
The Financial Accounting Standards Board (FASB) recently issued a new statement that requires all companies to change their methods of accounting for debt and equity securities. Rather than allowing organizations to use a historical cost approach in accounting for such financial instruments, FASB Statement No. 115 requires organizations to adopt a market value approach. The provisions of this statement will affect significantly organizations in the healthcare industry that have large investment portfolios.
Geometric multigrid for an implicit-time immersed boundary method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.
2014-10-12
The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.
Joint Optics Structures Experiment (JOSE)
NASA Technical Reports Server (NTRS)
Founds, David
1987-01-01
The objective of the JOSE program is to develop, demonstrate, and evaluate active vibration suppression techniques for Directed Energy Weapons (DEW). DEW system performance is highly influenced by the line-of-sight (LOS) stability and in some cases by the wave front quality. The missions envisioned for DEW systems by the Strategic Defense Initiative require LOS stability and wave front quality to be significantly improved over any currently demonstrated capability. The Active Control of Space Structures (ACOSS) program led to the development of a number of promising structural control techniques. DEW structures are vastly more complex than any structures controlled to date. They will be subject to disturbances with significantly higher magnitudes and wider bandwidths, while holding tighter tolerances on allowable motions and deformations. Meeting the performance requirements of the JOSE program requires upgrading the ACOSS techniques to meet new, more stringent requirements, the development of requisite sensors and actuators, improved control processors, highly accurate system identification methods, and the integration of hardware and methodologies into a successful demonstration.
Tsukasaki, Wakako; Maruyama, Jun-Ichi; Kitamoto, Katsuhiko
2014-01-01
Hyphal fusion is involved in the formation of an interconnected colony in filamentous fungi, and it is the first process in sexual/parasexual reproduction. However, it was difficult to evaluate hyphal fusion efficiency due to the low frequency in Aspergillus oryzae in spite of its industrial significance. Here, we established a method to quantitatively evaluate the hyphal fusion ability of A. oryzae with mixed culture of two different auxotrophic strains, where the ratio of heterokaryotic conidia growing without the auxotrophic requirements reflects the hyphal fusion efficiency. By employing this method, it was demonstrated that AoSO and AoFus3 are required for hyphal fusion, and that hyphal fusion efficiency of A. oryzae was increased by depleting nitrogen source, including large amounts of carbon source, and adjusting pH to 7.0.
A New Calibration Method for Commercial RGB-D Sensors
Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu
2017-01-01
Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges. PMID:28538695
Technique for Very High Order Nonlinear Simulation and Validation
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2001-01-01
Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high order nonlinear methods. Methods with up to 15th order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness exponentially decays unless higher accuracy is used. It is concluded that at least a 7th order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of 5 wavelengths while maintaining an overall error tolerance that is low enough to capture both the mean flow and the acoustics.
NASA Technical Reports Server (NTRS)
Ashley, R. P. (Principal Investigator); Goetz, A. F. H.; Rowan, L. C.; Abrams, M. J.
1979-01-01
The author has identified the following significant results. LANDSAT images enhanced by the band-ratioing method can be used for reconnaissance alteration mapping in moderately heavily vegetated semiarid terrain as well as in the sparsely vegetated to semiarid terrain where the technique was originally developed. Significant vegetation cover in a scene, however, requires the use of MSS ratios 4/5, 4/6, and 6/7 rather than 4/5, 5/6, and 6/7, and requires careful interpretation of the results. Supplemental information suitable for vegetation identification and cover estimates, such as standard LANDSAT false-color composites and low altitude aerial photographs of selected areas, is desirable.
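The sketch below builds the MSS 4/5, 4/6, 6/7 ratio composite mentioned above from co-registered band arrays and stretches each ratio for display. The percentile stretch and the synthetic input grids are illustrative choices, not part of the original processing chain.

```python
import numpy as np

def band_ratio_composite(mss4, mss5, mss6, mss7, eps=1e-6):
    """Ratio images for alteration mapping under significant vegetation cover
    (MSS 4/5, 4/6, 6/7), stretched to 0-255 for a false-color composite.
    Band arrays are reflectance or DN grids of equal shape; sketch only."""
    def stretch(r):
        lo, hi = np.percentile(r, (2, 98))
        return np.clip(255 * (r - lo) / (hi - lo + eps), 0, 255).astype(np.uint8)
    r45 = stretch(mss4 / (mss5 + eps))
    r46 = stretch(mss4 / (mss6 + eps))
    r67 = stretch(mss6 / (mss7 + eps))
    return np.dstack([r45, r46, r67])             # RGB composite

rng = np.random.default_rng(0)
bands = [rng.uniform(10, 200, size=(64, 64)) for _ in range(4)]
composite = band_ratio_composite(*bands)
```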
A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model
USDA-ARS?s Scientific Manuscript database
Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
Test Results for Entry Guidance Methods for Space Vehicles
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.
2004-01-01
There are a number of approaches to advanced guidance and control that have the potential for achieving the goals of significantly increasing reusable launch vehicle (or any space vehicle that enters an atmosphere) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future vehicle concepts.
Test Results for Entry Guidance Methods for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.
2003-01-01
There are a number of approaches to advanced guidance and control (AG&C) that have the potential for achieving the goals of significantly increasing reusable launch vehicle (RLV) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies (ITAGCT) has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future reusable vehicle concepts.
A Data-Centric Strategy for Modern Biobanking.
Quinlan, Philip R; Gardner, Stephen; Groves, Martin; Emes, Richard; Garibaldi, Jonathan
2015-01-01
Biobanking has been in existence for many decades and over that time has developed significantly. Biobanking originated from a need to collect, store and make available biological samples for a range of research purposes. It has changed as the understanding of biological processes has increased and new sample handling techniques have been developed to ensure samples were fit-for-purpose. As a result of these developments, modern biobanking is now facing two substantial new challenges. Firstly, new research methods such as next generation sequencing can generate datasets at a vastly greater scale and resolution than previous methods. Secondly, as the understanding of diseases increases, researchers require a far richer data set about the donors from whom the samples originate. To retain a sample-centric strategy in a research environment that is increasingly dictated by data will place a biobank at a significant disadvantage and may even result in the samples collected going unused. Biobanking is therefore required to shift its strategic focus from a sample-dominated perspective to a data-centric strategy.
Russell, Jeffrey A; Shave, Ruth M; Kruse, David W; Nevill, Alan M; Koutedakis, Yiannis; Wyon, Matthew A
2011-06-01
Female ballet dancers require extreme ankle motion to attain the demi-plié (weight-bearing full dorsiflexion [DF]) and en pointe (weight-bearing full plantar flexion [PF]) positions of ballet. However, techniques for assessing this amount of motion have not yet received sufficient scientific scrutiny. Therefore, the purpose of this study was to examine possible differences between weight-bearing goniometric and radiographic ankle range of motion measurements in female ballet dancers. Ankle range of motion in 8 experienced female ballet dancers was assessed by goniometry and 2 radiographic measurement methods. The latter were performed on 3 mediolateral x-rays, in demi-plié, neutral, and en pointe positions; one of them used the same landmarks as goniometry. DF values were not significantly different among the methods, but PF values were (P < .05). Not only was PF of the talocrural joint significantly less than the other 2 measurements (P < .001), PF from the goniometric method applied to the x-rays was significantly less than PF obtained from clinical goniometry (P < .05). These data provide insight into the extreme ankle and foot motion, particularly PF, required in female ballet dancers and suggest that goniometry may not be ideal for assessing ankle range of motion in these individuals. Therefore, further research is needed to standardize how DF and PF are measured in ballet dancers. Diagnostic, Level I.
A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics
NASA Astrophysics Data System (ADS)
Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.
2017-03-01
Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.
Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that requires load balancing.
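As context for the Alias Method named above, the classic alias-table construction (Walker's method, in Vose's O(n) form) assigns each of n slots probability mass from at most two source entries; the load-balancing scheme exploits an analogous property so that each process receives at most one message. The Python sketch below shows only the standard alias-table sampling idea, not the parallel load-balancing adaptation itself.

```python
import random

def build_alias_table(weights):
    """Vose's alias method: O(n) preprocessing for O(1) sampling
    from a discrete distribution proportional to `weights`."""
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]           # give the deficit of slot s to entry l
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                    # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias):
    """Draw one index: pick a slot uniformly, then keep it or take its alias."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

prob, alias = build_alias_table([0.1, 0.4, 0.3, 0.2])
print([sample(prob, alias) for _ in range(10)])
```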
Analysis of food taints and off-flavours: a review.
Ridgway, K; Lalljie, S P D; Smith, R M
2010-02-01
Taints and off-flavours in foods are a major concern to the food industry. Identification of the compound(s) causing a taint or off-flavour in food and accurate quantification are critical in assessing the potential safety risks of a product or ingredient. Even when the tainting compound(s) are not at a level that would cause a safety concern, taints and off-flavours can have a significant impact on the quality and consumers' acceptability of products. The analysis of taints and off-flavour compounds presents an analytical challenge especially in an industrial laboratory environment because of the low levels, often complex matrices and potential for contamination from external laboratory sources. This review gives an outline of the origins of chemical taints and off-flavours and looks at the methods used for analysis and the merits and drawbacks of each technique. Extraction methods and instrumentation are covered along with possible future developments. Generic screening methods currently lack the sensitivity required to detect the low levels required for some tainting compounds and a more targeted approach is often required. This review highlights the need for a rapid but sensitive universal method of extraction for the unequivocal determination of tainting compounds in food.
Ultraviolet-C Irradiation: A Novel Pasteurization Method for Donor Human Milk
Christen, Lukas; Lai, Ching Tat; Hartmann, Ben; Hartmann, Peter E.; Geddes, Donna T.
2013-01-01
Background Holder pasteurization (milk held at 62.5°C for 30 minutes) is the standard treatment method for donor human milk. Although this method of pasteurization is able to inactivate most bacteria, it also inactivates important bioactive components. Therefore, the objective of this study was to investigate ultraviolet irradiation as an alternative treatment method for donor human milk. Methods Human milk samples were inoculated with five species of bacteria and then UV-C irradiated. Untreated and treated samples were analysed for bacterial content, bile salt stimulated lipase (BSSL) activity, alkaline phosphatase (ALP) activity, and fatty acid profile. Results All five species of bacteria reacted similarly to UV-C irradiation, with higher dosages being required with increasing concentrations of total solids in the human milk sample. The decimal reduction dosage was 289±17 and 945±164 J/l for total solids of 107 and 146 g/l, respectively. No significant changes in the fatty acid profile, BSSL activity or ALP activity were observed up to the dosage required for a 5-log10 reduction of the five species of bacteria. Conclusion UV-C irradiation is capable of reducing vegetative bacteria in human milk to the requirements of milk bank guidelines with no loss of BSSL and ALP activity and no change of FA. PMID:23840820
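As a rough illustration of how a decimal reduction dosage (the UV-C dose producing a 1-log10 reduction) of the kind reported above can be estimated, the sketch below fits a log-linear survival curve to dose-response data; all numbers are hypothetical and are not taken from the study.

```python
import numpy as np

# Hypothetical UV-C dose-response data: dose (J/L) vs. surviving counts (CFU/mL)
dose = np.array([0, 250, 500, 750, 1000])
cfu = np.array([1e6, 1.4e5, 2.0e4, 2.8e3, 4.0e2])

slope, intercept = np.polyfit(dose, np.log10(cfu), 1)
D = -1.0 / slope              # dose for a 1-log10 (90%) reduction
dose_5log = 5 * D             # dose for a 5-log10 reduction
print(f"D-value ~ {D:.0f} J/L, 5-log10 dose ~ {dose_5log:.0f} J/L")
```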
Efficient calculation of the polarizability: a simplified effective-energy technique
NASA Astrophysics Data System (ADS)
Berger, J. A.; Reining, L.; Sottile, F.
2012-09-01
In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-state methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.
Scandurra, Isabella; Hägglund, Maria; Koch, Sabine
2008-01-01
A significant problem with current health information technologies is that they poorly support the collaborative work of healthcare professionals, sometimes leading to fragmentation of workflow and disruption of healthcare processes. This paper presents two homecare cases, both applying multi-disciplinary thematic seminars (MdTS) as a collaborative method for user needs elicitation and requirements specification. The study describes how MdTS was applied in the two cases to elicit user needs from different professional perspectives so that the resulting requirements coincide with collaborative work practices. Despite different objectives, the two cases validated that MdTS emphasized the "points of intersection" in cooperative work. Different user groups with similar, yet distinct, needs reached a common understanding of the entire work process, agreed upon requirements and participated in the design of prototypes supporting cooperative work. MdTS was applicable in both exploratory and normative studies aiming to elicit the specific requirements in a cooperative environment.
Using the MDCT thick slab MinIP method for the follow-up of pulmonary emphysema.
Lan, Hai; Nishitani, Hiromu; Nishihara, Sadamitsu; Ueno, Junji; Takao, Shoichiro; Iwamoto, Seiji; Kawanaka, Takashi; Mahmut, Mawlan; Qingge, Si
2011-08-01
The purpose of this study was to evaluate the usefulness of thick slab minimum intensity projection (MinIP) as a follow-up method in patients with pulmonary emphysema. This method was used to determine the presence or absence of changes over time in the lung field based on multi-detector-row CT (MDCT) data. Among patients diagnosed with pulmonary emphysema who underwent 16-MDCT (slice thickness, 1 mm) twice at an interval of 6 months or more, 12 patients without changes in the lung field and 14 with clear changes in the lung field were selected as subjects. An image interpretation experiment was performed by five image interpreters. Pulmonary emphysema was followed up using two types of thick slab MinIP (thick slab MinIP 1 and 2) and multi-planar reformation (MPR), and the results of image interpretation were evaluated by receiver operating characteristic (ROC) analysis. In addition, the time required for image interpretation was compared among the three follow-up methods. The area under the ROC curve (Az) was 0.794 for thick slab MinIP 1, 0.778 for the thick slab MinIP 2, and 0.759 for MPR, showing no significant differences among the three methods. Individual differences in each item were significantly more marked for MPR than for thick slab MinIP. The time required for image interpretation was around 18 seconds for thick slab MinIP 1, 11 seconds for thick slab MinIP 2, and approximately 127 seconds for MPR, showing significant differences among the three methods. There were no significant differences in the results of image interpretation regarding the presence or absence of changes in the lung fields between thick slab MinIP and MPR. However, thick slab MinIP showed a shorter image interpretation time and smaller individual differences in the results among image interpreters than MPR, suggesting the usefulness of this method for determining the presence or absence of changes with time in the lung fields of patients with pulmonary emphysema.
Mueller, Sherry A; Anderson, James E; Kim, Byung R; Ball, James C
2009-04-01
Effective bacterial control in cooling-tower systems requires accurate and timely methods to count bacteria. Plate-count methods are difficult to implement on-site, because they are time- and labor-intensive and require sterile techniques. Several field-applicable methods (dipslides, Petrifilm, and adenosine triphosphate [ATP] bioluminescence) were compared with the plate count for two sample matrices--phosphate-buffered saline solution containing a pure culture of Pseudomonas fluorescens and cooling-tower water containing an undefined mixed bacterial culture. For the pure culture, (1) counts determined on nutrient agar and plate-count agar (PCA) media and expressed as colony-forming units (CFU) per milliliter were equivalent to those on R2A medium (p = 1.0 and p = 1.0, respectively); (2) Petrifilm counts were not significantly different from R2A plate counts (p = 0.99); (3) the dipslide counts were up to 2 log units higher than R2A plate counts, but this discrepancy was not statistically significant (p = 0.06); and (4) a discernable correlation (r2 = 0.67) existed between ATP readings and plate counts. For cooling-tower water samples (n = 62), (1) bacterial counts using R2A medium were higher (but not significant; p = 0.63) than nutrient agar and significantly higher than tryptone-glucose yeast extract (TGE; p = 0.03) and PCA (p < 0.001); (2) Petrifilm counts were significantly lower than nutrient agar or R2A (p = 0.02 and p < 0.001, respectively), but not statistically different from TGE, PCA, and dipslides (p = 0.55, p = 0.69, and p = 0.91, respectively); (3) the dipslide method yielded bacteria counts 1 to 3 log units lower than nutrient agar and R2A (p < 0.001), but was not significantly different from Petrifilm (p = 0.91), PCA (p = 1.00) or TGE (p = 0.07); (4) the differences between dipslides and the other methods became greater with a 6-day incubation time; and (5) the correlation between ATP readings and plate counts varied from system to system, was poor (r2 values ranged from < 0.01 to 0.47), and the ATP method was not sufficiently sensitive to measure counts below approximately 10(4) CFU/mL.
Assessing Backwards Integration as a Method of KBO Family Finding
NASA Astrophysics Data System (ADS)
Benfell, Nathan; Ragozzine, Darin
2018-04-01
The age of young asteroid collisional families can sometimes be determined by using backwards n-body integrations of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual specific asteroids over time. Since these limitations are not as important for objects in the Kuiper belt, Marcus et al. 2011 suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. But various challenges present themselves when running precise and accurate 4+ Gyr integrations of Kuiper Belt objects. We have created simulated families of Kuiper Belt Objects with identical starting locations and velocity distributions, based on the Haumea Family. We then ran several long-term test integrations to observe the effect of various simulation parameters on integration results. These integrations were then used to investigate which parameters are of enough significance to require inclusion in the integration. Thereby we determined how to construct long-term integrations that both yield significant results and require manageable processing power. Additionally, we have tested the use of backwards integration as a method of discovery of potential young families in the Kuiper Belt.
Advanced Guidance and Control Methods for Reusable Launch Vehicles: Test Results
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.; Krupp, Don R.; Fogle, Frank R. (Technical Monitor)
2002-01-01
There are a number of approaches to advanced guidance and control (AG&C) that have the potential for achieving the goals of significantly increasing reusable launch vehicle (RLV) safety/reliability and reducing the cost. In this paper, we examine some of these methods and compare the results. We briefly introduce the various methods under test, list the test cases used to demonstrate that the desired results are achieved, show an automated test scoring method that greatly reduces the evaluation effort required, and display results of the tests. Results are shown for the algorithms that have entered testing so far.
Improvements to the kernel function method of steady, subsonic lifting surface theory
NASA Technical Reports Server (NTRS)
Medan, R. T.
1974-01-01
The application of a kernel function lifting surface method to three dimensional, thin wing theory is discussed. A technique for determining the influence functions is presented. The technique is shown to require fewer quadrature points, while still calculating the influence functions accurately enough to guarantee convergence with an increasing number of spanwise quadrature points. The method also treats control points on the wing leading and trailing edges. The report introduces and employs an aspect of the kernel function method which apparently has never been used before and which significantly enhances the efficiency of the kernel function approach.
Information Theory for Gabor Feature Selection for Face Recognition
NASA Astrophysics Data System (ADS)
Shen, Linlin; Bai, Li
2006-12-01
A discriminative and robust feature—kernel enhanced informative Gabor feature—is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top performing methods in the 2004 Face Verification Competition (FVC2004), our methods demonstrate a clear advantage over existing methods in accuracy, computation efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol. Significant improvements on three of the test data sets are observed. Compared with the classical Gabor wavelet-based approaches using a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundred features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unifies different Gabor filter definitions and proposes a training sample generation algorithm to reduce the effects caused by the unbalanced number of samples available in different classes.
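A minimal sketch of the general idea of mutual-information-based feature selection followed by a kernel classifier is shown below; it uses scikit-learn's mutual_info_classif and an SVM as stand-ins, does not implement the paper's redundancy handling or its specific Gabor pipeline, and the data are placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

# Placeholder data: rows are face images, columns are Gabor filter responses
X = np.random.rand(200, 5000)            # stand-in for real Gabor features
y = np.random.randint(0, 10, size=200)   # stand-in identity labels

mi = mutual_info_classif(X, y, random_state=0)   # mutual information per feature
top = np.argsort(mi)[::-1][:200]                 # keep the few hundred most informative

clf = SVC(kernel="rbf").fit(X[:, top], y)        # kernel method on the reduced set
print(clf.score(X[:, top], y))
```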
Detecting cis-regulatory binding sites for cooperatively binding proteins
van Oeffelen, Liesbeth; Cornelis, Pierre; Van Delm, Wouter; De Ridder, Fedor; De Moor, Bart; Moreau, Yves
2008-01-01
Several methods are available to predict cis-regulatory modules in DNA based on position weight matrices. However, the performance of these methods generally depends on a number of additional parameters that cannot be derived from sequences and are difficult to estimate because they have no physical meaning. As the best way to detect cis-regulatory modules is the way in which the proteins recognize them, we developed a new scoring method that utilizes the underlying physical binding model. This method requires no additional parameter to account for multiple binding sites; and the only necessary parameters to model homotypic cooperative interactions are the distances between adjacent protein binding sites in basepairs, and the corresponding cooperative binding constants. The heterotypic cooperative binding model requires one more parameter per cooperatively binding protein, which is the concentration multiplied by the partition function of this protein. In a case study on the bacterial ferric uptake regulator, we show that our scoring method for homotypic cooperatively binding proteins significantly outperforms other PWM-based methods where biophysical cooperativity is not taken into account. PMID:18400778
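To illustrate the kind of physical binding model the scoring method builds on, the sketch below computes the equilibrium occupancy of a two-site module from statistical weights with a homotypic cooperativity factor; the binding constants, concentration, and cooperativity value are hypothetical, and the paper's full PWM-based scoring is not reproduced.

```python
def occupancy_two_sites(K1, K2, conc, omega):
    """Equilibrium occupancy of a two-site module for a protein at
    concentration `conc`, with binding constants K1 and K2 and a
    cooperativity factor `omega` for simultaneous occupation."""
    w1 = K1 * conc                 # statistical weight: site 1 bound
    w2 = K2 * conc                 # site 2 bound
    w12 = omega * w1 * w2          # both bound (cooperative term)
    Z = 1.0 + w1 + w2 + w12        # partition function over the four states
    p_any = (w1 + w2 + w12) / Z    # probability the module is occupied at all
    p_both = w12 / Z
    return p_any, p_both

print(occupancy_two_sites(K1=2.0, K2=0.5, conc=1.0, omega=20.0))
```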
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
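The sketch below illustrates only the progressive-sampling idea, evaluating candidate algorithm/hyper-parameter configurations on successively larger subsamples and discarding the weaker half at each stage; it uses a plain successive-halving-style loop rather than the paper's Bayesian optimization, and the models and sample sizes are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=20000, n_features=30, random_state=0)

# Candidate configurations (illustrative only)
candidates = [RandomForestClassifier(n_estimators=n, max_depth=d, random_state=0)
              for n in (50, 200) for d in (5, None)]

rng = np.random.default_rng(0)
for size in (1000, 4000, 16000):                # progressively larger samples
    idx = rng.choice(len(X), size=size, replace=False)
    scores = [cross_val_score(c, X[idx], y[idx], cv=3).mean() for c in candidates]
    order = np.argsort(scores)[::-1]
    keep = max(1, len(candidates) // 2)         # prune the weaker half each round
    candidates = [candidates[i] for i in order[:keep]]

print("selected configuration:", candidates[0])
```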
Selewski, David T.; Cornell, Timothy T.; Lombel, Rebecca M.; Blatt, Neal B.; Han, Yong Y.; Mottes, Theresa; Kommareddi, Mallika; Kershaw, David B.; Shanley, Thomas P.; Heung, Michael
2012-01-01
Purpose In pediatric intensive care unit (PICU) patients, fluid overload (FO) at initiation of continuous renal replacement therapy (CRRT) has been reported to be an independent risk factor for mortality. Previous studies have calculated FO based on daily fluid balance during ICU admission, which is labor intensive and error prone. We hypothesized that a weight-based definition of FO at CRRT initiation would correlate with the fluid balance method and prove predictive of outcome. Methods This is a retrospective single-center review of PICU patients requiring CRRT from July 2006 through February 2010 (n = 113). We compared the degree of FO at CRRT initiation using the standard fluid balance method versus methods based on patient weight changes assessed by both univariate and multivariate analyses. Results The degree of fluid overload at CRRT initiation was significantly greater in nonsurvivors, irrespective of which method was used. The univariate odds ratio for PICU mortality per 1% increase in FO was 1.056 [95% confidence interval (CI) 1.025, 1.087] by the fluid balance method, 1.044 (95% CI 1.019, 1.069) by the weight-based method using PICU admission weight, and 1.045 (95% CI 1.022, 1.07) by the weight-based method using hospital admission weight. On multivariate analyses, all three methods approached significance in predicting PICU survival. Conclusions Our findings suggest that weight-based definitions of FO are useful in defining FO at CRRT initiation and are associated with increased mortality in a broad PICU patient population. This study provides evidence for a more practical weight-based definition of FO that can be used at the bedside. PMID:21533569
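For reference, the two definitions of percent fluid overload compared above reduce to simple arithmetic; the sketch below writes them as small functions, assuming fluid volumes in litres and weights in kilograms, with hypothetical patient values.

```python
def fo_fluid_balance(fluid_in_L, fluid_out_L, icu_admit_weight_kg):
    """Fluid-balance definition: cumulative (in - out) as % of admission weight."""
    return 100.0 * (fluid_in_L - fluid_out_L) / icu_admit_weight_kg

def fo_weight_based(weight_at_crrt_kg, admit_weight_kg):
    """Weight-based definition: % change in body weight since admission."""
    return 100.0 * (weight_at_crrt_kg - admit_weight_kg) / admit_weight_kg

# Hypothetical patient
print(fo_fluid_balance(12.4, 8.1, 25.0))
print(fo_weight_based(27.5, 25.0))
```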
The importance of the keyword-generation method in keyword mnemonics.
Campos, Alfredo; Amor, Angeles; González, María Angeles
2004-01-01
Keyword mnemonics is under certain conditions an effective approach for learning foreign-language vocabulary. It appears to be effective for words with high image vividness but not for words with low image vividness. In this study, two experiments were performed to assess the efficacy of a new keyword-generation procedure (peer generation). In Experiment 1, a sample of 363 high-school students was randomly divided into four groups. The subjects were required to learn L1 equivalents of a list of 16 Latin words (8 with high image vividness, 8 with low image vividness), using a) the rote method, or the keyword method with b) keywords and images generated and supplied by the experimenter, c) keywords and images generated by themselves, or d) keywords and images previously generated by peers (i.e., subjects with similar sociodemographic characteristics). Recall was tested immediately and one week later. For high-vividness words, recall was significantly better in the keyword groups than in the rote method group. For low-vividness words, learning method had no significant effect. Experiment 2 was basically identical, except that the word lists comprised 32 words (16 high-vividness, 16 low-vividness). In this experiment, the peer-generated-keyword group showed significantly better recall of high-vividness words than the rote method group and the subject-generated-keyword group; again, however, learning method had no significant effect on recall of low-vividness words.
NASA Technical Reports Server (NTRS)
Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.
2017-01-01
The Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP) and the Behavioral Health and Performance (BHP) Element are conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within the volume. NASA needs methods to unobtrusively collect NHV data without impacting crew time. Data required include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments such methods exist, yet many are obtrusive and require significant post-processing. Examples used in terrestrial settings include infrared (IR) retro-reflective marker-based motion capture, GPS sensor tracking, inertial tracking, and multi-camera methods. Due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. However, multiple technologies have not been applied to space operations for these purposes. Two of these are 3D Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems that allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect, or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).
Krishtop, Victor; Doronin, Ivan; Okishev, Konstantin
2012-11-05
Photon correlation spectroscopy is an effective method for measuring nanoparticle sizes and has several advantages over alternative methods. However, this method suffers from a disadvantage in that its measuring accuracy reduces in the presence of convective flows of fluid containing nanoparticles. In this paper, we propose a scheme based on attenuated total reflectance in order to reduce the influence of convection currents. The autocorrelation function for the light-scattering intensity was found for this case, and it was shown that this method afforded a significant decrease in the time required to measure the particle sizes and an increase in the measuring accuracy.
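As an illustration of how a particle size follows from a measured correlation decay in photon correlation spectroscopy, the sketch below combines the scattering vector with the Stokes-Einstein relation; it uses the standard free-space scattering geometry rather than the paper's attenuated-total-reflectance configuration, and the decay rate is a hypothetical measured value.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K
T = 298.15                 # temperature, K
eta = 0.89e-3              # viscosity of water at 25 C, Pa*s
n = 1.33                   # refractive index of water
lam = 532e-9               # laser wavelength in vacuum, m
theta = np.deg2rad(90.0)   # scattering angle

q = 4 * np.pi * n / lam * np.sin(theta / 2)    # scattering vector magnitude, 1/m

gamma = 2.0e3              # hypothetical measured decay rate of the field correlation, 1/s
D = gamma / q**2           # translational diffusion coefficient, m^2/s
r = kB * T / (6 * np.pi * eta * D)             # hydrodynamic radius (Stokes-Einstein)
print(f"hydrodynamic radius ~ {r * 1e9:.1f} nm")
```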
Odgaard, Eric C; Fowler, Robert L
2010-06-01
In 2005, the Journal of Consulting and Clinical Psychology (JCCP) became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest editorial effort to improve statistical reporting practices in any APA journal in at least a decade, in this article we investigate the efficacy of that change. All intervention studies published in JCCP in 2003, 2004, 2007, and 2008 were reviewed. Each article was coded for method of clinical significance, type of ES, and type of associated CI, broken down by statistical test (F, t, chi-square, r/R(2), and multivariate modeling). By 2008, clinical significance compliance was 75% (up from 31%), with 94% of studies reporting some measure of ES (reporting improved for individual statistical tests ranging from eta(2) = .05 to .17, with reasonable CIs). Reporting of CIs for ESs also improved, although only to 40%. Also, the vast majority of reported CIs used approximations, which become progressively less accurate for smaller sample sizes and larger ESs (cf. Algina & Kessleman, 2003). Changes are near asymptote for ESs and clinical significance, but CIs lag behind. As CIs for ESs are required for primary outcomes, we show how to compute CIs for the vast majority of ESs reported in JCCP, with an example of how to use CIs for ESs as a method to assess clinical significance.
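One way to obtain an exact rather than approximate confidence interval for an effect size is to invert the noncentral t distribution; the sketch below does this for an independent-groups Cohen's d using SciPy, under the usual equal-variance assumptions. It is a generic illustration, not the specific procedure used in the reviewed articles.

```python
from scipy.stats import nct
from scipy.optimize import brentq

def cohens_d_ci(d, n1, n2, alpha=0.05):
    """Exact CI for an independent-groups Cohen's d via the noncentral t
    distribution (rather than a normal approximation)."""
    scale = (n1 * n2 / (n1 + n2)) ** 0.5
    t_obs = d * scale
    df = n1 + n2 - 2
    # Find noncentrality parameters whose tails just cover the observed t
    lo = brentq(lambda nc: nct.cdf(t_obs, df, nc) - (1 - alpha / 2), -50, 50)
    hi = brentq(lambda nc: nct.cdf(t_obs, df, nc) - alpha / 2, -50, 50)
    return lo / scale, hi / scale

print(cohens_d_ci(d=0.5, n1=40, n2=40))
```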
Space Suit Joint Torque Measurement Method Validation
NASA Technical Reports Server (NTRS)
Valish, Dana; Eversley, Karina
2012-01-01
In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.
Site-specific gene transfer into the rat spinal cord by photomechanical waves
NASA Astrophysics Data System (ADS)
Ando, Takahiro; Sato, Shunichi; Toyooka, Terushige; Uozumi, Yoichi; Nawashiro, Hiroshi; Ashida, Hiroshi; Obara, Minoru
2011-10-01
Nonviral, site-specific gene delivery to deep tissue is required for gene therapy of a spinal cord injury. However, an efficient method satisfying these requirements has not been established. This study demonstrates efficient and targeted gene transfer into the spinal cord by using photomechanical waves (PMWs), which were generated by irradiating a black laser absorbing rubber with 532-nm nanosecond Nd:YAG laser pulses. After a solution of plasmid DNA coding for enhanced green fluorescent protein (EGFP) or luciferase was intraparenchymally injected into the spinal cord, PMWs were applied to the target site. In the PMW application group, we observed significant EGFP gene expression in the white matter and remarkably high luciferase activity only in the spinal cord segment exposed to the PMWs. We also assessed hind limb movements 24 h after the application of PMWs based on the Basso-Beattie-Bresnahan (BBB) score to evaluate the noninvasiveness of this method. Locomotor evaluation showed no significant decrease in BBB score under optimum laser irradiation conditions. These findings demonstrated that exogenous genes can be efficiently and site-selectively delivered into the spinal cord by applying PMWs without significant locomotive damage.
Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign
2007-01-01
Background Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign while simultaneously providing a small improvement in the structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly fewer computational resources. Conclusion Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer length sequences. The revised Dynalign code is freely available for download. PMID:17445273
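A minimal sketch of the constraint-generation step described above is shown below: alignment and insertion posteriors are combined additively into co-incidence probabilities and thresholded to obtain the allowed nucleotide position pairs. The posterior matrices here are random placeholders; in the actual method they come from the forward-backward algorithm of the hidden Markov model.

```python
import numpy as np

# Placeholder HMM posteriors for two sequences of lengths m and n:
# p_align[i, j] = P(position i is aligned to position j)
# p_ins1[i, j], p_ins2[i, j] = insertion posteriors associated with the pair (i, j)
m, n = 120, 118
rng = np.random.default_rng(1)
p_align, p_ins1, p_ins2 = (rng.random((m, n)) * 1e-2 for _ in range(3))

co_incidence = p_align + p_ins1 + p_ins2     # additive combination, as in the method
threshold = 1e-2
allowed = co_incidence >= threshold          # boolean mask of permitted (i, j) pairs

print(f"{allowed.mean():.1%} of nucleotide position pairs retained")
```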
O'Gorman, Thomas W
2018-05-01
In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
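The underlying idea of obtaining confidence limits from tests of significance can be illustrated by test inversion: the interval is the set of null values not rejected at level alpha, and a root finder locates its endpoints with only a handful of test evaluations. The sketch below uses an ordinary one-sample t-test for illustration, not the adaptive test or the Robbins-Monro search discussed in the abstract.

```python
import numpy as np
from scipy.stats import ttest_1samp
from scipy.optimize import brentq

def ci_by_test_inversion(x, alpha=0.05):
    """CI for the mean as the set of null values mu0 not rejected at level alpha.
    Each evaluation is one significance test; a root finder locates the limits."""
    def pval_minus_alpha(mu0):
        return ttest_1samp(x, popmean=mu0).pvalue - alpha
    xbar, half_width = np.mean(x), 10 * np.std(x, ddof=1)
    lower = brentq(pval_minus_alpha, xbar - half_width, xbar)
    upper = brentq(pval_minus_alpha, xbar, xbar + half_width)
    return lower, upper

rng = np.random.default_rng(0)
print(ci_by_test_inversion(rng.normal(5.0, 2.0, size=30)))
```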
Tancheva, D.; Arabadziev, J.; Gergov, G.; Lachev, N.; Todorova, S.; Hristova, A.
2005-01-01
Summary Severe burn injuries give rise to an extreme state of physiological stress. No other trauma results in such an accelerated rate of tissue catabolism, loss of lean body mass, and depletion of energy and protein reserves. Heightened attention to energy needs is essential, and adequate nutritional support is of major importance in the complex management of patients with major burns. The purpose of this study is to compare the results obtained by three of the most popular methods of estimating energy requirements in severely burned adult patients with measurements of resting energy expenditure (REE) by indirect calorimetry (IC). A prospective study was carried out of 20 patients (male/female ratio, 17/3; mean age, 37.83 ± 10.86 yr), without accompanying morbidities, with burn injuries covering a mean body surface area of 34.27 ± 11.55% and a mean abbreviated burn severity index of 7.44 ± 1.58. During the first 30 days after trauma, the energy requirements were estimated using the Curreri, Long, and Toronto formulas. Twice-weekly measurements of REE by IC were obtained. It was found that the Curreri and Long formulas overestimated the energy requirements in severely burned patients, as found by other investigators. However, no significant difference was found between the daily energy requirements calculated by the Toronto formula and the measured REE values by IC. It is concluded that the Toronto formula can be used as an alternative method for estimating the energy requirements of patients with major burns in cases where IC is not available or not applicable. PMID:21990973
Liu, Jianhua; Yu, Xiaojun; Xia, Meng; Cai, Hong; Cheng, Guixue; Wu, Lina; Li, Qiang; Zhang, Ying; Sheng, Mengyuan; Liu, Yong; Qin, Xiaosong
2017-04-01
A laboratory- and region-specific trimester-related reference interval for thyroid hormone assessment of pregnant women has been recommended. Whether division by trimester is suitable requires verification. Here, we tried to establish appropriate reference intervals of thyroid-related hormones and antibodies for normal pregnant women in Northeast China. A total of 947 pregnant women who underwent routine prenatal care were grouped via two methods. The first method entailed division by trimester: stages T1, T2, and T3. The second method entailed dividing the T1, T2, and T3 stages into two stages each: T1-1, T1-2, T2-1, T2-2, T3-1, and T3-2. Serum levels of TSH, FT3, FT4, Anti-TPO, and Anti-TG were measured by three detection systems. No significant differences were found in TSH values between the T1-1 group and the non-pregnant women group. However, the TSH value of the T1-1 group was significantly higher than that of the T1-2 group (P<0.05). The TSH values in stage T3-2 increased significantly compared to those in stage T3-1 as measured by three different assays (P<0.05). FT4 and FT3 values decreased significantly in the T2-1 and T2-2 stages compared to the previous stage (P<0.05). The serum levels of Anti-TPO and Anti-TG did not differ significantly among the six stages. The diagnosis and treatment of thyroid dysfunction during pregnancy should be based on pregnancy- and method-specific reference intervals. More detailed staging is required to assess the thyroid function of pregnant women before 20 gestational weeks. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Pre- and post-treatment techniques for spacecraft water recovery
NASA Technical Reports Server (NTRS)
Putnam, David F.; Colombo, Gerald V.; Chullen, Cinda
1986-01-01
Distillation-based waste water pretreatment and recovered water posttreatment methods are proposed for the NASA Space Station. Laboratory investigation results are reported for two nonoxidizing urine pretreatment formulas (hexadecyl trimethyl ammonium bromide and Cu/Cr) which minimize the generation of volatile organics, thereby significantly reducing posttreatment requirements. Three posttreatment methods (multifiltration, reverse osmosis, and UV-assisted ozone oxidation) have been identified which appear promising for the removal of organic contaminants from recovered water.
Kendall, Lynne; Parsons, Jonathan M; Sloper, Patricia; Lewin, Robert J P
2007-04-01
To assess a novel method for assessing risk and providing advice about activity to children and young people with congenital cardiac disease and their parents. Questionnaire survey in outpatient clinics at a tertiary centre dealing with congenital cardiac disease, and 6 peripheral clinics. Children or their parents completed a brief questionnaire. If this indicated a desire for help, or a serious mismatch between advised and real level of activity, they were telephoned by a physiotherapist. MAIN MEASURES OF OUTCOME: Knowledge about appropriate levels of activity, and identification of the number exercising at an unsafe level, the number seeking help, and the type of help required. 253/258 (98.0%) questionnaires were returned, with 119/253 (47.0%) showing incorrect responses in their belief about their advised level of exercise; 17/253 (6.7%) had potentially dangerous overestimation of exercise. Asked if they wanted advice 93/253 (36.8%) said "yes", 43/253 (17.0%) "maybe", and 117/253 (46.2%) "no". Of those contacted by phone to give advice, 72.7% (56/77) required a single contact and 14.3% (11/77) required an intervention that required more intensive contact lasting from 2 up to 12 weeks. Of the cohort, 3.9% (3/77) were taking part in activities that put them at significant risk. There is a significant lack of knowledge about appropriate levels of activity, and a desire for further advice, in children and young people with congenital cardiac disease. A few children may be at very significant risk. These needs can be identified, and clinical risk reduced, using a brief self-completed questionnaire combined with telephone follow-up from a suitably knowledgeable physiotherapist.
Soltis, Robert; Verlinden, Nathan; Kruger, Nicholas; Carroll, Ailey; Trumbo, Tiffany
2015-02-17
To determine if the process-oriented guided inquiry learning (POGIL) teaching strategy improves student performance and engages higher-level thinking skills of first-year pharmacy students in an Introduction to Pharmaceutical Sciences course. Overall examination scores and scores on questions categorized as requiring either higher-level or lower-level thinking skills were compared in the same course taught over 3 years using traditional lecture methods vs the POGIL strategy. Student perceptions of the latter teaching strategy were also evaluated. Overall mean examination scores increased significantly when POGIL was implemented. Performance on questions requiring higher-level thinking skills was significantly higher, whereas performance on questions requiring lower-level thinking skills was unchanged when the POGIL strategy was used. Student feedback on use of this teaching strategy was positive. The use of the POGIL strategy increased student overall performance on examinations, improved higher-level thinking skills, and provided an interactive class setting.
Buring, Shauna M.; Papas, Elizabeth
2013-01-01
Objective. To assess doctor of pharmacy (PharmD) students’ mathematics ability by content area before and after completing a required pharmaceutical calculations course and to analyze changes in scores. Methods. A mathematics skills assessment was administered to 2 cohorts of pharmacy students (class of 2013 and 2014) before and after completing a pharmaceutical calculations course. The posttest was administered to the second cohort 6 months after completing the course to assess knowledge retention. Results. Both cohorts performed significantly better on the posttest (cohort 1, 13% higher scores; cohort 2, 15.9% higher scores). Significant improvement on posttest scores was observed in 6 of the 10 content areas for cohorts 1 and 2. Both cohorts scored lower in percentage calculations on the posttest than on the pretest. Conclusions. A required, 1-credit-hour pharmaceutical calculations course improved PharmD students’ overall ability to perform fundamental and application-based calculations. PMID:23966727
The effect of ordinances requiring smoke-free restaurants and bars on revenues: a follow-up.
Glantz, S A; Smith, L R
1997-01-01
OBJECTIVES: The purpose of this study was to extend an earlier evaluation of the economic effects of ordinances requiring smoke-free restaurants and bars. METHODS: Sales tax data for 15 cities with smoke-free restaurant ordinances, 5 cities and 2 counties with smoke-free bar ordinances, and matched comparison locations were analyzed by multiple regression, including time and a dummy variable for the ordinance. RESULTS: Ordinances had no significant effect on the fraction of total retail sales that went to eating and drinking places or on the ratio between sales in communities with ordinances and sales in comparison communities. Ordinances requiring smoke-free bars had no significant effect on the fraction of revenues going to eating and drinking places that serve all types of liquor. CONCLUSIONS: Smoke-free ordinances do not adversely affect either restaurant or bar sales. PMID:9357356
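A minimal sketch of the regression design described above (sales share regressed on time plus an ordinance dummy) is shown below using statsmodels; the quarterly data are synthetic placeholders, not the sales tax figures analysed in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Placeholder quarterly data for one city: fraction of retail sales at
# eating and drinking places, before and after a smoke-free ordinance.
df = pd.DataFrame({
    "quarter": np.arange(24),
    "ordinance": [0] * 12 + [1] * 12,   # dummy: 1 once the ordinance is in force
    "restaurant_share": np.concatenate([rng.normal(0.12, 0.005, 12),
                                        rng.normal(0.12, 0.005, 12)]),
})

model = smf.ols("restaurant_share ~ quarter + ordinance", data=df).fit()
print(model.params["ordinance"], model.pvalues["ordinance"])
```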
Information in medical decision making: how consistent is our management?
Lorence, Daniel P; Spink, Amanda; Jameson, Robert
2002-01-01
The use of outcomes data in clinical environments requires a correspondingly greater variety of information used in decision making, the measurement of quality, and clinical performance. As information becomes integral in the decision-making process, trustworthy decision support data are required. Using data from a national census of certified health information managers, variation in automated data quality management practices was examined. Relatively low overall adoption of automated data management exists in health care organizations, with significant geographic and practice setting variation. Nonuniform regional adoption of computerized data management exists, despite national mandates that promote and in some cases require uniform adoption. Overall, a significant number of respondents (42.7%) indicated that they had not adopted policies and procedures to direct the timeliness of data capture, with 57.3% having adopted such practices. The inconsistency of patient data policy suggests that provider organizations do not use uniform information management methods, despite growing federal mandates to do so.
On Design Experiment Teaching in Engineering Quality Cultivation
ERIC Educational Resources Information Center
Chen, Xiao
2008-01-01
Design experiment refers to that designed and conducted by students independently and is surely an important method to cultivate students' comprehensive quality. According to the development and requirements of experimental teaching, this article carries out a study and analysis on the purpose, significance, denotation, connotation and…
DOT National Transportation Integrated Search
1968-05-01
Conditions arise during construction of bases with Portland cement stabilized soils which require close programming of work; therefore, time is of significant importance. That is the objective of this report: to evaluate a method by which considera...
A COMPARISON OF BULK SEDIMENT TOXICITY TESTING METHODS AND SEDIMENT ELUTRIATE TOXICITY
Bulk sediment toxicity tests are routinely used to assess the level and extent of contamination in natural sediments. While reliable, these tests can be resource intensive, requiring significant outlays of time and materials. The purpose of this study was to compare the results ...
A nearest-neighbour discretisation of the regularized stokeslet boundary integral equation
NASA Astrophysics Data System (ADS)
Smith, David J.
2018-04-01
The method of regularized stokeslets is extensively used in biological fluid dynamics due to its conceptual simplicity and meshlessness. This simplicity carries a degree of cost in computational expense and accuracy because the number of degrees of freedom used to discretise the unknown surface traction is generally significantly higher than that required by boundary element methods. We describe a meshless method based on nearest-neighbour interpolation that significantly reduces the number of degrees of freedom required to discretise the unknown traction, increasing the range of problems that can be practically solved, without excessively complicating the task of the modeller. The nearest-neighbour technique is tested against the classical problem of rigid body motion of a sphere immersed in very viscous fluid, then applied to the more complex biophysical problem of calculating the rotational diffusion timescales of a macromolecular structure modelled by three closely-spaced non-slender rods. A heuristic for finding the required density of force and quadrature points by numerical refinement is suggested. Matlab/GNU Octave code for the key steps of the algorithm is provided, which predominantly use basic linear algebra operations, with a full implementation being provided on github. Compared with the standard Nyström discretisation, more accurate and substantially more efficient results can be obtained by de-refining the force discretisation relative to the quadrature discretisation: a cost reduction of over 10 times with improved accuracy is observed. This improvement comes at minimal additional technical complexity. Future avenues to develop the algorithm are then discussed.
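For readers unfamiliar with the kernel involved, the sketch below evaluates the velocity induced by regularized point forces using the standard Cortez blob formula; it shows only the kernel evaluation for a given set of forces, not the nearest-neighbour de-refinement or the solution of the boundary integral equation, and the points, forces, and regularization parameter are illustrative.

```python
import numpy as np

def regularized_stokeslet_velocity(x_eval, y_src, f_src, eps, mu=1.0):
    """Velocity at points x_eval (N x 3) due to regularized point forces
    f_src (M x 3) located at y_src (M x 3), using the standard Cortez blob
    kernel with regularization parameter eps."""
    u = np.zeros_like(x_eval)
    for y, f in zip(y_src, f_src):
        dx = x_eval - y                              # displacements, shape (N, 3)
        r2 = np.sum(dx**2, axis=1)
        denom = (r2 + eps**2) ** 1.5
        fdotx = dx @ f
        u += ((r2 + 2 * eps**2)[:, None] * f + fdotx[:, None] * dx) / denom[:, None]
    return u / (8 * np.pi * mu)

# Single regularized stokeslet pointing in z, evaluated at two field points
pts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
forces = np.array([[0.0, 0.0, 1.0]])
print(regularized_stokeslet_velocity(pts, np.zeros((1, 3)), forces, eps=0.1))
```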
Non-Earth-centric life detection
NASA Technical Reports Server (NTRS)
Conrad, P. G.; Nealson, K. H.
2000-01-01
Our hope is that life will, bit by bit, reveal the clues that will allow us to piece together enough evidence to recognize it whenever and however it presents itself. Indisputable evidence is measurable, statistically meaningful and independent of the nature of the life it defines. That the evidence for life be measurable is a fundamental requirement of the scientific method, as is the requirement for statistical significance, and this quantitation is what enables us to differentiate the measurable criteria of candidate biosignatures from a background (host environment).
Gaudelli, Cinzia; Ménard, Jérémie; Mutch, Jennifer; Laflamme, G-Yves; Petit, Yvan; Rouleau, Dominique M
2014-11-01
This paper aims to determine the strongest fixation method for split type greater tuberosity fractures of the proximal humerus by testing and comparing three fixation methods: a tension band with No. 2 wire suture, a double-row suture bridge with suture anchors, and a manually contoured calcaneal locking plate. Each method was tested on eight porcine humeri. An osteotomy of the greater tuberosity was performed at 50° to the humeral shaft and then fixed according to one of three methods. The humeri were then placed in a testing apparatus and tension was applied along the supraspinatus tendon using a thermoelectric cooling clamp. The load required to produce 3mm and 5mm of displacement, as well as complete failure, was recorded using an axial load cell. The average load required to produce 3mm and 5mm of displacement was 658N and 1112N for the locking plate, 199N and 247N for the double row, and 75N and 105N for the tension band. The difference between the three groups was significant (P<0.01). The average load to failure of the locking plate (810N) was significantly stronger than double row (456N) and tension band (279N) (P<0.05). The stiffness of the locking plate (404N/mm) was significantly greater than double row (71N/mm) and tension band (33N/mm) (P<0.01). Locking plate fixation provides the strongest and stiffest biomechanical fixation for split type greater tuberosity fractures. Copyright © 2014 Elsevier Ltd. All rights reserved.
Roles and methods of performance evaluation of hospital academic leadership.
Zhou, Ying; Yuan, Huikang; Li, Yang; Zhao, Xia; Yi, Lihua
2016-01-01
The rapidly advancing implementation of public hospital reform urgently requires the identification and classification of a pool of exceptional medical specialists, corresponding with incentives to attract and retain them, providing a nucleus of distinguished expertise to ensure public hospital preeminence. This paper examines the significance of academic leadership, from a strategic management perspective, including various tools, methods and mechanisms used in the theory and practice of performance evaluation, and employed in the selection, training and appointment of academic leaders. Objective methods of assessing leadership performance are also provided for reference.
A Method for Monitoring Organic Chlorides, Hydrochloric Acid and Chlorine in Air
NASA Technical Reports Server (NTRS)
Dennison, J. E.; Menichelli, R. P.
1971-01-01
While not commonly present in nonurban atmospheres, organic chlorides, hydrochloric acid and chlorine are significant in industrial air pollution and industrial hygiene. Based on a microcoulometer, a much more sensitive method than has heretofore been available has been developed for monitoring these air impurities. The method has a response time (90%) of about twenty seconds, requires no calibration, is accurate to +/- 2.5%, and is specific except for bromide and iodide interferences. The instrument is portable and has been operated unattended for 18 hours without difficulty.
NASA Technical Reports Server (NTRS)
Stehle, Roy H.; Ogier, Richard G.
1993-01-01
Alternatives for realizing a packet-based network switch for use on a frequency division multiple access/time division multiplexed (FDMA/TDM) geostationary communication satellite were investigated. Each of the eight downlink beams supports eight directed dwells. The design needed to accommodate multicast packets with very low probability of loss due to contention. Three switch architectures were designed and analyzed. An output-queued, shared bus system yielded a functionally simple system, utilizing a first-in, first-out (FIFO) memory per downlink dwell, but at the expense of a large total memory requirement. A shared memory architecture offered the most efficiency in memory requirements, requiring about half the memory of the shared bus design. The processing requirement for the shared-memory system adds system complexity that may offset the benefits of the smaller memory. An alternative design using a shared memory buffer per downlink beam decreases circuit complexity through a distributed design, and requires at most 1000 packets of memory more than the completely shared memory design. Modifications to the basic packet switch designs were proposed to accommodate circuit-switched traffic, which must be served on a periodic basis with minimal delay. Methods for dynamically controlling the downlink dwell lengths were developed and analyzed. These methods adapt quickly to changing traffic demands, and do not add significant complexity or cost to the satellite and ground station designs. Methods for reducing the memory requirement by not requiring the satellite to store full packets were also proposed and analyzed. In addition, optimal packet and dwell lengths were computed as functions of memory size for the three switch architectures.
The effect of filtering on the determination of lunar tides
NASA Astrophysics Data System (ADS)
Palumbo, A.; Mazzarella, A.
1980-01-01
The determination of lunar tides obtained by combination of a filtering process and the fixed lunar age technique is proposed. It is shown that such a method allows a reduction of the signal-to-noise ratio without altering the amplitude and the phase angle of the signal. It consequently allows the significant determination of the lunar semidiurnal component M2 from the series of data shorter than those required by other methods and the deduction of other interesting lunisolar components which have not previously been significantly determined in surface pressure and temperature data. The analysis of the data for Gan, Vesuvian Observatory and the Eiffel Tower have provided new determinations of L2(p) and have allowed comparison between the results obtained by the present and other methods.
Settlement of reactive power compensation in the light of white certificates
NASA Astrophysics Data System (ADS)
Zajkowski, Konrad
2017-10-01
The article discusses the problem of determining the savings in active energy that result from reactive power compensation. Statutory guidance on the energy audit required to obtain white certificates in the European Union was followed, and the analysis was made on the basis of Polish law. The paper presents a detailed analytical method and an estimation method that take into account the impact on the line, the transformer and the generator. According to the relevant European Union guidelines, the reduction of CO2 emissions should be determined by calculating the savings in active power. Both the detailed method and the estimation method proposed for determining the savings in active energy resulting from reactive power compensation carry some errors and inconveniences. The detailed method requires knowledge of the network topology and a determination of the reactive power Q at each point of the network. The estimation method is easy to carry out, especially if the consumer of energy is the main or the most significant purchaser of electricity in the network. Unfortunately, this latter method can be used only for activities that do not require high computational accuracy; the results it yields are approximate values that can be used for the calculation of economic indicators. The estimation method is suitable for determining the number of white certificates when a power audit concerns a recipient of electricity whose structure comprises a large number of divisions scattered across many different locations in the power system.
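As context for the kind of calculation such an audit involves, the sketch below evaluates the standard approximation for active-power line-loss reduction, ΔP = R(Q1² − Q2²)/U², when compensation lowers the reactive power flow on a feeder; the formula is textbook material, not necessarily the paper's exact procedure, and all parameter values are hypothetical.

```python
# Minimal sketch (not the paper's exact procedure): estimated reduction in active-power
# line losses when compensation lowers reactive power flow from Q1 to Q2 on a feeder of
# resistance R, using the standard approximation dP = R * (Q1^2 - Q2^2) / U^2.

def active_power_savings_kw(q1_kvar, q2_kvar, r_ohm, u_kv):
    """Approximate active-power loss reduction (kW) on a line of resistance r_ohm
    at voltage u_kv when reactive power drops from q1_kvar to q2_kvar."""
    q1, q2, u = q1_kvar * 1e3, q2_kvar * 1e3, u_kv * 1e3   # convert to var and V
    return r_ohm * (q1**2 - q2**2) / u**2 / 1e3            # back to kW

def annual_energy_savings_mwh(delta_p_kw, hours_per_year=8760, load_factor=0.6):
    """Rough annual energy savings; load_factor is a hypothetical utilization factor."""
    return delta_p_kw * hours_per_year * load_factor / 1e3

if __name__ == "__main__":
    dp = active_power_savings_kw(q1_kvar=400.0, q2_kvar=80.0, r_ohm=0.15, u_kv=15.0)
    print(f"loss reduction: {dp:.3f} kW, ~{annual_energy_savings_mwh(dp):.2f} MWh/year")
```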
Cha, Dong Ik; Lee, Min Woo; Song, Kyoung Doo; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-06-01
To compare the accuracy and required time for image fusion of real-time ultrasound (US) with pre-procedural magnetic resonance (MR) images between positioning auto-registration and manual registration for percutaneous radiofrequency ablation or biopsy of hepatic lesions. This prospective study was approved by the institutional review board, and all patients gave written informed consent. Twenty-two patients (male/female, n = 18/n = 4; age, 61.0 ± 7.7 years) who were referred for planning US to assess the feasibility of radiofrequency ablation (n = 21) or biopsy (n = 1) for focal hepatic lesions were included. One experienced radiologist performed the two types of image fusion methods in each patient. The performance of auto-registration and manual registration was evaluated. The accuracy of the two methods, based on measuring registration error, and the time required for image fusion for both methods were recorded using in-house software and respectively compared using the Wilcoxon signed rank test. Image fusion was successful in all patients. The registration error was not significantly different between the two methods (auto-registration: median, 3.75 mm; range, 1.0-15.8 mm vs. manual registration: median, 2.95 mm; range, 1.2-12.5 mm, p = 0.242). The time required for image fusion was significantly shorter with auto-registration than with manual registration (median, 28.5 s; range, 18-47 s, vs. median, 36.5 s; range, 14-105 s, p = 0.026). Positioning auto-registration showed promising results compared with manual registration, with similar accuracy and even shorter registration time.
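The paired comparison reported above (registration error and fusion time, auto vs. manual) uses the Wilcoxon signed rank test; the sketch below illustrates that analysis with made-up placeholder values, not the study's data.

```python
# Illustrative sketch of the paired comparison in the study: registration error and
# fusion time for auto- vs. manual registration compared with the Wilcoxon signed rank
# test. The arrays below are made-up placeholder values, not the study's data.
import numpy as np
from scipy.stats import wilcoxon

auto_error_mm   = np.array([3.7, 2.1, 5.0, 1.0, 4.2, 3.3, 6.8, 2.9])
manual_error_mm = np.array([2.9, 2.5, 4.1, 1.2, 3.8, 3.0, 7.2, 2.6])

auto_time_s     = np.array([28, 31, 22, 18, 35, 29, 41, 27])
manual_time_s   = np.array([36, 40, 25, 30, 55, 33, 70, 38])

stat_err, p_err = wilcoxon(auto_error_mm, manual_error_mm)
stat_t,  p_t    = wilcoxon(auto_time_s, manual_time_s)
print(f"registration error: p = {p_err:.3f}")   # expect no significant difference
print(f"fusion time:        p = {p_t:.3f}")     # expect shorter with auto-registration
```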
Rapid Analysis of Copper Ore in Pre-Smelter Head Flow Slurry by Portable X-ray Fluorescence.
Burnett, Brandon J; Lawrence, Neil J; Abourahma, Jehad N; Walker, Edward B
2016-05-01
Copper laden ore is often concentrated using flotation. Before the head flow slurry can be smelted, it is important to know the concentration of copper and contaminants. The concentration of copper and other elements fluctuate significantly in the head flow, often requiring modification of the concentrations in the slurry prior to smelting. A rapid, real-time analytical method is needed to support on-site optimization of the smelter feedstock. A portable, handheld X-ray fluorescence spectrometer was utilized to determine the copper concentration in a head flow suspension at the slurry origin. The method requires only seconds and is reliable for copper concentrations of 2.0-25%, typically encountered in such slurries. © The Author(s) 2016.
Threading on ADI Cast Iron, Developing Tools and Conditions
NASA Astrophysics Data System (ADS)
Elósegui, I.; de Lacalle, L. N. López
2011-01-01
The present work is focussed on improving the design and performance of the taps used for making threaded holes in ADI (Austempered Ductile Iron). It is divided into two steps: a) the development of a method for comparing tap wear without reaching the end of tap life, by measuring the torque required to make one threaded hole after a significant number of threaded holes have previously been made. Tap wear causes geometrical changes in the teeth, which leads to an increase in the required torque and axial force. b) The tap wear comparison method can be applied to different PVD-coated taps (AlTiN, AlCrSiN, AlTiSiN) and to different geometries.
Reducing the cost of using collocation to compute vibrational energy levels: Results for CH2NH.
Avila, Gustavo; Carrington, Tucker
2017-08-14
In this paper, we improve the collocation method for computing vibrational spectra that was presented in the work of Avila and Carrington, Jr. [J. Chem. Phys. 143, 214108 (2015)]. Known quadrature and collocation methods using a Smolyak grid require storing intermediate vectors with more elements than points on the Smolyak grid. This is due to the fact that grid labels are constrained among themselves and basis labels are constrained among themselves. We show that by using the so-called hierarchical basis functions, one can significantly reduce the memory required. In this paper, the intermediate vectors have only as many elements as there are points on the Smolyak grid. The ideas are tested by computing energy levels of CH2NH.
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
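To make the ML idea concrete, the toy sketch below maximizes a Poisson log-likelihood for the shift of a single one-dimensional Shack-Hartmann spot with a uniform background, and compares the result with a simple centroid estimate; the model and all parameters are made up for illustration and do not reproduce the paper's full treatment of nuisance parameters.

```python
# Toy illustration (not the paper's full model): maximum-likelihood estimation of a
# Shack-Hartmann spot shift on a small pixel array with Poisson noise and a uniform
# background, compared with a simple centroid estimate. All parameters are made up.
import numpy as np
from scipy.optimize import minimize_scalar

pixels = np.arange(-3.5, 4.0)                      # 8 one-dimensional pixel centers
sigma, gain, background = 1.0, 200.0, 2.0          # spot width, photons, bkg per pixel

def mean_counts(shift):
    spot = np.exp(-0.5 * ((pixels - shift) / sigma) ** 2)
    return gain * spot / spot.sum() + background

rng = np.random.default_rng(0)
true_shift = 0.7
data = rng.poisson(mean_counts(true_shift))

def neg_log_likelihood(shift):
    lam = mean_counts(shift)
    return np.sum(lam - data * np.log(lam))        # Poisson NLL up to a constant

ml_shift = minimize_scalar(neg_log_likelihood, bounds=(-3, 3), method="bounded").x
centroid = np.sum(pixels * (data - background)) / np.sum(data - background)
print(f"true {true_shift:.2f}  ML {ml_shift:.2f}  centroid {centroid:.2f}")
```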
Applying formal methods and object-oriented analysis to existing flight software
NASA Technical Reports Server (NTRS)
Cheng, Betty H. C.; Auernheimer, Brent
1993-01-01
Correctness is paramount for safety-critical software control systems. Critical software failures in medical radiation treatment, communications, and defense are familiar to the public. The significant quantity of software malfunctions regularly reported to the software engineering community, the laws concerning liability, and a recent NRC Aeronautics and Space Engineering Board report additionally motivate the use of error-reducing and defect-detection software development techniques. The benefits of formal methods in requirements-driven software development ('forward engineering') are well documented. One advantage of rigorously engineering software is that formal notations are precise, verifiable, and facilitate automated processing. This paper describes the application of formal methods to reverse engineering, where formal specifications are developed for a portion of the shuttle on-orbit digital autopilot (DAP). Three objectives of the project were to: demonstrate the use of formal methods on a shuttle application, facilitate the incorporation and validation of new requirements for the system, and verify the safety-critical properties to be exhibited by the software.
The ReaxFF reactive force-field: Development, applications, and future directions
Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...
2016-03-04
The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.
Kalivas, John H; Georgiou, Constantinos A; Moira, Marianna; Tsafaras, Ilias; Petrakis, Eleftherios A; Mousdis, George A
2014-04-01
Quantitative analysis of food adulterants is an important health and economic issue, and the analysis needs to be fast and simple. Spectroscopy has significantly reduced analysis time. However, analyte calibration samples matrix-matched to the prediction samples are still needed, and their preparation can be laborious and costly. Reported in this paper is the application of a newly developed pure component Tikhonov regularization (PCTR) process that does not require laboratory-prepared calibration samples or reference analysis methods and is, hence, a greener calibration method. The PCTR method requires an analyte pure-component spectrum and non-analyte spectra. As a food analysis example, synchronous fluorescence spectra of extra virgin olive oil samples adulterated with sunflower oil are used. Results are shown to be better than those obtained using ridge regression with reference calibration samples. The flexibility of PCTR allows reference samples to be included, and it is generic for use with other instrumental methods and food products. Copyright © 2013 Elsevier Ltd. All rights reserved.
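One plausible way to set up such a calibration is sketched below: the regression vector is asked to respond with 1 to the analyte pure-component spectrum while suppressing the non-analyte spectra, with a ridge penalty for stability. This is a minimal sketch assuming that formulation; the weights tau and lam are hypothetical tuning parameters and the spectra are synthetic, so it is not claimed to be the authors' exact algorithm.

```python
# Minimal sketch, assuming one plausible formulation of a pure-component Tikhonov
# regularization (PCTR) calibration: find a regression vector b with k . b = 1 for the
# analyte pure spectrum k, N b ~ 0 for non-analyte spectra N, plus a ridge penalty.
import numpy as np

def pctr_calibrate(k, N, tau=1.0, lam=0.1):
    """k: (p,) analyte pure spectrum; N: (m, p) non-analyte spectra."""
    p = k.size
    A = np.vstack([k[None, :],            # k . b should equal 1
                   tau * N,               # rows of N . b should be ~0
                   lam * np.eye(p)])      # ridge penalty on b
    y = np.concatenate([[1.0], np.zeros(N.shape[0]), np.zeros(p)])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    p = 50
    k = np.exp(-0.5 * ((np.arange(p) - 20) / 4.0) ** 2)             # synthetic analyte band
    N = rng.normal(scale=0.3, size=(10, p)) + np.linspace(0, 1, p)  # synthetic interferents
    b = pctr_calibrate(k, N)
    sample = 0.4 * k + N[0]                      # analyte at 0.4 "concentration units"
    print(f"predicted analyte signal: {sample @ b:.2f}")
```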
Test of a mosquito eggshell isolation method and subsampling procedure.
Turner, P A; Streever, W J
1997-03-01
Production of Aedes vigilax, the common salt-marsh mosquito, can be assessed by determining eggshell densities found in soil. In this study, 14 field-collected eggshell samples were used to test a subsampling technique and compare eggshell counts obtained with a flotation method to those obtained by direct examination of sediment (DES). Relative precision of the subsampling technique was assessed by determining the minimum number of subsamples required to estimate the true mean and confidence interval of a sample at a predetermined confidence level. A regression line was fitted to cube-root transformed eggshell counts obtained from flotation and DES and found to be significant (P < 0.001, r^2 = 0.97). The flotation method allowed processing of samples in about one-third of the time required by DES, but recovered an average of 44% of the eggshells present. Eggshells obtained with the flotation method can be used to predict those from DES using the following equation: DES count = [1.386 × (flotation count)^0.33 − 0.01]^3.
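The calibration equation above can be applied directly; the short sketch below simply evaluates it for a few hypothetical flotation counts.

```python
# Direct transcription of the reported calibration: predict a direct-examination (DES)
# eggshell count from a flotation count via DES = [1.386 * (flotation)^0.33 - 0.01]^3.
def des_count_from_flotation(flotation_count):
    return (1.386 * flotation_count ** 0.33 - 0.01) ** 3

for flotation in (10, 50, 100):                       # hypothetical flotation counts
    print(flotation, round(des_count_from_flotation(flotation), 1))
```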
Welding methods for joining thermoplastic polymers for the hermetic enclosure of medical devices.
Amanat, Negin; James, Natalie L; McKenzie, David R
2010-09-01
New high performance polymers have been developed that challenge traditional encapsulation materials for permanent active medical implants. The gold standard for hermetic encapsulation for implants is a titanium enclosure which is sealed using laser welding. Polymers may be an alternative encapsulation material. Although many polymers are biocompatible, and the permeability of polymers may be reduced to acceptable levels, the ability to create a hermetic join with an extended life remains the barrier to widespread acceptance of polymers for this application. This article provides an overview of the current techniques used for direct bonding of polymers, with a focus on thermoplastics. Thermal bonding methods are feasible, but some take too long and/or require two-stage processing. Some methods are not suitable because of excessive heat load which may be delivered to sensitive components within the capsule. Laser welding is presented as the method of choice; however, the establishment of suitable laser process parameters will require significant research. Copyright © 2010. Published by Elsevier Ltd.
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-01-17
A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCp for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. The new method allows for precise predictions of retention time, with an average error of only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for the first-dimension retention time (¹tr) and 2.1% for the second-dimension retention time (²tr). Copyright © 2013 Elsevier B.V. All rights reserved.
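The general idea can be sketched as a least-squares fit of the three thermodynamic parameters to temperature-programmed retention times using the Nelder-Mead simplex. The retention model below (a thermodynamic retention factor k(T) and stepwise integration of solute migration, with a constant hold-up time and phase ratio) is a generic textbook form with made-up column parameters, not the authors' exact implementation.

```python
# Hedged sketch: estimate dH(T0), dS(T0), dCp by minimizing the squared error between
# measured and predicted retention times from temperature-programmed runs (Nelder-Mead).
# Retention model and column parameters below are generic placeholders.
import numpy as np
from scipy.optimize import minimize

R, T0 = 8.314, 363.15          # gas constant (J/mol/K), reference temperature (K)
beta = 250.0                   # hypothetical column phase ratio
t_hold = 1.0                   # hypothetical hold-up time (s), assumed constant

def k_factor(T, dH, dS, dCp):
    dH_T = dH + dCp * (T - T0)
    dS_T = dS + dCp * np.log(T / T0)
    return np.exp(-dH_T / (R * T) + dS_T / R) / beta

def predicted_retention(params, ramp):        # ramp: (time_s, temp_K) program samples
    dH, dS, dCp = params
    t, T = ramp
    dt = np.diff(t, prepend=0.0)
    frac = np.cumsum(dt / (t_hold * (1.0 + k_factor(T, dH, dS, dCp))))
    return np.interp(1.0, frac, t)            # time at which the solute has eluted

def objective(params, ramps, t_measured):
    pred = np.array([predicted_retention(params, r) for r in ramps])
    return np.sum((pred - t_measured) ** 2)

# `ramps` and `t_measured` would come from several temperature programs for one analyte:
# result = minimize(objective, x0=[-6e4, -1.2e2, -50.0], args=(ramps, t_measured),
#                   method="Nelder-Mead")
```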
Statewide Implementation of Evidence-Based Programs
ERIC Educational Resources Information Center
Fixsen, Dean; Blase, Karen; Metz, Allison; van Dyke, Melissa
2013-01-01
Evidence-based programs will be useful to the extent they produce benefits to individuals on a socially significant scale. It appears the combination of effective programs and effective implementation methods is required to assure consistent uses of programs and reliable benefits to children and families. To date, focus has been placed primarily…
ERIC Educational Resources Information Center
Gervais, Matthew M.
2017-01-01
Experimental economic games reveal significant population variation in human social behavior. However, most protocols involve anonymous recipients, limiting their validity to fleeting interactions. Understanding human relationship dynamics will require methods with the virtues of economic games that also tap recipient identity-conditioned…
A net fishing enrichment strategy for colorimetric detection of E. coli O157:H7
USDA-ARS?s Scientific Manuscript database
The strict regulatory requirements for pathogen monitoring in food systems to ensure safety demands that the detection method can recognize small numbers of pathogens. Although significant efforts on the development of biosensors have been reported with marked improvement in sensitivity, appropriate...
Future float zone development in industry
NASA Technical Reports Server (NTRS)
Sandfort, R. M.
1980-01-01
The present industrial requirements for float zone silicon are summarized. Developments desired by the industry in the future are reported. The five most significant problems faced today by the float zone crystal growth method in industry are discussed. They are economic, large diameter, resistivity uniformity, control of carbon, and swirl defects.
Leadership Succession: Future-Proofing Pipelines
ERIC Educational Resources Information Center
Taylor, Saul; Youngs, Howard
2018-01-01
The challenges in deaf education illustrate the requirement and importance of leadership in this specialized field. The significant and impending talent depletion unfolding as baby-boomers retire, positions leadership succession planning as a strategic issue. This mixed methods study is the first of its kind in New Zealand. The aim is to…
USDA-ARS?s Scientific Manuscript database
A growing biofuels industry requires the development of effective methods to educate farmers, government, and agribusiness about biofuel feedstock production if the market is going to significantly expand beyond first generation biofuels. Extension and outreach education provides a conduit for impor...
USDA-ARS?s Scientific Manuscript database
Market demands for cotton varieties with improved fiber properties also call for the development of fast, reliable analytical methods for monitoring fiber development and measuring their properties. Currently, cotton breeders rely on instrumentation that can require significant amounts of sample, w...
2014-09-18
…methods of flight plan optimization, yielding such techniques as parallel A* (Gudaitis, 1994) and Multi-Objective Traveling Salesman algorithms. Currently their utilization comes with a price. Problem Statement: "Today's unmanned systems require significant human interaction to operate. …"
Current Treatment of Lower Gastrointestinal Hemorrhage
Raphaeli, Tal; Menon, Raman
2012-01-01
Massive lower gastrointestinal bleeding is a significant and expensive problem that requires methodical evaluation, management, and treatment. After initial resuscitation, care should be taken to localize the site of bleeding. Once localized, lesions can then be treated with endoscopic or angiographic interventions, reserving surgery for ongoing or recurrent bleeding. PMID:24294124
Risk of Performance Decrement and Crew Illness Due to an Inadequate Food System
NASA Technical Reports Server (NTRS)
Douglas, Grace L.; Cooper, Maya; Bermudez-Aguirre, Daniela; Sirmons, Takiyah
2016-01-01
NASA is preparing for long duration manned missions beyond low-Earth orbit that will be challenged in several ways, including long-term exposure to the space environment, impacts to crew physiological and psychological health, limited resources, and no resupply. The food system is one of the most significant daily factors that can be altered to improve human health and performance during space exploration. Therefore, the paramount importance of determining the methods, technologies, and requirements to provide a safe, nutritious, and acceptable food system that promotes crew health and performance cannot be overstated. The processed and prepackaged food system is the main source of nutrition for the crew; therefore, significant losses in nutrition, either through degradation of nutrients during processing and storage or inadequate food intake due to low acceptability, variety, or usability, may significantly compromise the crew's health and performance. Shelf life studies indicate that key nutrients and quality factors in many space foods degrade to concerning levels within three years, suggesting that the food system will not meet the nutrition and acceptability requirements of a long duration mission beyond low-Earth orbit. Likewise, mass and volume evaluations indicate that the current food system is a significant resource burden. Alternative provisioning strategies, such as inclusion of bioregenerative foods, are challenged with resource requirements, and food safety and scarcity concerns. Ensuring provisioning of an adequate food system relies not only upon determining technologies and requirements for nutrition, quality, and safety, but upon establishing a food system that will support nutritional adequacy, even with individual crew preference and self-selection. In short, the space food system is challenged to maintain safety, nutrition, and acceptability for all phases of an exploration mission within resource constraints. This document presents the evidence for the Risk of Performance Decrement and Crew Illness Due to an Inadequate Food System and the gaps in relation to exploration, as identified by the NASA Human Research Program (HRP). The research reviewed here indicates strategies to establish methods, technologies, and requirements that increase food stability, support adequate nutrition, quality, and variety, enable supplementation with grow-pick-and-eat salad crops, ensure safety, and reduce resource use. Obtaining the evidence to establish an adequate food system is essential, as the resources allocated to the food system may be defined based on the data relating nutritional stability and food quality requirements to crew performance and health.
SNAP 19 Pioneer F and G. Final Report
DOE R&D Accomplishments Database
1973-06-01
The generator developed for the Pioneer mission evolved from the SNAP 19 RTGs launched aboard the NIMBUS III spacecraft. In order to satisfy the power requirements and environment of earth escape trajectory, significant modifications were made to the thermoelectric converter, heat source, and structural configuration. Specifically, a TAGS 2N thermoelectric couple was designed to provide higher efficiency and improved long term power performance, and the electrical circuitry was modified to yield very low magnetic field from current flow in the RTG. A new heat source was employed to satisfy operational requirements and its integration with the generator required alteration to the method of providing support to the fuel capsule.
NASA Astrophysics Data System (ADS)
McBeth, Rafe A.
Space radiation exposure to astronauts will need to be carefully monitored on future missions beyond low earth orbit. NASA has proposed an updated radiation risk framework that takes into account a significant amount of radiobiological and heavy ion track structure information. These models require active radiation detection systems to measure the energy and ion charge Z. However, current radiation detection systems cannot meet these demands. The aim of this study was to investigate several topics that will help next generation detection systems meet the NASA objectives. Specifically, this work investigates the required spatial resolution to avoid coincident events in a detector, the effects of energy straggling and conversion of dose from silicon to water, and methods for ion identification (Z) using machine learning. The main results of this dissertation are as follows: 1. Spatial resolution on the order of 0.1 cm is required for active space radiation detectors to have high confidence in identifying individual particles, i.e., to eliminate coincident events. 2. Energy resolution of a detector system will be limited by energy straggling effects and the conversion of dose in silicon to dose in biological tissue (water). 3. Machine learning methods show strong promise for identification of ion charge (Z) with simple detector designs.
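As a toy illustration of the third result (machine learning for ion charge identification), the sketch below trains a classifier on two synthetic features loosely mimicking a dE-E detector telescope, where energy loss in a thin stage scales roughly as Z²; the feature model, ion list, and data are invented for illustration and are not the dissertation's detector model.

```python
# Toy sketch (synthetic data, not the dissertation's detector model): identify ion
# charge Z from two features loosely mimicking a dE-E telescope.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
Z_values = np.array([1, 2, 6, 8, 14, 26])                 # H, He, C, O, Si, Fe
Z = rng.choice(Z_values, size=6000)
energy = rng.uniform(100.0, 1000.0, size=Z.size)          # MeV/n, made-up spread
dE = (Z**2 / energy) * rng.lognormal(0.0, 0.15, Z.size)   # thin-stage signal + straggling
E_total = energy * rng.lognormal(0.0, 0.05, Z.size)       # thick-stage signal

X = np.column_stack([dE, E_total])
X_train, X_test, y_train, y_test = train_test_split(X, Z, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy for Z identification: {clf.score(X_test, y_test):.2f}")
```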
Achieving femoral artery hemostasis after cardiac catheterization: a comparison of methods.
Schickel, S I; Adkisson, P; Miracle, V; Cronin, S N
1999-11-01
Cardiac catheterization is a common procedure that involves the introduction of a small sheath (5F-8F) into the femoral artery for insertion of other diagnostic catheters. After cardiac catheterization, local compression of the femoral artery is required to prevent bleeding and to achieve hemostasis. Traditional methods of achieving hemostasis require significant time and close supervision by medical personnel and can contribute to patients' discomfort. VasoSeal is a recently developed device that delivers absorbable collagen into the supra-arterial space to promote hemostasis. To compare outcomes between patients receiving a collagen plug and patients in whom a traditional method of achieving hemostasis was used after diagnostic cardiac catheterization. An outcomes tracking tool was used to analyze the medical records of 95 patients in whom a traditional method was used (traditional group) and 81 patients in whom VasoSeal was used (device group) to achieve hemostasis. Complications at the femoral access site, patients' satisfaction, and times to hemostasis, ambulation, and discharge were compared. Hematomas of 6-cm diameter occurred in 5.3% of the traditional group; no complications occurred in the device group. The device group also achieved hemostasis faster and had earlier ambulation (P < .001). Patients in the device group were discharged a mean of 5 hours sooner than patients in the traditional group (P < .05). No significant differences were found in patients' satisfaction. VasoSeal is a safe and effective method of achieving hemostasis after cardiac catheterization that can hasten time to hemostasis, ambulation, and discharge.
Sample size calculation for a proof of concept study.
Yin, Yin
2002-05-01
Sample size calculation is vital for a confirmatory clinical trial since the regulatory agencies require the probability of making a Type I error to be significantly small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC and the process of sample size calculation. The results will be presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for PoC, and the sample size used for PoC.
Evaluation of saliva collection devices for the analysis of proteins.
Topkas, Eleni; Keith, Patricia; Dimeski, Goce; Cooper-White, Justin; Punyadeera, Chamindie
2012-07-11
Human saliva mirrors the body's health and can be collected non-invasively; its collection does not require specialized skills and is suitable for large population-based screening programs. The aims were twofold: to evaluate the suitability of commercially available saliva collection devices for quantifying proteins present in saliva and to provide levels for C-reactive protein (CRP), myoglobin, and immunoglobulin E (IgE) in saliva of healthy individuals as a baseline for future studies. Saliva was collected from healthy volunteers (n=17, ages 18-33 years). The following collection methods were evaluated: drool; Salimetrics® Oral Swab (SOS); Salivette® Cotton and Synthetic (Sarstedt) and Greiner Bio-One Saliva Collection System (GBO SCS®). We used AlphaLISA® assays to measure CRP, IgE and myoglobin levels in human saliva. Significant (p<0.05) differences in the salivary flow rates were observed based on the method of collection, i.e. salivary flow rates were significantly lower (p<0.05) in unstimulated saliva (i.e. drool and SOS) when compared with the mechanically stimulated methods (p<0.05) (Salivette® Cotton and Synthetic) and the acid stimulated method (p<0.05) (SCS®). Saliva collected using SOS yielded significantly (p<0.05) lower concentrations of myoglobin and CRP, whilst saliva collected using the Salivette® Cotton and Synthetic swabs yielded significantly (p<0.05) lower myoglobin and IgE concentrations, respectively. The results demonstrated significantly relevant differences in analyte levels based on the collection method. Significant differences in the salivary flow rates were also observed depending on the saliva collection method. The data provide preliminary baseline values for salivary CRP, myoglobin, and IgE levels in healthy participants and based on the collection method. Copyright © 2012 Elsevier B.V. All rights reserved.
Chen, Zhongchuan Will; Kohan, Jessica; Perkins, Sherrie L.; Hussong, Jerry W.; Salama, Mohamed E.
2014-01-01
Background: Whole slide imaging (WSI) is widely used for education and research, but is increasingly being used to streamline clinical workflow. We present our experience with regard to satisfaction and time utilization using oil immersion WSI for presentation of blood/marrow aspirate smears, core biopsies, and tissue sections in hematology/oncology tumor board/treatment planning conferences (TPC). Methods: Lymph nodes and bone marrow core biopsies were scanned at ×20 magnification and blood/marrow smears at 83X under oil immersion and uploaded to an online library with areas of interest to be displayed annotated digitally via web browser. Pathologist time required to prepare slides for scanning was compared to that required to prepare for microscope projection (MP). Time required to present cases during TPC was also compared. A 10-point evaluation survey was used to assess clinician satisfaction with each presentation method. Results: There was no significant difference in hematopathologist preparation time between WSI and MP. However, presentation time was significantly less for WSI compared to MP as selection and annotation of slides was done prior to TPC with WSI, enabling more efficient use of TPC presentation time. Survey results showed a significant increase in satisfaction by clinical attendees with regard to image quality, efficiency of presentation of pertinent findings, aid in clinical decision-making, and overall satisfaction regarding pathology presentation. A majority of respondents also noted decreased motion sickness with WSI. Conclusions: Whole slide imaging, particularly with the ability to use oil scanning, provides higher quality images compared to MP and significantly increases clinician satisfaction. WSI streamlines preparation for TPC by permitting prior slide selection, resulting in greater efficiency during TPC presentation. PMID:25379347
Or-Tzadikario, Shira; Sopher, Ran; Gefen, Amit
2010-10-01
Adipose tissue engineering is investigated for native fat substitutes and wound healing model systems. Research and clinical applications of bioartificial fat require a quantitative and objective method to continuously measure adipogenesis in living cultures as opposed to currently used culture-destructive techniques that stain lipid droplet (LD) accumulation. To allow standardization, automatic quantification of LD size is further needed, but currently LD size is measured mostly manually. We developed an image processing-based method that does not require staining to monitor adipose cell maturation in vitro nondestructively using optical micrographs taken consecutively during culturing. We employed our method to monitor LD accumulation in 3T3-L1 and mesenchymal stem cells over 37 days. For each cell type, percentage of lipid area, number of droplets per cell, and droplet diameter were obtained every 2-3 days. In 3T3-L1 cultures, high insulin concentration (10 microg/mL) yielded a significantly different (p < 0.01) time course of all three outcome measures. In mesenchymal stem cell cultures, high fetal bovine serum concentration (12.5%) produced significantly more lipid area (p < 0.01). Our method was able to successfully characterize time courses and extents of adipogenesis and is useful for a wide range of applications testing the effects of biochemical, mechanical, and thermal stimulations in tissue engineering of bioartificial fat constructs.
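The kind of stain-free, image-based quantification described above can be illustrated with a simplified thresholding pipeline; the sketch below uses Otsu thresholding and region labeling on a synthetic grayscale "micrograph" to estimate percent lipid area and droplet diameters, and is not the authors' actual image-processing method.

```python
# Simplified illustration (not the authors' pipeline): estimate percent lipid area and
# droplet sizes from a grayscale micrograph by Otsu thresholding and labeling bright,
# roughly circular droplet-like regions. A synthetic image stands in for real data.
import numpy as np
from skimage import draw, filters, measure

image = np.full((256, 256), 0.2)                    # dark background
rng = np.random.default_rng(0)
for _ in range(25):                                 # add bright circular "droplets"
    r, c = rng.integers(20, 236, size=2)
    rr, cc = draw.disk((r, c), radius=rng.integers(3, 8), shape=image.shape)
    image[rr, cc] = 0.8
image += rng.normal(0, 0.03, image.shape)           # camera-like noise

mask = image > filters.threshold_otsu(image)
labels = measure.label(mask)
props = measure.regionprops(labels)

percent_lipid_area = 100.0 * mask.sum() / mask.size
diameters = [p.equivalent_diameter for p in props]
print(f"lipid area: {percent_lipid_area:.1f}%, droplets: {len(props)}, "
      f"mean diameter: {np.mean(diameters):.1f} px")
```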
NASA Astrophysics Data System (ADS)
Bondar, M. Luiza; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben
2013-08-01
For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings is up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potentials in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
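The core memory-saving idea can be sketched with scikit-learn's randomized SVD: compress a large dictionary to a low-rank representation and match a measured signal in that subspace. The dictionary below is a set of toy exponential decays rather than Bloch-simulated fingerprints, and the rank and sizes are placeholders.

```python
# Minimal sketch of the core idea: compress a large MRF-style dictionary D
# (entries x time points) with a randomized SVD and match a measured signal in the
# resulting low-rank subspace. Toy decay curves stand in for simulated fingerprints.
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
t = np.linspace(0.01, 3.0, 500)                        # time points (s), made up
T1 = rng.uniform(0.2, 2.5, size=10000)                 # placeholder "parameters"
D = np.exp(-t[None, :] / T1[:, None])                  # toy dictionary of decay curves

rank = 10
U, S, Vt = randomized_svd(D, n_components=rank, random_state=0)
D_compressed = U * S                                   # each entry kept as `rank` numbers

signal = D[1234] + 0.01 * rng.standard_normal(t.size)  # noisy copy of one entry
signal_compressed = Vt @ signal                        # project onto low-rank subspace

scores = (D_compressed @ signal_compressed) / np.linalg.norm(D_compressed, axis=1)
best = int(np.argmax(scores))                          # matching in the compressed space
print(f"matched T1 = {T1[best]:.3f} s, true T1 = {T1[1234]:.3f} s")
```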
NASA Astrophysics Data System (ADS)
McLaughlin, P. W.; Kaihatu, J. M.; Irish, J. L.; Taylor, N. R.; Slinn, D.
2013-12-01
Recent hurricane activity in the Gulf of Mexico has led to a need for accurate, computationally efficient prediction of hurricane damage so that communities can better assess risk of local socio-economic disruption. This study focuses on developing robust, physics based non-dimensional equations that accurately predict maximum significant wave height at different locations near a given hurricane track. These equations (denoted as Wave Response Functions, or WRFs) were developed from presumed physical dependencies between wave heights and hurricane characteristics and fit with data from numerical models of waves and surge under hurricane conditions. After curve fitting, constraints which correct for fully developed sea state were used to limit the wind wave growth. When applied to the region near Gulfport, MS, back prediction of maximum significant wave height yielded root mean square errors between 0.22-0.42 (m) at open coast stations and 0.07-0.30 (m) at bay stations when compared to the numerical model data. The WRF method was also applied to Corpus Christi, TX and Panama City, FL with similar results. Back prediction errors will be included in uncertainty evaluations connected to risk calculations using joint probability methods. These methods require thousands of simulations to quantify extreme value statistics, thus requiring the use of reduced methods such as the WRF to represent the relevant physical processes.
Bordelon, B M; Hobday, K A; Hunter, J G
1992-01-01
An unsolved problem of laparoscopic cholecystectomy is the optimal method of removing the gallbladder with thick walls and a large stone burden. Proposed solutions include fascial dilatation, stone crushing, and ultrasonic, high-speed rotary, or laser lithotripsy. Our observation was that extension of the fascial incision to remove the impacted gallbladder was time efficient and did not increase postoperative pain. We reviewed the narcotic requirements of 107 consecutive patients undergoing laparoscopic cholecystectomy. Fifty-two patients required extension of the umbilical incision, and 55 patients did not have their fascial incision enlarged. Parenteral meperidine use was 39.5 +/- 63.6 mg in the patients requiring fascial incision extension and 66.3 +/- 79.2 mg in those not requiring fascial incision extension (mean +/- standard deviation). Oral narcotic requirements were 1.1 +/- 1.5 doses vs 1.3 +/- 1.7 doses in patients with and without incision extension, respectively. The wide range of narcotic use in both groups makes these apparent differences not statistically significant. We conclude that protracted attempts at stone crushing or expensive stone fragmentation devices are unnecessary for the extraction of a difficult gallbladder during laparoscopic cholecystectomy.
Aircraft family design using enhanced collaborative optimization
NASA Astrophysics Data System (ADS)
Roth, Brian Douglas
Significant progress has been made toward the development of multidisciplinary design optimization (MDO) methods that are well-suited to practical large-scale design problems. However, opportunities exist for further progress. This thesis describes the development of enhanced collaborative optimization (ECO), a new decomposition-based MDO method. To support the development effort, the thesis offers a detailed comparison of two existing MDO methods: collaborative optimization (CO) and analytical target cascading (ATC). This aids in clarifying their function and capabilities, and it provides inspiration for the development of ECO. The ECO method offers several significant contributions. First, it enhances communication between disciplinary design teams while retaining the low-order coupling between them. Second, it provides disciplinary design teams with more authority over the design process. Third, it resolves several troubling computational inefficiencies that are associated with CO. As a result, ECO provides significant computational savings (relative to CO) for the test cases and practical design problems described in this thesis. New aircraft development projects seldom focus on a single set of mission requirements. Rather, a family of aircraft is designed, with each family member tailored to a different set of requirements. This thesis illustrates the application of decomposition-based MDO methods to aircraft family design. This represents a new application area, since MDO methods have traditionally been applied to multidisciplinary problems. ECO offers aircraft family design the same benefits that it affords to multidisciplinary design problems. Namely, it simplifies analysis integration, it provides a means to manage problem complexity, and it enables concurrent design of all family members. In support of aircraft family design, this thesis introduces a new wing structural model with sufficient fidelity to capture the tradeoffs associated with component commonality, but of appropriate fidelity for aircraft conceptual design. The thesis also introduces a new aircraft family concept. Unlike most families, the intent is not necessarily to produce all family members. Rather, the family includes members for immediate production and members that address potential future market conditions and/or environmental regulations. The result is a set of designs that yield a small performance penalty today in return for significant future flexibility to produce family members that respond to new market conditions and environmental regulations.
NASA Astrophysics Data System (ADS)
Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin
2017-02-01
This paper exploits an error tracking control method for overhead crane systems for which the error trajectories for the trolley and the payload swing can be pre-specified. The proposed method does not require that the initial payload swing angle remains zero, whereas this requirement is usually assumed in conventional methods. The significant feature of the proposed method is its superior control performance as well as its strong robustness over different or uncertain rope lengths, payload masses, desired positions, initial payload swing angles, and external disturbances. Owing to the same attenuation behavior, the desired error trajectory for the trolley for each traveling distance is not needed to be reset, which is easy to implement in practical applications. By converting the error tracking overhead crane dynamics to the objective system, we obtain the error tracking control law for arbitrary initial payload swing angles. Lyapunov techniques and LaSalle's invariance theorem are utilized to prove the convergence and stability of the closed-loop system. Simulation and experimental results are illustrated to validate the superior performance of the proposed error tracking control method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, S.J.; Hensley, C.A.; Armenta, C.E.
1997-03-01
Recent developments in extraction chromatography have simplified the separation of americium from complex matrices in preparation for α-spectroscopy relative to traditional methods. Here we present results of procedures developed/adapted for water, air, and bioassay samples with less than 1 g of inorganic residue. Prior analytical methods required the use of a complex, multistage procedure for separation of americium from these matrices. The newer, simplified procedure requires only a single 2 mL extraction chromatographic separation for isolation of Am and lanthanides from other components of the sample. This method has been implemented on an extensive variety of 'real' environmental and bioassay samples from the Los Alamos area, and consistently reliable and accurate results with appropriate detection limits have been obtained. The new method increases analytical throughput by a factor of ≈2 and decreases environmental hazards from acid and mixed-waste generation relative to the prior technique. Analytical accuracy, reproducibility, and reliability are also significantly improved over the more complex and laborious method used previously. 24 refs., 2 figs., 2 tabs.
NASA Astrophysics Data System (ADS)
Hritz, Andrew D.; Raymond, Timothy M.; Dutcher, Dabrina D.
2016-08-01
Accurate estimates of particle surface tension are required for models concerning atmospheric aerosol nucleation and activation. However, it is difficult to collect the volumes of atmospheric aerosol required by typical instruments that measure surface tension, such as goniometers or Wilhelmy plates. In this work, a method that measures, ex situ, the surface tension of collected liquid nanoparticles using atomic force microscopy is presented. A film of particles is collected via impaction and is probed using nanoneedle tips with the atomic force microscope. This micro-Wilhelmy method allows for direct measurements of the surface tension of small amounts of sample. This method was verified using liquids, whose surface tensions were known. Particles of ozone oxidized α-pinene, a well-characterized system, were then produced, collected, and analyzed using this method to demonstrate its applicability for liquid aerosol samples. It was determined that oxidized α-pinene particles formed in dry conditions have a surface tension similar to that of pure α-pinene, and oxidized α-pinene particles formed in more humid conditions have a surface tension that is significantly higher.
A generalised significance test for individual communities in networks.
Kojaku, Sadamori; Masuda, Naoki
2018-05-09
Many empirical networks have community structure, in which nodes are densely interconnected within each community (i.e., a group of nodes) and sparsely across different communities. Like other local and meso-scale structure of networks, communities are generally heterogeneous in various aspects such as the size, density of edges, connectivity to other communities and significance. In the present study, we propose a method to statistically test the significance of individual communities in a given network. Compared to the previous methods, the present algorithm is unique in that it accepts different community-detection algorithms and the corresponding quality function for single communities. The present method requires that a quality of each community can be quantified and that community detection is performed as optimisation of such a quality function summed over the communities. Various community detection algorithms including modularity maximisation and graph partitioning meet this criterion. Our method estimates a distribution of the quality function for randomised networks to calculate a likelihood of each community in the given network. We illustrate our algorithm by synthetic and empirical networks.
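A hedged sketch of the general recipe follows: score a single community by a quality function (here, its modularity contribution) and compare that score with the distribution of the same quantity over degree-preserving randomizations of the network, yielding an empirical p-value. This illustrates the idea with networkx, not the paper's exact statistic or null model.

```python
# Hedged sketch of the general recipe (not the paper's exact statistic): score one
# community by its modularity contribution and compare with degree-preserving
# randomizations to obtain an empirical p-value.
import networkx as nx
import numpy as np

def community_modularity(G, nodes):
    """Modularity contribution of a single node set: e_in/m - (d_c/(2m))^2."""
    m = G.number_of_edges()
    nodes = set(nodes)
    e_in = sum(1 for u, v in G.edges(nodes) if u in nodes and v in nodes)
    d_c = sum(dict(G.degree(nodes)).values())
    return e_in / m - (d_c / (2 * m)) ** 2

def community_p_value(G, nodes, n_rand=200, seed=0):
    rng = np.random.default_rng(seed)
    observed = community_modularity(G, nodes)
    null = []
    for _ in range(n_rand):
        R = G.copy()                                   # degree-preserving rewiring
        nx.double_edge_swap(R, nswap=2 * R.number_of_edges(),
                            max_tries=20 * R.number_of_edges(),
                            seed=int(rng.integers(1e9)))
        null.append(community_modularity(R, nodes))
    return observed, float(np.mean(np.array(null) >= observed))

if __name__ == "__main__":
    G = nx.karate_club_graph()
    club = [n for n, d in G.nodes(data=True) if d["club"] == "Mr. Hi"]
    q, p = community_p_value(G, club)
    print(f"community quality = {q:.3f}, empirical p-value = {p:.3f}")
```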
Graeden, Ellie; Kerr, Justin; Sorrell, Erin M.; Katz, Rebecca
2018-01-01
Managing infectious disease requires rapid and effective response to support decision making. The decisions are complex and require understanding of the diseases, disease intervention and control measures, and the disease-relevant characteristics of the local community. Though disease modeling frameworks have been developed to address these questions, the complexity of current models presents a significant barrier to community-level decision makers in using the outputs of the most scientifically robust methods to support pragmatic decisions about implementing a public health response effort, even for endemic diseases with which they are already familiar. Here, we describe the development of an application available on the internet, including from mobile devices, with a simple user interface, to support on-the-ground decision-making for integrating disease control programs, given local conditions and practical constraints. The model upon which the tool is built provides predictive analysis for the effectiveness of integration of schistosomiasis and malaria control, two diseases with extensive geographical and epidemiological overlap, and which result in significant morbidity and mortality in affected regions. Working with data from countries across sub-Saharan Africa and the Middle East, we present a proof-of-principle method and corresponding prototype tool to provide guidance on how to optimize integration of vertical disease control programs. This method and tool demonstrate significant progress in effectively translating the best available scientific models to support practical decision making on the ground with the potential to significantly increase the efficacy and cost-effectiveness of disease control. Author summary Designing and implementing effective programs for infectious disease control requires complex decision-making, informed by an understanding of the diseases, the types of disease interventions and control measures available, and the disease-relevant characteristics of the local community. Though disease modeling frameworks have been developed to address these questions and support decision-making, the complexity of current models presents a significant barrier to on-the-ground end users. The picture is further complicated when considering approaches for integration of different disease control programs, where co-infection dynamics, treatment interactions, and other variables must also be taken into account. Here, we describe the development of an application available on the internet with a simple user interface, to support on-the-ground decision-making for integrating disease control, given local conditions and practical constraints. The model upon which the tool is built provides predictive analysis for the effectiveness of integration of schistosomiasis and malaria control, two diseases with extensive geographical and epidemiological overlap. This proof-of-concept method and tool demonstrate significant progress in effectively translating the best available scientific models to support pragmatic decision-making on the ground, with the potential to significantly increase the impact and cost-effectiveness of disease control. PMID:29649260
Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.
2001-03-08
The enantiomeric ratio (E) of an enzyme acting as a specific catalyst in the resolution of enantiomers is an important parameter in the quantitative description of these chiral resolution processes. In the present work, two novel methods, hereby called Methods I and II, for estimating E and the kinetic parameters Km and Vm of the enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were experimentally confirmed by using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained experimentally by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
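The sketch below illustrates the underlying fitting problem using a standard competitive Michaelis-Menten rate law for a mixture of enantiomers R and S, with E = (VmR/KmR)/(VmS/KmS); this is a generic formulation, not necessarily the exact equations of Methods I or II, and the rate data are synthetic.

```python
# Hedged sketch using a standard competitive Michaelis-Menten rate law for a mixture of
# enantiomers R and S (not necessarily the exact formulation of Methods I/II): fit the
# kinetic parameters from initial rates v at total concentrations C and molar fractions
# x of R, then compute E = (VmR/KmR)/(VmS/KmS). Data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def rate(CX, VmR, KmR, VmS, KmS):
    C, x = CX
    R, S = x * C, (1.0 - x) * C
    return (VmR * R / KmR + VmS * S / KmS) / (1.0 + R / KmR + S / KmS)

rng = np.random.default_rng(3)
C = np.tile([0.5, 1.0, 2.0, 5.0, 10.0], 5)
x = np.repeat([0.0, 0.25, 0.5, 0.75, 1.0], 5)
true = (10.0, 1.0, 2.0, 4.0)                           # VmR, KmR, VmS, KmS (made up)
v = rate((C, x), *true) * (1 + rng.normal(0, 0.02, C.size))

popt, _ = curve_fit(rate, (C, x), v, p0=(5.0, 0.5, 5.0, 0.5), bounds=(1e-6, np.inf))
VmR, KmR, VmS, KmS = popt
print(f"E = {(VmR / KmR) / (VmS / KmS):.1f}  (true E = {(10 / 1) / (2 / 4):.1f})")
```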
3-D rigid body tracking using vision and depth sensors.
Gedik, O Serdar; Alatan, A Aydın
2013-10-01
In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and reduce jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm based on fusion of vision and depth sensors via an extended Kalman filter is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using the intensity and shape index map data of the 3-D point cloud, significantly increases 2-D as well as 3-D tracking performance. The proposed method requires neither manual initialization of the pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes.
RapidRIP quantifies the intracellular metabolome of 7 industrial strains of E. coli.
McCloskey, Douglas; Xu, Julia; Schrübbers, Lars; Christensen, Hanne B; Herrgård, Markus J
2018-04-25
Fast metabolite quantification methods are required for high throughput screening of microbial strains obtained by combinatorial or evolutionary engineering approaches. In this study, a rapid RIP-LC-MS/MS (RapidRIP) method for high-throughput quantitative metabolomics was developed and validated that was capable of quantifying 102 metabolites from central, amino acid, energy, nucleotide, and cofactor metabolism in less than 5 minutes. The method was shown to have comparable sensitivity and resolving capability as compared to a full length RIP-LC-MS/MS method (FullRIP). The RapidRIP method was used to quantify the metabolome of seven industrial strains of E. coli revealing significant differences in glycolytic, pentose phosphate, TCA cycle, amino acid, and energy and cofactor metabolites were found. These differences translated to statistically and biologically significant differences in thermodynamics of biochemical reactions between strains that could have implications when choosing a host for bioprocessing. Copyright © 2018. Published by Elsevier Inc.
[Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].
Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping
2014-06-01
The stability and adaptability of models for qualitative analysis of near infrared spectra were studied. The separate-modeling method can significantly improve the stability and adaptability of a model, but its ability to improve adaptability is limited. The joint-modeling method can improve not only the adaptability of the model but also its stability; at the same time, compared with separate modeling, it shortens the modeling time, reduces the modeling workload, extends the term of validity of the model, and improves modeling efficiency. The model-adaptability experiment shows that the correct recognition rate of the separate-modeling method is relatively low and cannot meet application requirements, whereas the joint-modeling method reaches a correct recognition rate of 90% and significantly enhances the recognition effect. The model-stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, and the method has good application value.
Evaluation of Techniques for Measuring Microbial Hazards in Bathing Waters: A Comparative Study
Schang, Christelle; Henry, Rebekah; Kolotelo, Peter A.; Prosser, Toby; Crosbie, Nick; Grant, Trish; Cottam, Darren; O’Brien, Peter; Coutts, Scott; Deletic, Ana; McCarthy, David T.
2016-01-01
Recreational water quality is commonly monitored by means of culture based faecal indicator organism (FIOs) assays. However, these methods are costly and time-consuming; a serious disadvantage when combined with issues such as non-specificity and user bias. New culture and molecular methods have been developed to counter these drawbacks. This study compared industry-standard IDEXX methods (Colilert and Enterolert) with three alternative approaches: 1) TECTA™ system for E. coli and enterococci; 2) US EPA’s 1611 method (qPCR based enterococci enumeration); and 3) Next Generation Sequencing (NGS). Water samples (233) were collected from riverine, estuarine and marine environments over the 2014–2015 summer period and analysed by the four methods. The results demonstrated that E. coli and coliform densities, inferred by the IDEXX system, correlated strongly with the TECTA™ system. The TECTA™ system had further advantages in faster turnaround times (~12 hrs from sample receipt to result compared to 24 hrs); no staff time required for interpretation and less user bias (results are automatically calculated, compared to subjective colorimetric decisions). The US EPA Method 1611 qPCR method also showed significant correlation with the IDEXX enterococci method; but had significant disadvantages such as highly technical analysis and higher operational costs (330% of IDEXX). The NGS method demonstrated statistically significant correlations between IDEXX and the proportions of sequences belonging to FIOs, Enterobacteriaceae, and Enterococcaceae. While costs (3,000% of IDEXX) and analysis time (300% of IDEXX) were found to be significant drawbacks of NGS, rapid technological advances in this field will soon see it widely adopted. PMID:27213772
Cognitive Dysfunction in Patients with Renal Failure Requiring Hemodialysis
Thimmaiah, Rohini; Murthy, K. Krishna; Pinto, Denzil
2012-01-01
Background and Objectives: Renal failure patients show significant impairment on measures of attention and memory, and consistently perform significantly better on neuropsychological measures of memory and attention approximately 24 hours after hemodialysis treatment. The objective was to determine the cognitive dysfunction in patients with renal failure requiring hemodialysis. Materials and Methods: A total of 60 subjects comprising 30 renal failure patients and 30 controls were recruited. The sample was matched for age, sex, and socioeconomic status. The tools used were the Standardized Mini-Mental State Examination and the Brief Cognitive Rating Scale. Results: The patients showed the highest cognitive dysfunction in the pre-dialysis group, in all five dimensions (concentration, recent memory, past memory, orientation and functioning, and self-care), and the least in the 24-hour post-dialysis group. This difference was statistically significant (P=0.001). Conclusion: Patients with renal failure exhibited pronounced cognitive impairment, and these functions improved significantly after the introduction of hemodialysis. PMID:23439613
Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency
NASA Astrophysics Data System (ADS)
Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu
2018-03-01
Various computational approaches, from rule-based to model-based methods, exist to place Sub-Resolution Assist Features (SRAF) in order to increase the process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between development time, accuracy, consistency and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model which is sensitive to the pattern placement on the native simulation grid, and can be impacted by related grid dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve the required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full-chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural-network-generated SRAF guidance map is then used to place SRAFs on full chip. This differs from our existing full-chip MB-SRAF approach, which utilizes an SRAF guidance map (SGM) of mask sensitivity to improve the contrast of the optical image at the target pattern edges. In this paper, we demonstrate that machine learning assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining consistency of SRAF placement. We describe the current status of this machine learning assisted SRAF technique, demonstrate its application to full-chip mask synthesis, and discuss how it can extend the computational lithography roadmap.
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
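For readers wanting a concrete picture of the comparison, the following minimal Python sketch (numpy assumed; not the authors' code) contrasts a local-linearization step of the quaternion kinematics q̇ = ½Ω(ω)q, which for piecewise-constant body rates reduces to a closed-form matrix-exponential update, with a classical second-order (Heun) step. The rate profile, step size, and variable names are illustrative only.

```python
import numpy as np

def omega_matrix(w):
    """4x4 skew-symmetric matrix of body rates w = (p, q, r): dq/dt = 0.5 * Omega(w) @ q."""
    p, q, r = w
    return np.array([[0.0, -p,  -q,  -r],
                     [p,   0.0,  r,  -q],
                     [q,  -r,   0.0,  p],
                     [r,   q,  -p,   0.0]])

def step_local_linearization(quat, w, h):
    """Exact update for rates held constant over the step (what local linearization yields)."""
    n = np.linalg.norm(w)
    if n < 1e-12:
        return quat
    phi = 0.5 * n * h
    # exp(0.5*h*Omega) = cos(phi)*I + (sin(phi)/n)*Omega, since Omega^2 = -n^2 * I
    M = np.cos(phi) * np.eye(4) + (np.sin(phi) / n) * omega_matrix(w)
    return M @ quat

def step_second_order(quat, w, h):
    """Classical second-order (Heun/trapezoidal) step on dq/dt = 0.5*Omega*q."""
    f = lambda q: 0.5 * omega_matrix(w) @ q
    k1 = f(quat)
    k2 = f(quat + h * k1)
    return quat + 0.5 * h * (k1 + k2)

# Illustrative comparison at a high spin rate (rad/s); values are not from the paper.
q_exact = np.array([1.0, 0.0, 0.0, 0.0])
q_rk2 = q_exact.copy()
w, h = np.array([0.0, 0.0, 20.0]), 0.02
for _ in range(500):
    q_exact = step_local_linearization(q_exact, w, h)
    q_rk2 = step_second_order(q_rk2, w, h)
    q_rk2 /= np.linalg.norm(q_rk2)   # renormalize, as is common in practice
print(abs(np.linalg.norm(q_exact) - 1.0), np.linalg.norm(q_exact - q_rk2))
```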
High strength air-dried aerogels
Coronado, Paul R.; Satcher, Jr., Joe H.
2012-11-06
A method for the preparation of high-strength air-dried organic aerogels. The method involves the sol-gel polymerization of organic gel precursors, such as resorcinol with formaldehyde (RF), in aqueous solvents with R/C ratios greater than about 1000 and R/F ratios less than about 1:2.1. Using a procedure analogous to the preparation of resorcinol-formaldehyde (RF) aerogels, this approach generates wet gels that can be air dried at ambient temperature and pressure. The method significantly reduces the time and/or energy required to produce a dried aerogel compared to conventional methods using supercritical solvent extraction. The air-dried gel typically exhibits less than 5% shrinkage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swamy, S.A.; Bhowmick, D.C.; Prager, D.E.
The regulatory requirements for postulated pipe ruptures have changed significantly since the first nuclear plants were designed. The Leak-Before-Break (LBB) methodology is now accepted as a technically justifiable approach for eliminating postulation of double-ended guillotine breaks (DEGB) in high energy piping systems. The previous pipe rupture design requirements for nuclear power plant applications are responsible for all the numerous and massive pipe whip restraints and jet shields installed for each plant. This results in significant plant congestion, increased labor costs and radiation dosage for normal maintenance and inspection. Also, the restraints increase the probability of interference between the piping and supporting structures during plant heatup, thereby potentially impacting overall plant reliability. The LBB approach to eliminate postulating ruptures in high energy piping systems is a significant improvement over former regulatory methodologies, and therefore, the LBB approach to design is gaining worldwide acceptance. However, the methods and criteria for LBB evaluation depend upon the policy of individual countries, and significant effort continues towards accomplishing uniformity on a global basis. In this paper the historical development of the U.S. LBB criteria will be traced and the results of an LBB evaluation for a typical Japanese PWR primary loop applying U.S. NRC approved methods will be presented. In addition, another approach using the Japanese LBB criteria will be shown and compared with the U.S. criteria. The comparison will be highlighted in this paper with detailed discussion.
Strotman, Lindsay N; Lin, Guangyun; Berry, Scott M; Johnson, Eric A; Beebe, David J
2012-09-07
Extraction and purification of DNA is a prerequisite to detection and analytical techniques. While DNA sample preparation methods have improved over the last few decades, current methods are still time consuming and labor intensive. Here we demonstrate a technology termed IFAST (Immiscible Filtration Assisted by Surface Tension), that relies on immiscible phase filtration to reduce the time and effort required to purify DNA. IFAST replaces the multiple wash and centrifugation steps required by traditional DNA sample preparation methods with a single step. To operate, DNA from lysed cells is bound to paramagnetic particles (PMPs) and drawn through an immiscible fluid phase barrier (i.e. oil) by an external handheld magnet. Purified DNA is then eluted from the PMPs. Here, detection of Clostridium botulinum type A (BoNT/A) in food matrices (milk, orange juice), a bioterrorism concern, was used as a model system to establish IFAST's utility in detection assays. Data validated that the DNA purified by IFAST was functional as a qPCR template to amplify the bont/A gene. The sensitivity limit of IFAST was comparable to the commercially available Invitrogen ChargeSwitch® method. Notably, pathogen detection via IFAST required only 8.5 μL of sample and was accomplished in five-fold less time. The simplicity, rapidity and portability of IFAST offer significant advantages when compared to existing DNA sample preparation methods.
Francis, Maureen D.; Wieland, Mark L.; Drake, Sean; Gwisdalla, Keri Lyn; Julian, Katherine A.; Nabors, Christopher; Pereira, Anne; Rosenblum, Michael; Smith, Amy; Sweet, David; Thomas, Kris; Varney, Andrew; Warm, Eric; Wininger, David; Francis, Mark L.
2015-01-01
Background Many internal medicine (IM) programs have reorganized their resident continuity clinics to improve trainees' ambulatory experience. Downstream effects on continuity of care and other clinical and educational metrics are unclear. Methods This multi-institutional, cross-sectional study included 713 IM residents from 12 programs. Continuity was measured using the usual provider of care method (UPC) and the continuity for physician method (PHY). Three clinic models (traditional, block, and combination) were compared using analysis of covariance. Multivariable linear regression analysis was used to analyze the effect of practice metrics and clinic model on continuity. Results UPC, reflecting continuity from the patient perspective, was significantly different, and was highest in the block model, midrange in combination model, and lowest in the traditional model programs. PHY, reflecting continuity from the perspective of the resident provider, was significantly lower in the block model than in combination and traditional programs. Panel size, ambulatory workload, utilization, number of clinics attended in the study period, and clinic model together accounted for 62% of the variation found in UPC and 26% of the variation found in PHY. Conclusions Clinic model appeared to have a significant effect on continuity measured from both the patient and resident perspectives. Continuity requires balance between provider availability and demand for services. Optimizing this balance to maximize resident education, and the health of the population served, will require consideration of relevant local factors and priorities in addition to the clinic model. PMID:26217420
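As an illustration of how continuity is typically scored with the usual provider of care method, the sketch below computes the UPC index as the share of a patient's visits made to the single provider seen most often. This is the standard formulation; the study's exact operationalization (and the PHY variant) may differ, so the example is only a hedged approximation with hypothetical data.

```python
from collections import Counter

def upc_index(visit_providers):
    """Usual Provider of Care index: share of a patient's visits that went to
    the single provider they saw most often (1.0 = perfect continuity)."""
    if not visit_providers:
        return float("nan")
    counts = Counter(visit_providers)
    return max(counts.values()) / len(visit_providers)

# Hypothetical patient seen 6 times, 4 of them by resident "R1".
print(upc_index(["R1", "R1", "R2", "R1", "R3", "R1"]))  # 0.666...
```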
Audiology practice management in South Africa: What audiologists know and what they should know
Kritzinger, Alta; Soer, Maggi
2015-01-01
Background In future, the South African Department of Health aims to purchase services from accredited private service providers. Successful private audiology practices can assist to address issues of access, equity and quality of health services. It is not sufficient to be an excellent clinician, since audiology practices are businesses that must also be managed effectively. Objective The objective was to determine the existing and required levels of practice management knowledge as perceived by South African audiologists. Method An electronic descriptive survey was used to investigate audiology practice management amongst South African audiologists. A total of 147 respondents completed the survey. Results were analysed by calculating descriptive statistics. The Z-proportional test was used to identify significant differences between existing and required levels of practice management knowledge. Results Significant differences were found between existing and required levels of knowledge regarding all eight practice management tasks, particularly legal and ethical issues and marketing and accounting. There were small differences in the knowledge required for practice management tasks amongst respondents working in public and private settings. Conclusion Irrespective of their work context, respondents showed that they need significant expansion of practice management knowledge in order to be successful, to compete effectively and to make sense of a complex marketplace. PMID:26809158
Nutritional requirements of sheep, goats and cattle in warm climates: a meta-analysis.
Salah, N; Sauvant, D; Archimède, H
2014-09-01
The objective of the study was to update the energy and protein requirements of growing sheep, goats and cattle in warm areas through a meta-analysis of 590 publications. Requirements were expressed on a metabolic live weight (MLW = LW^0.75) and LW^1 basis. The maintenance requirements for energy were 542.64 and 631.26 kJ ME/kg LW^0.75 for small ruminants and cattle, respectively, and the difference was significant (P<0.01). The corresponding requirement for 1 g of gain was 24.3 kJ ME, without any significant effect of species. Relative to LW^0.75, there was no difference among genotypes within species in terms of ME requirement for maintenance and gain. However, small ruminants of warm and tropical climates appeared to have higher ME requirements for maintenance relative to live weight (LW) compared with temperate-climate ones and cattle. Maintenance requirements for protein were estimated via two approaches. For these two methods, the data in which retained nitrogen (RN) was used cover the same range of variability of observations. The regression of digestible CP intake (DCPI, g/kg LW^0.75) against RN (g/kg LW^0.75) indicated that DCP requirements are significantly higher in sheep (3.36 g/kg LW^0.75) than in goats (2.38 g/kg LW^0.75), with cattle intermediate (2.81 g/kg LW^0.75), without any significant difference in the quantity of DCPI per g of retained CP (RCP) (40.43). Regressing metabolisable protein (MP) or minimal digestible protein in the intestine (PDImin) against RCP showed that there was no difference between species and genotypes, neither for the intercept (maintenance = 3.51 g/kg LW^0.75 for sheep and goats v. 4.35 for cattle) nor for the slope (growth = 0.60 g MP/g RCP). The regression of DCP against ADG showed that DCP requirements did not differ among species or genotypes. These new feeding standards are derived from a wider range of nutritional conditions than existing feeding standards, as they are based on a larger database. The standards seem to be more appropriate for ruminants in warm and tropical climates around the world.
Ji, Xing-jie; Cheng, Lin; Fang, Wen-song
2015-09-01
Based on the analysis of water requirement and water deficit during the development stages of winter wheat over the recent 30 years (1981-2010) in Henan Province, the effective precipitation was calculated using the U.S. Department of Agriculture Soil Conservation Service method, and the water requirement (ETc) was estimated using the FAO Penman-Monteith equation and the crop coefficient method recommended by FAO. Combined with the climate change scenarios A2 (emphasis on economic development) and B2 (emphasis on sustainable development) of the Special Report on Emissions Scenarios (SRES), the spatial and temporal characteristics of the impacts of future climate change on the effective precipitation, water requirement and water deficit of winter wheat were estimated. The climatic factors affecting ETc and WD were also analyzed. The results showed that under the A2 and B2 scenarios, there would be a significant increase in the anomaly percentage of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period compared with the average value from 1981 to 2010. Effective precipitation increased the most in the 2030s under the A2 and B2 scenarios, by 33.5% and 39.2%, respectively. Water requirement increased the most in the 2010s under the A2 and B2 scenarios, by 22.5% and 17.5%, respectively, and showed a significant downward trend with time. Water deficit increased the most under the A2 scenario in the 2010s, by 23.6%, and under the B2 scenario in the 2020s, by 13.0%. Partial correlation analysis indicated that solar radiation was the main cause of the variation of ETc and WD in the future under the A2 and B2 scenarios. The spatial distributions of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period were heterogeneous because of differences in geographical and climatic environments. A possible tendency toward water resource deficiency may exist in Henan Province in the future.
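As a rough illustration of the quantities involved, the sketch below (Python, illustrative values) combines the FAO crop-coefficient relation ETc = Kc × ET0 with the commonly cited monthly USDA-SCS effective-precipitation formula and defines water deficit as the shortfall of effective precipitation against ETc. The exact formula variants and coefficients used by the authors may differ, so treat this as an assumption-laden sketch.

```python
def effective_precip_usda_scs(p_month_mm):
    """Monthly effective precipitation (mm) by the commonly cited USDA-SCS formula
    (assumption: this is the variant the authors used)."""
    if p_month_mm <= 250.0:
        return p_month_mm * (125.0 - 0.2 * p_month_mm) / 125.0
    return 125.0 + 0.1 * p_month_mm

def crop_water_requirement(et0_mm, kc):
    """FAO crop-coefficient method: ETc = Kc * ET0 (Penman-Monteith reference ET assumed for ET0)."""
    return kc * et0_mm

def water_deficit(etc_mm, pe_mm):
    """Water deficit: the crop water requirement not met by effective precipitation."""
    return max(etc_mm - pe_mm, 0.0)

# Hypothetical month during winter wheat growth: ET0 = 90 mm, Kc = 1.05, rainfall = 40 mm.
pe = effective_precip_usda_scs(40.0)
etc = crop_water_requirement(90.0, 1.05)
print(round(pe, 1), round(etc, 1), round(water_deficit(etc, pe), 1))
```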
NASA Astrophysics Data System (ADS)
Miyajo, Akira; Hasegawa, Hideyuki
2018-07-01
At present, the speckle tracking method is widely used as a two- or three-dimensional (2D or 3D) motion estimator for the measurement of cardiovascular dynamics. However, this method requires interpolation of a function that evaluates the similarity between ultrasonic echo signals in two frames in order to estimate a small subsample displacement in high-frame-rate ultrasound, which results in a high computational cost. To overcome this problem, a 2D motion estimator using the 2D Fourier transform, which does not require any interpolation process, was proposed by our group. In this study, we compared the accuracies of the speckle tracking method and our method using the 2D motion estimator, and applied the proposed method to the measurement of motion of a human carotid arterial wall. The bias error and standard deviation in the lateral velocity estimates obtained by the proposed method were 0.048 and 0.282 mm/s, respectively, which were significantly better than those (-0.366 and 1.169 mm/s) obtained by the speckle tracking method. The calculation time of the proposed phase-sensitive method was 97% shorter than that of the speckle tracking method. Furthermore, the in vivo experimental results showed that a characteristic change in velocity around the carotid bifurcation could be detected by the proposed method.
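The proposed estimator is phase based rather than correlation-peak based. As a generic illustration of how a 2D displacement can be recovered from Fourier-domain phase without interpolating a similarity function, the following numpy sketch implements standard phase correlation on synthetic frames; it is not the authors' algorithm, and the test data are artificial.

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the 2D displacement of frame_b relative to frame_a from the
    normalized cross-spectrum (no similarity-function interpolation needed)."""
    Fa, Fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array(peak, dtype=float)
    # wrap shifts larger than half the frame to negative displacements
    for axis, size in enumerate(corr.shape):
        if shift[axis] > size / 2:
            shift[axis] -= size
    return shift  # (rows, cols)

# Synthetic speckle-like frames with a known shift of (3, -5) pixels.
rng = np.random.default_rng(0)
img = rng.normal(size=(128, 128))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(phase_correlation_shift(img, shifted))  # approx. [ 3. -5.]
```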
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goffin, Mark A., E-mail: mark.a.goffin@gmail.com; Buchan, Andrew G.; Dargaville, Steven
2015-01-15
A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.
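For orientation, goal-based estimators of this kind are usually written in a dual-weighted-residual form. A generic statement is given below in LaTeX; the notation (transport operator A, source s, functional J, adjoint ψ) is illustrative and may differ from the paper's exact estimator.

```latex
% Goal-based (dual-weighted residual) error estimate, generic form.
% A \phi = s is the transport problem, \phi_h the discrete solution,
% J(\phi) the functional of interest, and \psi the adjoint solution of A^{*}\psi = \partial J/\partial\phi.
\[
  \delta J \,=\, J(\phi) - J(\phi_h) \;\approx\; \bigl\langle \psi,\; r(\phi_h) \bigr\rangle,
  \qquad r(\phi_h) \,=\, s - A\,\phi_h .
\]
```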
Ultraviolet-C Irradiation: A Novel Pasteurization Method for Donor Human Milk.
Christen, Lukas; Lai, Ching Tat; Hartmann, Ben; Hartmann, Peter E; Geddes, Donna T
2013-01-01
Holder pasteurization (milk held at 62.5°C for 30 minutes) is the standard treatment method for donor human milk. Although this method of pasteurization is able to inactivate most bacteria, it also inactivates important bioactive components. Therefore, the objective of this study was to investigate ultraviolet irradiation as an alternative treatment method for donor human milk. Human milk samples were inoculated with five species of bacteria and then UV-C irradiated. Untreated and treated samples were analysed for bacterial content, bile salt-stimulated lipase (BSSL) activity, alkaline phosphatase (ALP) activity, and fatty acid profile. All five species of bacteria reacted similarly to UV-C irradiation, with higher dosages being required at higher concentrations of total solids in the human milk sample. The decimal reduction dosage was 289±17 and 945±164 J/l for total solids of 107 and 146 g/l, respectively. No significant changes in the fatty acid profile, BSSL activity or ALP activity were observed up to the dosage required for a 5-log10 reduction of the five species of bacteria. UV-C irradiation is capable of reducing vegetative bacteria in human milk to the requirements of milk bank guidelines with no loss of BSSL or ALP activity and no change in the fatty acid profile.
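Assuming log-linear (first-order) inactivation, the dose needed for a 5-log10 reduction is about five times the decimal reduction dosage. The short sketch below works this out for the two reported D-values; the arithmetic is illustrative only.

```python
def dose_for_log_reduction(d10_j_per_l, log10_reduction=5):
    """UV-C dose needed for a given log10 bacterial reduction, assuming first-order
    (log-linear) inactivation so each additional log10 costs one decimal reduction dose."""
    return d10_j_per_l * log10_reduction

# Reported decimal reduction dosages for the two total-solids levels (J/l).
for d10 in (289, 945):
    print(d10, "->", dose_for_log_reduction(d10), "J/l for a 5-log10 reduction")
```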
Yan, Xiaojuan; Yang, Feng; Zhou, Hanyun; Zhang, Hongshen; Liu, Jianfei; Ma, Kezhong; Li, Yi; Zhu, Jun; Ding, Jianqiang
2015-01-01
Background Warfarin is used in several conditions with thrombotic risk, such as cardiac valve replacement, and VKORC1 is relevant to the response to this therapy. Some single-nucleotide polymorphisms (SNPs) in VKORC1 are documented to be associated with clinical differences in warfarin maintenance dose. This study explored the correlations of VKORC1–1639 G/A, 1173 C/T and 497 T/G genetic polymorphisms with warfarin maintenance dose requirement in patients undergoing cardiac valve replacement. Material/Methods A total of 298 patients undergoing cardiac valve replacement were recruited. During follow-up, clinical data were recorded. The polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method was applied to detect VKORC1–1639 G/A, 1173 C/T and 497 T/G polymorphisms, and genotypes were analyzed. Results Correlations between warfarin maintenance dose and baseline characteristics revealed statistically significant associations of age, gender and operation method with warfarin maintenance dose (all P<0.05). Warfarin maintenance dose in VKORC1–1639 G/A AG + GG carriers was clearly higher than in AA carriers (P<0.001). Compared with patients with the TT genotype of VKORC1 1173 C/T, warfarin maintenance dose was higher in patients with the CT genotype (P<0.001). Linear regression analysis revealed that gender, operation method, method of heart valve replacement, and the VKORC1–1639 G/A and 1173 C/T polymorphisms were significantly related to warfarin maintenance dose (all P<0.05). Conclusions VKORC1 gene polymorphisms are key genetic factors affecting individual differences in warfarin maintenance dose in patients undergoing cardiac valve replacement; gender, operation method and method of heart valve replacement might also correlate with warfarin maintenance dose. PMID:26583785
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
Varicella Immunization Requirements for US Colleges: 2014–2015 academic year
Leung, Jessica; Marin, Mona; Leino, Victor; Even, Susan; Bialek, Stephanie R.
2017-01-01
Objective To obtain information on varicella pre-matriculation requirements in US colleges for undergraduate students during the 2014–2015 academic year. Participants Healthcare professionals and member-schools of the American College Health Association (ACHA). Methods An electronic survey was sent to ACHA members regarding school characteristics and whether schools had policies in place requiring that students show proof of 2-doses of varicella vaccination for school attendance. Results Only 27% (101/370) of schools had a varicella pre-matriculation requirement for undergraduate students. Only 68% of schools always enforced this requirement. Private schools, 4-year schools, Northeastern schools, those with <5,000 students, and schools located in a state with a 2-dose varicella vaccine mandate were significantly more likely to have a varicella pre-matriculation requirement. Conclusions A small proportion of US colleges have a varicella pre-matriculation requirement for varicella immunity. College vaccination requirements are an important tool for controlling varicella in these settings. PMID:26829449
Does the use of automated fetal biometry improve clinical work flow efficiency?
Espinoza, Jimmy; Good, Sara; Russell, Evie; Lee, Wesley
2013-05-01
This study was designed to compare the work flow efficiency of manual measurements of 5 fetal parameters with a novel technique that automatically measures these parameters from 2-dimensional sonograms. This prospective study included 200 singleton pregnancies between 15 and 40 weeks' gestation. Patients were randomly allocated to either manual (n = 100) or automatic (n = 100) fetal biometry. The automatic measurement was performed using a commercially available software application. A digital video recorder captured all on-screen activity associated with the sonographic examination. The examination time and number of steps required to obtain fetal measurements were compared between manual and automatic methods. The mean time required to obtain the biometric measurements was significantly shorter using the automated technique than the manual approach (P < .001 for all comparisons). Similarly, the mean number of steps required to perform these measurements was significantly fewer with automatic measurements compared to the manual technique (P < .001). In summary, automated biometry reduced the examination time required for standard fetal measurements. This approach may improve work flow efficiency in busy obstetric sonography practices.
Adolescent Immunization Coverage and Implementation of New School Requirements in Michigan, 2010
DeVita, Stefanie F.; Vranesich, Patricia A.; Boulton, Matthew L.
2014-01-01
Objectives. We examined the effect of Michigan’s new school rules and vaccine coadministration on time to completion of all the school-required vaccine series, the individual adolescent vaccines newly required for sixth grade in 2010, and initiation of the human papillomavirus (HPV) vaccine series, which was recommended but not required for girls. Methods. Data were derived from the Michigan Care Improvement Registry, a statewide Immunization Information System. We assessed the immunization status of Michigan children enrolled in sixth grade in 2009 or 2010. We used univariable and multivariable Cox regression models to identify significant associations between each factor and school completeness. Results. Enrollment in sixth grade in 2010 and coadministration of adolescent vaccines at the first adolescent visit were significantly associated with completion of the vaccines required for Michigan’s sixth graders. Children enrolled in sixth grade in 2010 had higher coverage with the newly required adolescent vaccines by age 13 years than did sixth graders in 2009, but there was little difference in the rate of HPV vaccine initiation among girls. Conclusions. Education and outreach efforts, particularly regarding the importance and benefits of coadministration of all recommended vaccines in adolescents, should be directed toward health care providers, parents, and adolescents. PMID:24922144
Partition of unity finite element method for quantum mechanical materials calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pask, J. E.; Sukumar, N.
2016-11-09
The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite-differences (FD) and finite-elements (FE) have partially addressed both resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: excessive number of degrees of freedom (basis functions) needed to achieve the required accuracies. In this paper, we present a real-space partition of unity finite element (PUFE) method to solve the Kohn–Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution process using partition-of-unity enrichment techniques in finite element analysis. The method developed herein is completely general, applicable to metals and insulators alike, and particularly efficient for deep, localized potentials, as occur in calculations at extreme conditions of pressure and temperature. Full self-consistent Kohn–Sham calculations are presented for LiH, involving light atoms, and CeAl, involving heavy atoms with large numbers of atomic-orbital enrichments. We find that the new PUFE approach attains the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the PW method. As a result, we compute the equation of state of LiH and show that the computed lattice constant and bulk modulus are in excellent agreement with reference PW results, while requiring an order of magnitude fewer degrees of freedom to obtain.
Fishman, M. J.
1993-01-01
Methods to be used to analyze samples of water, suspended sediment and bottom material for their content of inorganic and organic constituents are presented. Technology continually changes, and so this laboratory manual includes new and revised methods for determining the concentration of dissolved constituents in water, whole water recoverable constituents in water-suspended sediment samples, and recoverable concentration of constituents in bottom material. For each method, the general topics covered are the application, the principle of the method, interferences, the apparatus and reagents required, a detailed description of the analytical procedure, reporting results, units and significant figures, and analytical precision data. Included in this manual are 30 methods.
Implicit Shape Models for Object Detection in 3d Point Clouds
NASA Astrophysics Data System (ADS)
Velizhev, A.; Shapovalov, R.; Schindler, K.
2012-07-01
We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor, more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and obtain significant improvement in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m2 of urban area in total.
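To make the voting idea concrete, the following simplified Python sketch (2D for brevity, hypothetical offsets) shows how matched local features cast votes for candidate object centers and how centers are read off an accumulator grid. The actual method operates on 3D point clouds with learned codebooks and additional verification steps, so this is only an illustrative skeleton.

```python
import numpy as np

def ism_vote_object_centers(keypoints, matched_offsets, grid_shape, cell=1.0, min_votes=3):
    """Implicit-shape-model style voting: every matched local feature casts votes for
    possible object centers (keypoint position + learned center offsets); centers are
    read off as accumulator cells that collect enough votes."""
    acc = np.zeros(grid_shape, dtype=int)
    for kp, offsets in zip(keypoints, matched_offsets):
        for off in offsets:
            c = np.floor((np.asarray(kp) + np.asarray(off)) / cell).astype(int)
            if np.all(c >= 0) and np.all(c < np.asarray(grid_shape)):
                acc[tuple(c)] += 1
    return np.argwhere(acc >= min_votes), acc

# Toy 2D example (the paper works in 3D): three keypoints of one object, each trained
# with offsets pointing back to the object center near (5, 5).
kps = [(3.0, 4.0), (6.5, 5.5), (5.0, 7.0)]
offs = [[(2.0, 1.0)], [(-1.5, -0.5)], [(0.0, -2.0)]]
centers, acc = ism_vote_object_centers(kps, offs, grid_shape=(10, 10), min_votes=3)
print(centers)  # one strong hypothesis at cell (5, 5)
```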
Nursing students' mathematic calculation skills.
Rainboth, Lynde; DeMasi, Chris
2006-12-01
This mixed-method study used a pre-test/post-test design to evaluate the efficacy of a teaching strategy in improving beginning nursing student learning outcomes. During a 4-week teaching period, a convenience sample of 54 sophomore-level nursing students was required to complete calculation assignments, taught one calculation method, and mandated to attend medication calculation classes. These students completed pre- and post-math tests and a major medication mathematics exam. Scores from the intervention student group were compared to those achieved by the previous sophomore class. Results demonstrated a statistically significant improvement from pre- to post-test, and the students who received the intervention had statistically significantly higher scores on the major medication calculation exam than did the students in the control group. The evaluation completed by the intervention group showed that the students were satisfied with the method and outcome.
History by history statistical estimators in the BEAM code system.
Walters, B R B; Kawrakow, I; Rogers, D W O
2002-12-01
A history by history method for estimating uncertainties has been implemented in the BEAMnrc and DOSXYZnrc codes, replacing the method of statistical batches. This method groups scored quantities (e.g., dose) by primary history. When phase-space sources are used, this method groups incident particles according to the primary histories that generated them. This necessitated adding markers (negative energy) to phase-space files to indicate the first particle generated by a new primary history. The new method greatly reduces the uncertainty in the uncertainty estimate. It also eliminates one dimension (which kept the results for each batch) from all scoring arrays, decreasing the memory requirement by a factor of 2. Correlations between particles in phase-space sources are taken into account. The only correlations with any significant impact on uncertainty are those introduced by particle recycling. Failure to account for these correlations can result in a significant underestimate of the uncertainty. The previous method of accounting for correlations due to recycling, by placing all recycled particles in the same batch, did work. Neither the new method nor the batch method takes into account correlations between incident particles when a phase-space source is restarted, so restarts must be avoided.
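A minimal sketch of the history-by-history estimator (not the BEAMnrc implementation) is shown below: scores are first summed per primary history, and the uncertainty of the mean is computed from those per-history sums, so no batching dimension is needed. The input data here are hypothetical.

```python
import math

def history_by_history_uncertainty(per_history_scores):
    """Mean score per primary history and its standard error, estimated from the
    per-history sums x_i (the history-by-history method) rather than from batches."""
    n = len(per_history_scores)
    s = sum(per_history_scores)
    s2 = sum(x * x for x in per_history_scores)
    mean = s / n
    # s_mean^2 = ( <x^2> - <x>^2 ) / (N - 1)
    var_mean = (s2 / n - mean * mean) / (n - 1)
    return mean, math.sqrt(max(var_mean, 0.0))

# Hypothetical dose scores grouped by primary history (arbitrary units).
print(history_by_history_uncertainty([0.0, 1.2, 0.0, 0.7, 0.9, 0.0, 1.1, 0.3]))
```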
Statistical Methods Applied to Gamma-ray Spectroscopy Algorithms in Nuclear Security Missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagan, Deborah K.; Robinson, Sean M.; Runkle, Robert C.
2012-10-01
In a wide range of nuclear security missions, gamma-ray spectroscopy is a critical research and development priority. One particularly relevant challenge is the interdiction of special nuclear material, for which gamma-ray spectroscopy supports the goals of detecting and identifying gamma-ray sources. This manuscript examines the existing set of spectroscopy methods, attempts to categorize them by the statistical methods on which they rely, and identifies methods that have yet to be considered. Our examination shows that current methods effectively estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty, ones that are significantly more complex. We thus explore the premise that significantly improving algorithm performance requires greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian model averaging and hierarchical and empirical Bayes methods, have the potential to reduce decision uncertainty by more rigorously and comprehensively incorporating all sources of uncertainty. We expect that application of such methods will demonstrate progress in meeting the needs of nuclear security missions by improving on the existing numerical infrastructure for which these analyses have not been conducted.
Marshall, Leisa L; Nykamp, Diane L; Momary, Kathryn M
2014-12-15
To compare the impact of 2 different teaching and learning methods on student mastery of learning objectives in a pharmacotherapy module in the large classroom setting. Two teaching and learning methods were implemented and compared in a required pharmacotherapy module for 2 years. The first year, multiple interactive mini-cases with inclass individual assessment and an abbreviated lecture were used to teach osteoarthritis; a traditional lecture with 1 inclass case discussion was used to teach gout. In the second year, the same topics were used but the methods were flipped. Student performance on pre/post individual readiness assessment tests (iRATs), case questions, and subsequent examinations were compared each year by the teaching and learning method and then between years by topic for each method. Students also voluntarily completed a 20-item evaluation of the teaching and learning methods. Postpresentation iRATs were significantly higher than prepresentation iRATs for each topic each year with the interactive mini-cases; there was no significant difference in iRATs before and after traditional lecture. For osteoarthritis, postpresentation iRATs after interactive mini-cases in year 1 were significantly higher than postpresentation iRATs after traditional lecture in year 2; the difference in iRATs for gout per learning method was not significant. The difference between examination performance for osteoarthritis and gout was not significant when the teaching and learning methods were compared. On the student evaluations, 2 items were significant both years when answers were compared by teaching and learning method. Each year, students ranked their class participation higher with interactive cases than with traditional lecture, but both years they reported enjoying the traditional lecture format more. Multiple interactive mini-cases with an abbreviated lecture improved immediate mastery of learning objectives compared to a traditional lecture format, regardless of therapeutic topic, but did not improve student performance on subsequent examinations.
Qian, Cheng; Kovalchik, Kevin A; MacLennan, Matthew S; Huang, Xiaohua; Chen, David D Y
2017-06-01
Capillary electrophoresis frontal analysis (CE-FA) can be used to determine the binding affinity of molecular interactions. However, its current data processing method mandates specific requirements on the mobilities of the binding pair in order to obtain accurate binding constants. This work shows that significant errors result when the mobilities of the interacting species do not meet these requirements. Therefore, the applicability of CE-FA in many real-world applications becomes questionable. An electrophoretic mobility-based correction method is developed in this work based on the flux of each species. A simulation program and a pair of model compounds are used to verify the new equations and evaluate the effectiveness of this method. Ibuprofen and hydroxypropyl-β-cyclodextrin are used to demonstrate the differences in the binding constants obtained by CE-FA when different calculation methods are used, and the results are compared with those obtained by affinity capillary electrophoresis (ACE). The results suggest that CE-FA, with the mobility-based correction method, can be a generally applicable method for a much wider range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Feature Extraction with GMDH-Type Neural Networks for EEG-Based Person Identification.
Schetinin, Vitaly; Jakaite, Livija; Nyah, Ndifreke; Novakovic, Dusica; Krzanowski, Wojtek
2018-08-01
The brain activity observed on EEG electrodes is influenced by volume conduction and by the functional connectivity of a person performing a task. When the task is a biometric test, the EEG signals represent a unique "brain print", which is defined by the functional connectivity represented by interactions between electrodes, whilst the conduction components cause trivial correlations. Orthogonalization using autoregressive modeling minimizes the conduction components, and the residuals are then related to features correlated with the functional connectivity. However, the orthogonalization can be unreliable for high-dimensional EEG data. We have found that the dimensionality can be significantly reduced if the baselines required for estimating the residuals are modeled using relevant electrodes. In our approach, the required models are learnt by a Group Method of Data Handling (GMDH) algorithm which we have made capable of discovering reliable models from multidimensional EEG data. In our experiments on the EEG-MMI benchmark data, which include 109 participants, the proposed method correctly identified all the subjects and provided a statistically significant improvement in identification accuracy. The experiments have shown that the proposed GMDH method can learn new features from multi-electrode EEG data, which are capable of improving the accuracy of biometric identification.
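As a rough sketch of the GMDH idea referenced here (not the authors' specific algorithm), the Python example below builds one GMDH layer: a quadratic partial model is fitted for every pair of input features on a training split, models are ranked by validation error, and the best few outputs become candidate features for the next layer. All data and parameters are synthetic.

```python
import numpy as np
from itertools import combinations

def _design(x1, x2):
    """Ivakhnenko quadratic terms for one pair of inputs."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    """One GMDH layer: fit a quadratic partial model for every pair of inputs on the
    training split, rank the models by validation error, and return the outputs of
    the best `keep` models as candidate features for the next layer."""
    models = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        A = _design(X_train[:, i], X_train[:, j])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        pred_val = _design(X_val[:, i], X_val[:, j]) @ coef
        err = np.mean((pred_val - y_val) ** 2)
        models.append((err, i, j, coef))
    models.sort(key=lambda m: m[0])
    best = models[:keep]
    new_train = np.column_stack([_design(X_train[:, i], X_train[:, j]) @ c for _, i, j, c in best])
    new_val = np.column_stack([_design(X_val[:, i], X_val[:, j]) @ c for _, i, j, c in best])
    return best, new_train, new_val

# Toy example: 8 "electrode" features, target depends on a hidden interaction.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = 0.7 * X[:, 0] * X[:, 3] + 0.2 * rng.normal(size=200)
best, _, _ = gmdh_layer(X[:120], y[:120], X[120:], y[120:])
print([(i, j, round(err, 3)) for err, i, j, _ in best[:2]])
```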
The rank correlated SLW model of gas radiation in non-uniform media
NASA Astrophysics Data System (ADS)
Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.
2017-08-01
A comprehensive theoretical development of possible reference approaches to modelling radiation transfer in non-uniform gaseous media is presented within the framework of the Generalized SLW Model. The notion of absorption spectrum "correlation" adopted currently for global methods in gas radiation is critically revisited and replaced by the less restrictive concept of a rank correlated spectrum. Within this framework it is shown that eight different reference approaches are possible, of which only three have been reported in the literature. Among the approaches presented is a novel Rank Correlated SLW Model, which is distinguished by the fact that i) it does not require the specification of a reference gas thermodynamic state, and ii) it preserves the emission term in the spectrally integrated Radiative Transfer Equation. Construction of this reference model requires only two absorption line blackbody distribution functions, and subdivision into gray gases can be performed using standard quadratures. Consequently, this new reference approach appears to have significant advantages over all other methods and is, in general, a significant improvement in the global modelling of gas radiation. All reference approaches are summarized in the present work, and their use in radiative transfer prediction is demonstrated for simple example cases. Further, a detailed and rigorous theoretical development of the improved methods is provided.
Estimating forestland area change from inventory data
Paul Van Deusen; Francis Roesch; Thomas Wigley
2013-01-01
Simple methods for estimating the proportion of land changing from forest to nonforest are developed. Variance estimators are derived to facilitate significance tests. A power analysis indicates that 400 inventory plots are required to reliably detect small changes in net or gross forest loss. This is an important result because forest certification programs may...
Impacts of U.S. Export Control Policies on Science and Technology Activities and Competitiveness
2009-02-25
coffee table. However, under the current export control regime, the stand was considered ‘ITAR hardware’ and we were required to have two security...should survive without an effective method for pruning items from the control lists when they no longer serve a significant definable national
Collective Bargaining in Catholic Schools: What Does Governance Have to Do with It?
ERIC Educational Resources Information Center
James, John T.
2004-01-01
This article outlines the significant legal decisions regarding collective bargaining in Catholic schools, identifies the governance structures employed in Catholic schools and the methods of translating these governance structures into documents required by civil law, and concludes with the citation of two recent court decisions that demonstrate…
Surgical management of macroglossia secondary to amyloidosis.
Gadiwalla, Yusuf; Burnham, Richard; Warfield, Adrian; Praveen, Prav
2016-04-11
The authors report a case of amyloidosis-induced macroglossia treated with surgical reduction of the tongue using a keyhole to inverted-T method, with particular emphasis on the postoperative sequelae. Significant tongue swelling persisted for longer than anticipated, requiring the tracheostomy to remain in situ for 14 days. 2016 BMJ Publishing Group Ltd.
Putnam, Joel G.; Nelson, Justine; Leis, Eric M; Erickson, Richard A.; Hubert, Terrance D.; Amberg, Jon J.
2017-01-01
Conservation biology often requires the control of invasive species. One method is the development and use of biocides. Identifying new chemicals as part of the biocide registration approval process can require screening millions of compounds. Traditionally, screening new chemicals has been done in vivo using test organisms. Using in vitro (e.g., cell lines) and in silico (e.g., computer models) methods decreases the number of test organisms required and increases screening speed and efficiency. These methods, however, would be greatly improved by a better understanding of how individual fish species metabolize selected compounds. We combined cell assays and metabolomics to create a powerful tool to facilitate the identification of new control chemicals. Specifically, we exposed cell lines established from bighead carp and silver carp larvae to thiram (7 concentrations) and then completed metabolite profiling to assess the dose-response of the bighead carp and silver carp metabolome to thiram. Forty-one of the 700 metabolomic markers identified in bighead carp exhibited a dose-response to thiram exposure, compared with silver carp in which 205 of 1590 metabolomic markers exhibited a dose-response. Additionally, we identified 11 statistically significant metabolomic markers, based upon volcano plot analysis, common to both species. This smaller subset of metabolites formed a thiram-specific metabolomic fingerprint, which allowed for the creation of a toxicant-specific, rather than a species-specific, metabolomic fingerprint. Metabolomic fingerprints may be used in biocide development and improve our understanding of ecologically significant events, such as mass fish kills.
Method for Reducing the Refresh Rate of Fiber Bragg Grating Sensors
NASA Technical Reports Server (NTRS)
Parker, Allen R., Jr. (Inventor)
2014-01-01
The invention provides a method of obtaining the FBG data in final form (transforming the raw data into frequency and location data) by taking the raw FBG sensor data and dividing the data into a plurality of segments over time. By transforming the raw data into a plurality of smaller segments, processing time is significantly decreased. Also, by defining the segments over time, only one processing step is required. By employing this method, the refresh rate of FBG sensor systems can be improved from about 1 scan per second to over 20 scans per second.
Sample preparation of metal alloys by electric discharge machining
NASA Technical Reports Server (NTRS)
Chapman, G. B., II; Gordon, W. A.
1976-01-01
Electric discharge machining was investigated as a noncontaminating method of comminuting alloys for subsequent chemical analysis. Particulate dispersions in water were produced from bulk alloys at a rate of about 5 mg/min by using a commercially available machining instrument. The utility of this approach was demonstrated by results obtained when acidified dispersions were substituted for true acid solutions in an established spectrochemical method. The analysis results were not significantly different for the two sample forms. Particle size measurements and preliminary results from other spectrochemical methods which require direct aspiration of liquid into flame or plasma sources are reported.
NASA Astrophysics Data System (ADS)
Chen, Ming-Chih; Hsiao, Shen-Fu
In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method achieves significant area improvement compared with previous designs.
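The paper's CSE algorithm is not spelled out in the abstract; as a generic illustration of factoring shared terms out of bit-level XOR expressions, the Python sketch below greedily extracts the most frequently reused pair of terms into an intermediate signal. The equations are toy examples, not the actual AES sub-functions, and the greedy strategy is an assumption made for illustration.

```python
from collections import Counter

def greedy_xor_cse(outputs):
    """Greedy common-subexpression elimination for XOR networks.
    `outputs` maps an output name to the set of input terms XORed together.
    Repeatedly extract the most frequent pair of terms into a new intermediate signal."""
    outputs = {name: set(terms) for name, terms in outputs.items()}
    shared, counter = [], 0
    while True:
        pair_counts = Counter()
        for terms in outputs.values():
            ordered = sorted(terms)
            for a_idx in range(len(ordered)):
                for b_idx in range(a_idx + 1, len(ordered)):
                    pair_counts[(ordered[a_idx], ordered[b_idx])] += 1
        if not pair_counts or pair_counts.most_common(1)[0][1] < 2:
            break                      # no pair is reused; nothing left to share
        (a, b), _ = pair_counts.most_common(1)[0]
        new_sig = f"t{counter}"; counter += 1
        shared.append((new_sig, a, b))
        for terms in outputs.values():
            if a in terms and b in terms:
                terms -= {a, b}
                terms.add(new_sig)
    return shared, outputs

# Toy bit-level equations (hypothetical, not the AES sub-functions themselves).
eqs = {"y0": {"x0", "x1", "x2"}, "y1": {"x0", "x1", "x3"}, "y2": {"x1", "x2", "x3"}}
print(greedy_xor_cse(eqs))
```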
Input reconstruction of chaos sensors.
Yu, Dongchuan; Liu, Fang; Lai, Pik-Yin
2008-06-01
Although the sensitivity of sensors can be significantly enhanced using chaotic dynamics, owing to its extremely sensitive dependence on initial conditions and parameters, reconstructing the measured signal from the distorted sensor response becomes challenging. In this paper we suggest an effective method to reconstruct the measured signal from the distorted (chaotic) response of chaos sensors. This measurement signal reconstruction method applies neural network techniques for system structure identification and therefore does not require precise information about the sensor's dynamics. We also discuss how to improve the robustness of the reconstruction. Some examples are presented to illustrate the suggested measurement signal reconstruction method.
Predictors of no-scalpel vasectomy acceptance in Karimnagar district, Andhra Pradesh
Valsangkar, Sameer; Sai, Surendranath K.; Bele, Samir D.; Bodhare, Trupti N.
2012-01-01
Introduction: Karimnagar District has consistently achieved highest rates of no-scalpel vasectomy (NSV) in the past decade when compared to state and national rates. This study was conducted to elucidate the underlying causes for higher acceptance of NSV in the district. Materials and Methods: A community-based, case control study was conducted. Sampling techniques used were purposive and simple random sampling. A semi-structured questionnaire was used to evaluate the socio-demographic, family characteristics, contraceptive history and predictors of contraceptive choice in 116 NSV acceptors and 120 other contraceptive users (OCUs). Postoperative complications and experiences were ascertained in NSV acceptors. Results: Age (χ2=11.79, P value = 0.008), literacy (χ2=17.95, P value = 0.03), duration of marriage (χ2=14.23, P value = 0.008) and number of children (χ2=10.45, P value = 0.01) were significant for acceptance of NSV. Among the predictors, method suggested by peer/ health worker (OR = 1.5, P value = 0.01), method does not require regular intervention (OR = 1.3, P value = 0.004) and permanence of the method (OR = 1.2, P value = 0.031) were significant. Acceptors were most satisfied with the shorter duration required to return to work and the most common complication was persistent postoperative pain among 12 (10.34%) of the acceptors. Conclusion: Advocating and implementing family planning is of high significance in view of the population growth in India and drawing from the demographic profile, predictors, pool of trainers and experiences in Karimnagar District, a similar achievement of higher rates of this simple procedure with few complications can be replicated. PMID:23204657
2013-06-01
density of the s5 and s3 metastable states for different discharge parameters. The absorption data was fit to an approximated Voigt profile from which...pressures are required in order to have enough spin-orbit relaxation to maintain CW lasing without significant bottlenecking. There are many methods to...for just that [(5),(12)]. This method allows for a wide study of energy levels since the limiting factor is the sensitivity of the detector and modern
Design of spur gears for improved efficiency
NASA Technical Reports Server (NTRS)
Anderson, N. E.; Loewenthal, S. H.
1981-01-01
A method to calculate spur gear system power loss for a wide range of gear geometries and operating conditions is used to determine design requirements for an efficient gearset. The effects of spur gear size, pitch, ratio, pitch-line-velocity and load on efficiency are shown. A design example is given to illustrate how the method is to be applied. In general, peak efficiencies were found to be greater for larger diameter and fine pitched gears and tare (no-load) losses were found to be significant.
Nagahama, Yuki; Shimobaba, Tomoyoshi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi
2017-05-01
A holographic projector utilizes holography techniques. However, there are several barriers to realizing holographic projections. One is deterioration of hologram image quality caused by speckle noise and ringing artifacts. The combination of the random phase-free method and the Gerchberg-Saxton (GS) algorithm has improved the image quality of holograms. However, the GS algorithm requires significant computation time. We propose faster methods for image quality improvement of random phase-free holograms using the characteristics of ringing artifacts.
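For context, a standard Gerchberg-Saxton loop for a phase-only hologram alternates between the hologram and image planes, keeping only the phase in the hologram plane and re-imposing the target amplitude in the image plane. The numpy sketch below shows this baseline; it does not include the authors' random phase-free modification or their proposed acceleration, and the target pattern is a toy example.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """Standard Gerchberg-Saxton loop for a phase-only hologram: alternate between
    hologram and image planes, keeping only the phase in the hologram plane and
    forcing the desired amplitude in the image plane."""
    rng = np.random.default_rng(seed)
    field = target_amplitude * np.exp(1j * rng.uniform(0, 2 * np.pi, target_amplitude.shape))
    for _ in range(iterations):
        holo = np.fft.ifft2(field)
        holo = np.exp(1j * np.angle(holo))                        # phase-only constraint
        field = np.fft.fft2(holo)
        field = target_amplitude * np.exp(1j * np.angle(field))   # amplitude constraint
    return np.angle(holo)

# Toy target: a bright square on a dark background.
target = np.zeros((128, 128)); target[48:80, 48:80] = 1.0
phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
print(phase.shape, float(recon.max()))
```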
Data compression strategies for ptychographic diffraction imaging
NASA Astrophysics Data System (ADS)
Loetgering, Lars; Rose, Max; Treffer, David; Vartanyants, Ivan A.; Rosenhahn, Axel; Wilhein, Thomas
2017-12-01
Ptychography is a computational imaging method for solving inverse scattering problems. To date, the high amount of redundancy present in ptychographic data sets requires computer memory that is orders of magnitude larger than the retrieved information. Here, we propose and compare data compression strategies that significantly reduce the amount of data required for wavefield inversion. Information metrics are used to measure the amount of data redundancy present in ptychographic data. Experimental results demonstrate the technique to be memory efficient and stable in the presence of systematic errors such as partial coherence and noise.
Technology for return of planetary samples
NASA Technical Reports Server (NTRS)
1975-01-01
Technological requirements of a planetary sample return mission were studied. The state of the art for problems unique to this class of missions was assessed and technological gaps were identified. The problem areas where significant advancement of the state of the art is required are: life support for the exobiota during the return trip and within the Planetary Receiving Laboratory (PRL); biohazard assessment and control technology; and quarantine-qualified handling and experimentation methods and equipment for studying the returned sample in the PRL. Concepts for solving these problems are discussed.
Quentin, Michael; Blondin, Dirk; Arsov, Christian; Schimmöller, Lars; Hiester, Andreas; Godehardt, Erhard; Albers, Peter; Antoch, Gerald; Rabenalt, Robert
2014-11-01
Magnetic resonance imaging guided biopsy is increasingly performed to diagnose prostate cancer. However, there is a lack of well controlled, prospective trials to support this treatment method. We prospectively compared magnetic resonance imaging guided in-bore biopsy with standard systematic transrectal ultrasound guided biopsy in biopsy naïve men with increased prostate specific antigen. We performed a prospective study in 132 biopsy naïve men with increased prostate specific antigen (greater than 4 ng/ml). After 3 Tesla functional multiparametric magnetic resonance imaging patients were referred for magnetic resonance imaging guided in-bore biopsy of prostate lesions (maximum 3) followed by standard systematic transrectal ultrasound guided biopsy (12 cores). We analyzed the detection rates of prostate cancer and significant prostate cancer (greater than 5 mm total cancer length or any Gleason pattern greater than 3). A total of 128 patients with a mean ± SD age of 66.1 ± 8.1 years met all study requirements. Median prostate specific antigen was 6.7 ng/ml (IQR 5.1-9.0). Transrectal ultrasound and magnetic resonance imaging guided biopsies provided the same 53.1% detection rate, including 79.4% and 85.3%, respectively, for significant prostate cancer. Magnetic resonance imaging and transrectal ultrasound guided biopsies missed 7.8% and 9.4% of clinically significant prostate cancers, respectively. Magnetic resonance imaging biopsy required significantly fewer cores and revealed a higher percent of cancer involvement per biopsy core (each p <0.01). Combining the 2 methods provided a 60.9% detection rate with an 82.1% rate for significant prostate cancer. Magnetic resonance imaging guided in-bore and systematic transrectal ultrasound guided biopsies achieved equally high detection rates in biopsy naïve patients with increased prostate specific antigen. Magnetic resonance imaging guided in-bore biopsies required significantly fewer cores and revealed a significantly higher percent of cancer involvement per biopsy core. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Coping with a MEDLIB-L service outage
Brown, Christine D.; MacCall, Steven
2001-01-01
Objective: The study assessed the coping strategies of MEDLIB-L subscribers during an unexpected disruption in the list's service. Methods: An online survey of MEDLIB-L subscribers was performed following a six-day service outage in August 1999. Results: Respondents' information needs resulted in two distinct coping strategies. Subscribers without a recognized information need or an information need determined to be not pressing coped by waiting out the interruption. Subscribers with pressing information needs turned to alternative methods of resolving these needs. Conclusions: While most respondents missed the list and the assistance that it provided, many did not feel that the outage required significant coping strategies. The outage was viewed as a “minor stressor” and did not require secondary-level assessment of the availability and suitability of alternative resources. PMID:11837260
Strategies for efficient resolution analysis in full-waveform inversion
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Leeuwen, T.; Trampert, J.
2016-12-01
Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
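As a rough, hedged illustration of the random probing idea (not the authors' implementation), the sketch below applies a Hessian-vector product to random test models to estimate the Hessian diagonal, one of the simplest resolution proxies of this family; `apply_hessian` is a placeholder for the adjoint-based product computed by the wave-propagation code.

```python
import numpy as np

def random_probe_diagonal(apply_hessian, n_model, n_probes=8, seed=0):
    """Estimate the diagonal of the Hessian (a crude resolution proxy) from
    Hessian-vector products with random +/-1 test models. For Rademacher
    probes r, E[r * (H r)] equals diag(H)."""
    rng = np.random.default_rng(seed)
    diag_est = np.zeros(n_model)
    for _ in range(n_probes):
        r = rng.choice([-1.0, 1.0], size=n_model)   # uncorrelated random test model
        diag_est += r * apply_hessian(r)            # one Hessian-model application
    return diag_est / n_probes
```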
Reducing Sensor Noise in MEG and EEG Recordings Using Oversampled Temporal Projection.
Larson, Eric; Taulu, Samu
2018-05-01
Here, we review the theory of suppression of spatially uncorrelated, sensor-specific noise in electro- and magnetoencephalography (EEG and MEG) arrays, and introduce a novel method for its suppression. Our method requires only that the signals of interest are spatially oversampled, which is a reasonable assumption for many EEG and MEG systems. Our method is based on a leave-one-out procedure using overlapping temporal windows in a mathematical framework to project spatially uncorrelated noise in the temporal domain. This method, termed "oversampled temporal projection" (OTP), has four advantages over existing methods. First, sparse channel-specific artifacts are suppressed while limiting mixing with other channels, whereas existing linear, time-invariant spatial operators can spread such artifacts to other channels with a spatial distribution which can be mistaken for one produced by an electrophysiological source. Second, OTP minimizes distortion of the spatial configuration of the data. During source localization (e.g., dipole fitting), many spatial methods require corresponding modification of the forward model to avoid bias, while OTP does not. Third, noise suppression factors at the sensor level are maintained during source localization, whereas bias compensation removes the denoising benefit for spatial methods that require such compensation. Fourth, OTP uses a time-window duration parameter to control the tradeoff between noise suppression and adaptation to time-varying sensor characteristics. OTP efficiently optimizes noise suppression performance while controlling for spatial bias of the signal of interest. This is important in applications where sensor noise significantly limits the signal-to-noise ratio, such as high-frequency brain oscillations.
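A bare-bones sketch of the leave-one-out projection at the heart of OTP is given below; the window overlap handling, demeaning and rank control of the published method are omitted, and the variable names are ours.

```python
import numpy as np

def otp_window(data):
    """Leave-one-out temporal projection for one window.
    data: array of shape (n_channels, n_times). Each channel is replaced by
    its least-squares projection onto the temporal span of the remaining
    channels, which suppresses spatially uncorrelated sensor noise because
    such noise is not reproducible from the other channels."""
    n_ch, _ = data.shape
    cleaned = np.empty_like(data)
    for i in range(n_ch):
        others = np.delete(data, i, axis=0)                     # (n_ch-1, n_times)
        coef, *_ = np.linalg.lstsq(others.T, data[i], rcond=None)
        cleaned[i] = others.T @ coef                            # projected channel i
    return cleaned
```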
Tiryaki, Osman
2016-10-02
This study was undertaken to validate the "quick, easy, cheap, effective, rugged and safe" (QuEChERS) method using Golden Delicious and Starking Delicious apple matrices spiked at 0.1 maximum residue limit (MRL), 1.0 MRL and 10 MRL levels of four pesticides (chlorpyrifos, dimethoate, indoxacarb and imidacloprid). For the extraction and cleanup, the original QuEChERS method was followed, and the samples were then subjected to liquid chromatography-triple quadrupole mass spectrometry (LC-MS/MS) for chromatographic analyses. According to the t-test, the matrix effect was not significant for chlorpyrifos in either sample matrix, but it was significant for dimethoate, indoxacarb and imidacloprid in both sample matrices. Thus, matrix-matched calibration (MC) was used to compensate for the matrix effect, and quantifications were carried out using MC. The overall recovery of the method was 90.15% with a relative standard deviation of 13.27% (n = 330). The estimated method detection limits of the analytes were below the MRLs. Other parameters of the method validation, such as recovery, precision, accuracy and linearity, were found to be within the required ranges.
Comparison of six methods for isolating mycobacteria from swine lymph nodes.
Thoen, C O; Richards, W D; Jarnagin, J L
1974-03-01
Six laboratory methods were compared for isolating acid-fast bacteria. Tuberculous lymph nodes from each of 48 swine, as identified by federal meat inspectors, were processed by each of the methods. Treated tissue suspensions were inoculated onto each of eight media, which were observed at 7-day intervals for 9 weeks. There were no statistically significant differences between the numbers of Mycobacterium avium complex bacteria isolated by each of the six methods. Rapid tissue preparation methods involving treatment with 2% sodium hydroxide or treatment with 0.2% zephiran required only one-third to one-fourth of the processing time of a standard method. There were small differences in the amount of contamination among the six methods, but no detectable differences in the time of first appearance of M. avium complex colonies.
Shape control of structures with semi-definite stiffness matrices for adaptive wings
NASA Astrophysics Data System (ADS)
Austin, Fred; Van Nostrand, William C.; Rossi, Michael J.
1993-09-01
Maintaining an optimum wing cross section during transonic cruise can dramatically reduce the shock-induced drag and can result in significant fuel savings and increased range. Our adaptive-wing concept employs actuators as truss elements of active ribs to reshape the wing cross section by deforming the structure. In our previous work, to derive the shape-control system gain matrix, we developed a procedure that requires the inverse of the stiffness matrix of the structure without the actuators. However, this method cannot be applied to designs where the actuators are required structural elements, since the stiffness matrices are singular when the actuators are removed. Consequently, a new method was developed in which the order of the problem is reduced and only the inverse of a small nonsingular partition of the stiffness matrix is required to obtain the desired gain matrix. The procedure was experimentally validated by achieving desired shapes on a physical model of an aircraft-wing rib. The theory and test results are presented.
Strategies and Considerations for Distributing and Recovering Mouse Lines
Du, Yubin; Xie, Wen; Liu, Chengyu
2012-01-01
As more and more genetically modified mouse lines are being generated, it becomes increasingly common to share animal models among different research institutions. Live mice are routinely transferred between animal facilities. Due to various issues concerning animal welfare, intellectual property rights, colony health status and biohazard, significant paperwork and coordination are required before any animal travel can take place. Shipping fresh or frozen preimplantation embryos, gametes, or reproductive organs can bypass some of the issues associated with live animal transfer, but it requires the receiving facilities to be able to perform delicate and sometimes intricate procedures such as embryo transfer, in vitro fertilization (IVF), or ovary transplantation. Here, we summarize the general requirements for live animal transport and review some of the assisted reproductive technologies (ART) that can be applied to shipping and reviving mouse lines. Intended users of these methods should consult their institution’s responsible official to find out whether each specific method is legal or appropriate in their own animal facilities. PMID:20691859
Tensor-GMRES method for large sparse systems of nonlinear equations
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1994-01-01
This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
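The tensor model construction itself is beyond a short example, but the Newton-GMRES backbone that the method extends can be sketched as follows, using a finite-difference Jacobian-vector product so that the Jacobian is never formed explicitly; the tolerances and the small test system are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(F, x0, tol=1e-8, max_newton=20, eps=1e-7):
    """Jacobian-free Newton-GMRES: each Newton step solves J(x) s = -F(x)
    with GMRES, where the product J(x) v is approximated by finite differences,
    so the Jacobian matrix is never stored or factored."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_newton):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        jv = lambda v: (F(x + eps * v) - fx) / eps            # J(x) v approximation
        J = LinearOperator((x.size, x.size), matvec=jv)
        s, _ = gmres(J, -fx, atol=1e-10)                       # Krylov solve of the Newton step
        x = x + s
    return x

# usage with an illustrative nonlinear system (solution near x = (1, 2))
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
print(newton_gmres(F, np.array([1.0, 1.0])))
```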
A quantitative method for photovoltaic encapsulation system optimization
NASA Technical Reports Server (NTRS)
Garcia, A., III; Minning, C. P.; Cuddihy, E. F.
1981-01-01
It is pointed out that the design of encapsulation systems for flat plate photovoltaic modules requires the fulfillment of conflicting design requirements. An investigation was conducted with the objective of finding an approach that makes it possible to determine a system with optimum characteristics. The results of the thermal, optical, structural, and electrical isolation analyses performed in the investigation indicate the major factors in the design of terrestrial photovoltaic modules. For defect-free materials, minimum encapsulation thicknesses are determined primarily by structural considerations. Cell temperature is not strongly affected by encapsulant thickness or thermal conductivity. The emissivity of module surfaces exerts a significant influence on cell temperature. Encapsulants should be elastomeric, and ribs are required on substrate modules. Aluminum is unsuitable as a substrate material. An antireflection coating is required on cell surfaces.
Task analysis method for procedural training curriculum development.
Riggle, Jakeb D; Wadman, Michael C; McCrory, Bernadette; Lowndes, Bethany R; Heald, Elizabeth A; Carstens, Patricia K; Hallbeck, M Susan
2014-06-01
A central venous catheter (CVC) is an important medical tool used in critical care and emergent situations. Integral to proper care in many circumstances, insertion of a CVC introduces the risk of central line-associated blood stream infections and mechanical adverse events; proper training is important for safe CVC insertion. Cognitive task analysis (CTA) methods have been successfully implemented in the medical field to improve the training of postgraduate medical trainees, but can be very time-consuming to complete and require a significant time commitment from many subject matter experts (SMEs). Many medical procedures such as CVC insertion are linear processes with well-documented procedural steps. These linear procedures may not require a traditional CTA to gather the information necessary to create a training curriculum. Accordingly, a novel, streamlined CTA method designed primarily to collect cognitive cues for linear procedures was developed to be used by medical professionals with minimal CTA training. This new CTA methodology required fewer trained personnel, fewer interview sessions, and less time commitment from SMEs than a traditional CTA. Based on this study, a streamlined CTA methodology can be used to efficiently gather cognitive information on linear medical procedures for the creation of resident training curricula and procedural skills assessments.
Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.
Eichstädt, S; Wilkens, V
2017-06-01
An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
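For context only (this is not the authors' prior-bound construction), a frequency-domain deconvolution with a simple Tikhonov-type regularization, whose regularization weight plays the role of the term analysed in the paper, could look like the following sketch; the measured signal, impulse response and weight are placeholders.

```python
import numpy as np

def regularized_deconvolution(y, h, lam=1e-2):
    """Estimate the measurand x from y = h * x (convolution) by a
    Tikhonov-regularized division in the frequency domain:
    X = conj(H) Y / (|H|^2 + lam). The weight lam trades noise amplification
    against bias; its choice is exactly the kind of regularization decision
    whose uncertainty contribution the paper addresses."""
    Y = np.fft.rfft(y)
    H = np.fft.rfft(h, n=len(y))
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.fft.irfft(X, n=len(y))
```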
Diffraction based overlay metrology for α-carbon applications
NASA Astrophysics Data System (ADS)
Saravanan, Chandra Saru; Tan, Asher; Dasari, Prasad; Goelzer, Gary; Smith, Nigel; Woo, Seouk-Hoon; Shin, Jang Ho; Kang, Hyun Jae; Kim, Ho Chul
2008-03-01
Applications that require overlay measurement between layers separated by absorbing interlayer films (such as α-carbon) pose significant challenges for sub-50nm processes. In this paper, scatterometry methods are investigated as an alternative to meet these stringent overlay metrology requirements. A spectroscopic Diffraction Based Overlay (DBO) measurement technique is used in which registration errors are extracted from specially designed diffraction targets. DBO measurements are performed on a detailed set of wafers with varying α-carbon (ACL) thicknesses. The correlation in overlay values between wafers with varying ACL thicknesses is discussed. The total measurement uncertainty (TMU) requirements for these layers are discussed and the DBO TMU results from sub-50nm samples are reviewed.
Jolfaie, Nahid Ramezani; Rouhani, Mohammad Hossein; Mirlohi, Maryam; Babashahi, Mina; Abbasi, Saeid; Adibi, Peiman; Esmaillzadeh, Ahmad; Azadbakht, Leila
2017-01-01
Background: Nutritional support plays a major role in the management of critically ill patients. This study aimed to compare the nutritional quality of enteral nutrition solutions (noncommercial vs. commercial) and the amounts of energy and nutrients delivered and required in patients receiving these solutions. Materials and Methods: This cross-sectional study was conducted among 270 enterally fed patients. Demographic and clinical data, in addition to values of nutritional needs and intakes, were collected. Moreover, the enteral nutrition solutions were analyzed in a food laboratory. Results: There were 150 patients who were fed noncommercial enteral nutrition solutions (NCENSs) and 120 patients who were fed commercial enteral nutrition solutions (CENSs). Although the energy and nutrient contents of CENSs were higher than those of NCENSs, these differences regarding energy, protein, carbohydrates, phosphorus, and calcium were not statistically significant. The values of energy and macronutrients delivered in patients fed CENSs were higher (P < 0.001). The energy, carbohydrate, and fat required by patients receiving CENSs were provided, but protein intake was less than the required amount. In patients fed NCENSs, only the values of fat requirement and intake were not significantly different; the other nutrients delivered were less than the required amounts (P < 0.001). CENSs provided the nutritional needs of a higher number of patients (P < 0.001). In patients receiving CENSs, the nutrient adequacy ratio and also the mean adequacy ratio were significantly higher than in the other group (P < 0.001). Conclusion: CENSs contain more energy and nutrients compared with NCENSs. They are more effective in meeting the nutritional requirements of enterally fed patients. PMID:29142894
A comparison of the weights-of-evidence method and probabilistic neural networks
Singer, Donald A.; Kouda, Ryoichi
1999-01-01
The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may be more the result of its ease of use and interpretation rather than comparisons with alternative methods. A comparison of the weights-of-evidence method to probabilistic neural networks is performed here with data from Chisel Lake-Anderson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes where the estimated probabilities are used to classify the unknowns. Using these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data, not used in any training, were used to test the efficiency of classification (0.7 versus 17%). Despite these data containing too few deposits, these tests of this set of data demonstrate the neural network's ability at making unbiased probability estimates and lower error rates when measured by number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method demonstrate a strong bias where most errors are barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake–Anderson Lake data. However, the expected number of deposits test clearly demonstrates that these data violate the independence assumption. Other, independent simulations with three variables show that using variables with correlations of 1.0 can double the expected number of deposits, as can correlations of −1.0. Studies done in the 1970s on methods that use Bayes rule show that moderate correlations among attributes seriously affect estimates and even small correlations lead to increases in misclassifications. Adverse effects have been observed with small to moderate correlations when only six to eight variables were used. Consistent evidence of upward biased probability estimates from multivariate methods founded on Bayes rule must be of considerable concern to institutions and governmental agencies where unbiased estimates are required. In addition to increasing the misclassification rate, biased probability estimates make classification into deposit and nondeposit classes an arbitrary subjective decision. The probabilistic neural network has no problem dealing with correlated variables, but its performance depends strongly on having a thoroughly representative training set. Probabilistic neural networks or logistic regression should receive serious consideration where unbiased estimates are required. The weights-of-evidence method would serve to estimate thresholds between anomalies and background and for exploratory data analysis.
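For readers unfamiliar with the weights-of-evidence formulation being compared, a minimal sketch of the weight calculation for one binary evidence layer is given below; the notation and the small smoothing constant are ours, not taken from the paper.

```python
import numpy as np

def weights_of_evidence(evidence, deposit, eps=0.5):
    """Positive and negative weights for a binary evidence layer against
    deposit/non-deposit cells: W+ = ln P(B|D)/P(B|~D) and
    W- = ln P(~B|D)/P(~B|~D). eps is a small count added to avoid log(0).
    The weights give unbiased posteriors only if the evidence layers are
    conditionally independent, which is the assumption the paper tests."""
    evidence = np.asarray(evidence, bool)
    deposit = np.asarray(deposit, bool)

    def cond(b, d):
        # smoothed estimate of P(b | d)
        return (np.sum(b & d) + eps) / (np.sum(d) + 2 * eps)

    w_plus = np.log(cond(evidence, deposit) / cond(evidence, ~deposit))
    w_minus = np.log(cond(~evidence, deposit) / cond(~evidence, ~deposit))
    return w_plus, w_minus
```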
Volumetric calibration of a plenoptic camera.
Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S
2018-02-01
The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creating a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creating the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
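Details of the mapping are in the paper; the sketch below only illustrates the general idea of fitting a 3D polynomial mapping by least squares from calibration points such as dot-card images at known depths, with the polynomial degree and variable names as assumptions.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_terms(xyz, degree=3):
    """All monomials of (x, y, z) up to the given total degree, as columns."""
    x, y, z = np.asarray(xyz, float).T
    cols = [np.ones(len(xyz))]
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement((x, y, z), d):
            cols.append(np.prod(combo, axis=0))
    return np.column_stack(cols)

def fit_volume_mapping(world_pts, distorted_pts, degree=3):
    """Least-squares polynomial mapping from distorted (thin-lens) object
    coordinates to true world coordinates, as could be measured with a
    dot card at several depths. Returns the coefficient matrix."""
    A = poly_terms(distorted_pts, degree)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(world_pts, float), rcond=None)
    return coeffs  # apply with: poly_terms(new_pts, degree) @ coeffs
```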
Least-squares finite element solution of 3D incompressible Navier-Stokes problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Lin, Tsung-Liang; Povinelli, Louis A.
1992-01-01
Although significant progress has been made in the finite element solution of incompressible viscous flow problems, the development of more efficient methods is still needed before large-scale computation of 3D problems becomes feasible. This paper presents such a development. The most popular finite element method for the solution of incompressible Navier-Stokes equations is the classic Galerkin mixed method based on the velocity-pressure formulation. The mixed method requires the use of different elements to interpolate the velocity and the pressure in order to satisfy the Ladyzhenskaya-Babuska-Brezzi (LBB) condition for the existence of the solution. On the other hand, due to the lack of symmetry and positive definiteness of the linear equations arising from the mixed method, iterative methods for the solution of linear systems have been hard to come by. Therefore, direct Gaussian elimination has been considered the only viable method for solving the systems. But, for three-dimensional problems, the computer resources required by a direct method become prohibitively large. In order to overcome these difficulties, a least-squares finite element method (LSFEM) has been developed. This method is based on the first-order velocity-pressure-vorticity formulation. In this paper the LSFEM is extended for the solution of three-dimensional incompressible Navier-Stokes equations written in the following first-order quasi-linear velocity-pressure-vorticity formulation.
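For reference, the steady incompressible first-order velocity-pressure-vorticity system mentioned above is commonly written as follows (sign conventions and forcing as usually stated, not quoted from the paper), with the least-squares functional formed from the L2 norms of these residuals:

```latex
\begin{aligned}
\boldsymbol{\omega} - \nabla \times \mathbf{u} &= \mathbf{0},\\
(\mathbf{u}\cdot\nabla)\,\mathbf{u} + \nabla p + \nu\,\nabla \times \boldsymbol{\omega} &= \mathbf{f},\\
\nabla\cdot\mathbf{u} &= 0, \qquad \nabla\cdot\boldsymbol{\omega} = 0.
\end{aligned}
```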
Zhu, Mei; Yang, Zhenyu; Ren, Yiping; Duan, Yifan; Gao, Huiyu; Liu, Biao; Ye, Wenhui; Wang, Jie; Yin, Shian
2017-01-01
Macronutrient contents in human milk are the common basis for estimating these nutrient requirements for both infants and lactating women. A mid-infrared human milk analyser (HMA, Miris, Sweden) was recently developed for determining macronutrient levels. The purpose of the study is to compare the accuracy and precision of the HMA method, applied to fresh milk samples in the field studies, with chemical methods applied to frozen samples in the lab. Full breast milk was collected using electric pumps and fresh milk was analyzed in the field studies using the HMA. All human milk samples were then thawed and analyzed with chemical reference methods in the lab. The protein, fat and total solid levels were significantly correlated between the two methods, with correlation coefficients of 0.88, 0.93 and 0.78, respectively (p < 0.001). The mean protein content was significantly lower and the mean fat level was significantly greater when measured using the HMA method (1.0 vs 1.2 g/100 mL and 3.7 vs 3.2 g/100 mL, respectively, p < 0.001). Thus, linear recalibration could be used to improve mean estimation for both protein and fat. There was no significant correlation for lactose between the two methods (p > 0.05). There was no statistically significant difference in the mean total solid concentration (12.2 vs 12.3 g/100 mL, p > 0.05). Overall, the HMA might be used to analyze macronutrients in fresh human milk with acceptable accuracy and precision after recalibrating fat and protein levels of field samples. © 2016 John Wiley & Sons Ltd.
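The linear recalibration mentioned above amounts to a simple least-squares correction; a minimal sketch of how such a correction could be derived from paired HMA and reference measurements is shown below, with illustrative array names.

```python
import numpy as np

def linear_recalibration(hma_values, reference_values):
    """Fit reference = a * hma + b by least squares, so that routine HMA
    readings (e.g., fat or protein in g/100 mL) can be corrected toward
    the chemical reference method."""
    a, b = np.polyfit(np.asarray(hma_values, float),
                      np.asarray(reference_values, float), deg=1)
    return a, b  # corrected_value = a * new_hma_reading + b
```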
Verlinden, Nathan; Kruger, Nicholas; Carroll, Ailey; Trumbo, Tiffany
2015-01-01
Objective. To determine if the process-oriented guided inquiry learning (POGIL) teaching strategy improves student performance and engages higher-level thinking skills of first-year pharmacy students in an Introduction to Pharmaceutical Sciences course. Design. Overall examination scores and scores on questions categorized as requiring either higher-level or lower-level thinking skills were compared in the same course taught over 3 years using traditional lecture methods vs the POGIL strategy. Student perceptions of the latter teaching strategy were also evaluated. Assessment. Overall mean examination scores increased significantly when POGIL was implemented. Performance on questions requiring higher-level thinking skills was significantly higher, whereas performance on questions requiring lower-level thinking skills was unchanged when the POGIL strategy was used. Student feedback on use of this teaching strategy was positive. Conclusion. The use of the POGIL strategy increased student overall performance on examinations, improved higher-level thinking skills, and provided an interactive class setting. PMID:25741027
Heated air humidification versus cold air nebulization in newly tracheostomized patients
Händel, Alexander; Wenzel, Angela; Kramer, Benedikt; Aderhold, Christoph; Hörmann, Karl; Stuck, Boris A.; Sommer, J. Ulrich
2017-01-01
Background: After tracheostomy, the airway lacks an essential mechanism for warming and humidifying the inspired air, with consequent functional impairment and discomfort. The purpose of this study was to compare airway hydration with cold-air nebulization versus heated high-flow humidification with respect to medical interventions and tracheal ciliary beat frequency (CBF). Methods: Newly tracheostomized patients (n = 20) were treated either with cold-air nebulization or heated humidification. The number of tracheal suctioning procedures required to clean the trachea and the tracheal CBF were assessed. Results: The number of required suctions per day was significantly lower in the heated humidification group, with medians of 3 versus 5 times per day. Mean CBF was significantly higher in the heated humidification group (6.36 ± 1.49 Hz) compared to the cold-air nebulization group (3.99 ± 1.39 Hz). Conclusion: The data suggest that heated humidification enhanced mucociliary transport, leading to a reduced number of required suctioning procedures in the trachea, which may improve postoperative patient care. PMID:28990261
Mirsadraee, Majid; Shafahie, Ahmad; Reza Khakzad, Mohammad; Sankian, Mojtaba
2014-04-01
Anthracofibrosis is black discoloration of the bronchial mucosa with deformity and obstruction. An association of this disease with tuberculosis (TB) has been established. The objective of this study was to determine the additional benefit of assessing TB by the polymerase chain reaction (PCR) method. Bronchoscopy was performed on 103 subjects (54 anthracofibrosis and 49 control subjects) who required bronchoscopy for their pulmonary problems. According to the bronchoscopic findings, participants were classified into anthracofibrosis and nonanthracotic groups. They were examined for TB with traditional methods, such as direct smear (Ziehl-Neelsen staining), Löwenstein-Jensen culture, and histopathology, and with the new method, PCR for the Mycobacterium tuberculosis genome (IS6110). Age, sex, smoking, and clinical findings were not significantly different between the TB and non-TB groups. Acid-fast bacilli could be detected by direct smear in 12 (25%) of the anthracofibrosis subjects, and adding the results of culture and histopathology, the traditional tests together indicated TB in 27 (31%) of the cases. Mycobacterium tuberculosis was diagnosed by PCR in 18 (33%) patients, but the difference was not significant. Detection of acid-fast bacilli in control nonanthracosis subjects was significantly lower (3, 6%), but PCR (20, 40%) and the accumulated results from all traditional methods (22, 44%) showed a nonsignificant difference.
Yadav, Ghanshyam; Jain, Gaurav; Samprathi, Abhishek; Baghel, Annavi; Singh, Dinesh Kumar
2016-01-01
Background and Aims: Poorly managed acute postoperative pain may result in prolonged morbidity. Various pharmacotherapies have targeted this, but the search for an ideal preemptive analgesic continues, taking into account drug-related side effects. Considering the better tolerability profile of tapentadol, we assessed its role as a preemptive analgesic in reducing postoperative analgesic requirements after laparoscopic cholecystectomy. Material and Methods: In a prospective, double-blinded fashion, sixty patients posted for the above surgery were randomized to receive tablet tapentadol 75 mg (Group A) or starch tablets (Group B) orally, an hour before induction of general anesthesia. Perioperative analgesic requirement, time to first analgesia, and pain and sedation scores were compared for the first 24 h of the postoperative period and analyzed by a one-way analysis of variance test. A P < 0.05 was considered significant. Results: Sixty patients were analyzed. The perioperative analgesic requirement was significantly lower in Group A. The verbal numerical score was significantly lower in Group A at the time point immediately after shifting the patient to the postanesthesia care unit. Ramsay sedation scores were similar between the groups. No major side effects were observed except for nausea and vomiting in 26 cases (10 in Group A, 16 in Group B). Conclusion: A single preemptive oral dose of tapentadol (75 mg) is effective in reducing perioperative analgesic requirements and acute postoperative pain, without added side effects. It could be an appropriate preemptive analgesic, subject to future trials concentrating upon its dose-response effects. PMID:28096581
Capitano, Cinzia; Peri, Giorgia; Rizzo, Gianfranco; Ferrante, Patrizia
2017-03-01
Marble is a natural dimension stone that is widely used in building due to its resistance and esthetic qualities. Unfortunately, some concerns have arisen regarding its production process because quarrying and processing activities demand significant amounts of energy and greatly affect the environment. Further, performing an environmental analysis of a production process such as that of marble requires the consideration of many environmental aspects (e.g., noise, vibrations, dust and waste production, energy consumption). Unfortunately, the current impact accounting tools do not seem to be capable of considering all of the major aspects of the (marble) production process that may affect the environment and thus cannot provide a comprehensive and concise assessment of all environmental aspects associated with the marble production process. Therefore, innovative, easy, and reliable methods for evaluating its environmental impact are necessary, and they must be accessible for the non-technician. The present study intends to provide a contribution in this sense by proposing a reliable and easy-to-use evaluation method to assess the significance of the environmental impacts associated with the marble production process. In addition, an application of the method to an actual marble-producing company is presented to demonstrate its practicability. Because of its relative ease of use, the method presented here can also be used as a "self-assessment" tool for pursuing a virtuous environmental policy because it enables company owners to easily identify the segments of their production chain that most require environmental enhancement.
Tsai, Po-Yen; Lee, I-Chin; Hsu, Hsin-Yun; Huang, Hong-Yuan; Fan, Shih-Kang; Liu, Cheng-Hsien
2016-01-01
Here, we describe a technique to manipulate a low number of beads to achieve high washing efficiency with zero bead loss in the washing process of a digital microfluidic (DMF) immunoassay. Previously, two magnetic bead extraction methods were reported for the DMF platform: (1) the single-side electrowetting method and (2) the double-side electrowetting method. The first approach could provide high washing efficiency, but it required a large number of beads. The second approach could reduce the required number of beads, but it was inefficient where multiple washes were required. More importantly, bead loss during the washing process was unavoidable in both methods. Here, an improved double-side electrowetting method is proposed for bead extraction by utilizing a series of unequal electrodes. It is shown that, with a proper electrode size ratio, only one wash step is required to achieve a 98% washing rate without any bead loss at bead numbers of less than 100 in a droplet. This allows using only about 25 magnetic beads in the DMF immunoassay to effectively increase the number of captured analytes on each bead. In our human soluble tumor necrosis factor receptor I (sTNF-RI) model immunoassay, the experimental results show that, compared to our previous results without the proposed bead extraction technique, the immunoassay with a low bead number significantly enhances the fluorescence signal to provide a better limit of detection (3.14 pg/ml) with smaller reagent volumes (200 nl) and shorter analysis time (<1 h). This improved bead extraction technique not only can be used in the DMF immunoassay but also has great potential to be used in any other bead-based DMF systems for different applications. PMID:26858807
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Hegazy, Maha Abdel Monem
2013-09-01
Four simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for the simultaneous determination of simvastatin (SM) and ezetimibe (EZ), namely extended ratio subtraction (EXRSM), simultaneous ratio subtraction (SRSM), ratio difference (RDSM) and absorption factor (AFM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined, the methods were validated, and the specificity was assessed by analyzing synthetic mixtures containing the cited drugs. The four methods were applied for the determination of the cited drugs in tablets, and the obtained results were statistically compared with each other and with those of a reported HPLC method. The comparison showed that there is no significant difference between the proposed methods and the reported method regarding both accuracy and precision.
NASA Astrophysics Data System (ADS)
Wu, Linqin; Xu, Sheng; Jiang, Dezhi
2015-12-01
Industrial wireless networked control systems are widely used, and how to evaluate the performance of the wireless network is of great significance. In this paper, considering the shortcomings of existing performance evaluation methods, a comprehensive network performance evaluation method, the multi-index fuzzy analytic hierarchy process (MFAHP), which combines fuzzy mathematics with the traditional analytic hierarchy process (AHP), is presented. The method overcomes evaluations that are either incomplete or overly subjective. Experiments show that the method reflects real-world network performance. It provides direct guidance for protocol selection, network cabling, and node placement, and can meet the requirements of different scenarios by modifying the underlying parameters.
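The AHP component of such a method reduces, at each level of the hierarchy, to extracting priority weights from a reciprocal pairwise comparison matrix; a minimal sketch is given below, using the standard Saaty random indices for the consistency check and omitting the fuzzy extension, with purely illustrative example weights.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise comparison matrix via its
    principal eigenvector, plus the consistency ratio (CR < 0.1 is the usual
    acceptance threshold)."""
    A = np.asarray(pairwise, float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1) if n > 1 else 0.0   # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
          6: 1.24, 7: 1.32}.get(n, 1.49)                     # Saaty random index
    cr = ci / ri if ri else 0.0
    return w, cr

# illustrative example: weighting three network indexes (delay, loss, throughput)
weights, cr = ahp_weights([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
```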
Giske, Christian G.; Haldorsen, Bjørg; Matuschek, Erika; Schønning, Kristian; Leegaard, Truls M.; Kahlmeter, Gunnar
2014-01-01
Different antimicrobial susceptibility testing methods to detect low-level vancomycin resistance in enterococci were evaluated in a Scandinavian multicenter study (n = 28). A phenotypically and genotypically well-characterized diverse collection of Enterococcus faecalis (n = 12) and Enterococcus faecium (n = 18) strains with and without nonsusceptibility to vancomycin was examined blindly in Danish (n = 5), Norwegian (n = 13), and Swedish (n = 10) laboratories using the EUCAST disk diffusion method (n = 28) and the CLSI agar screen (n = 18) or the Vitek 2 system (bioMérieux) (n = 5). The EUCAST disk diffusion method (very major error [VME] rate, 7.0%; sensitivity, 0.93; major error [ME] rate, 2.4%; specificity, 0.98) and CLSI agar screen (VME rate, 6.6%; sensitivity, 0.93; ME rate, 5.6%; specificity, 0.94) performed significantly better (P = 0.02) than the Vitek 2 system (VME rate, 13%; sensitivity, 0.87; ME rate, 0%; specificity, 1). The performance of the EUCAST disk diffusion method was challenged by differences in vancomycin inhibition zone sizes as well as the experience of the personnel in interpreting fuzzy zone edges as an indication of vancomycin resistance. Laboratories using Oxoid agar (P < 0.0001) or Merck Mueller-Hinton (MH) agar (P = 0.027) for the disk diffusion assay performed significantly better than did laboratories using BBL MH II medium. Laboratories using Difco brain heart infusion (BHI) agar for the CLSI agar screen performed significantly better (P = 0.017) than did those using Oxoid BHI agar. In conclusion, both the EUCAST disk diffusion and CLSI agar screening methods performed acceptably (sensitivity, 0.93; specificity, 0.94 to 0.98) in the detection of VanB-type vancomycin-resistant enterococci with low-level resistance. Importantly, use of the CLSI agar screen requires careful monitoring of the vancomycin concentration in the plates. Moreover, disk diffusion methodology requires that personnel be trained in interpreting zone edges. PMID:24599985
Effect of handpiece maintenance method on bond strength.
Roberts, Howard W; Vandewalle, Kraig S; Charlton, David G; Leonard, Daniel L
2005-01-01
This study evaluated the effect of dental handpiece lubricant on the shear bond strength of three bonding agents to dentin. A lubrication-free handpiece (one that does not require the user to lubricate it) and a handpiece requiring routine lubrication were used in the study. In addition, two different handpiece lubrication methods (automated versus manual application) were also investigated. One hundred and eighty extracted human teeth were ground to expose flat dentin surfaces that were then finished with wet silicon carbide paper. The teeth were randomly divided into 18 groups (n=10). The dentin surface of each specimen was exposed for 30 seconds to water spray from either a lubrication-free handpiece or a lubricated handpiece. Prior to exposure, various lubrication regimens were used on the handpieces that required lubrication. The dentin surfaces were then treated with a total-etch, two-step; a self-etch, two-step; or a self-etch, one-step bonding agent. Resin composite cylinders were bonded to the dentin, and the specimens were then thermocycled and tested to failure in shear at seven days. Mean bond strength data were analyzed using Dunnett's multiple comparison test at a 0.05 level of significance. Results indicated that, within each of the bonding agents, there were no significant differences in bond strength between the control group and the treatment groups, regardless of the type of handpiece or the use of routine lubrication.
Automatic classification of bottles in crates
NASA Astrophysics Data System (ADS)
Aas, Kjersti; Eikvil, Line; Bremnes, Dag; Norbryhn, Andreas
1995-03-01
This paper presents a statistical method for the classification of bottles in crates for use in automatic bottle return machines. For the machines to reimburse the correct deposit, reliable recognition is important. The images are acquired by a laser range scanner that coregisters the distance to the object and the strength of the reflected signal. The objective is to identify the crate and the bottles from a library containing a number of legal types. Bottles with significantly different sizes are separated using quite simple methods, while a more sophisticated recognizer is required to distinguish the more similar bottle types. Good results have been obtained when testing the developed method on bottle types that are difficult to distinguish using simple methods.
NASA Technical Reports Server (NTRS)
Wilson, S.
1977-01-01
A method is presented for the determination of the representation matrices of the spin permutation group (symmetric group), a detailed knowledge of these matrices being required in the study of the electronic structure of atoms and molecules. The method is characterized by the use of two different coupling schemes. Unlike the Yamanouchi spin algebraic scheme, the method is not recursive. The matrices for the fundamental transpositions can be written down directly in one of the two bases. The method results in a computationally significant reduction in the number of matrix elements that have to be stored when compared with, say, the standard Young tableaux group theoretical approach.
CeasIng Cpap At standarD criteriA (CICADA): predicting a successful outcome.
Yin, Yue; Broom, Margaret; Wright, Audrey; Hovey, Donna; Abdel-Latif, Mohamed E; Shadbolt, Bruce; Todd, David A
2016-01-01
This is a retrospective analysis of a multicentre randomised controlled trial (RCT) in which we concluded that CeasIng Cpap At standarD criteriA (CICADA) in premature babies (PBs) <30 weeks gestational age (GA) was the significantly better method of ceasing CPAP. To identify factors that may influence the number of attempts to cease CPAP, we reviewed the records of 50 PBs from the RCT who used the CICADA method. PBs were grouped according to the number of attempts to cease CPAP (fast group ≤2 attempts and slow group >2 attempts to cease CPAP). There were 26 (fast group) and 24 (slow group) PBs included in the analysis. Results showed significant differences in mean GA (27.8 ± 0.3 vs 26.9 ± 0.3 [weeks ± SE], p = 0.03) and birth weight ([Bwt]; 1080 ± 48.8 vs 899 ± 45.8 [grams ± SE], p = 0.01) between groups. Significantly fewer PBs in the fast group had a patent ductus arteriosus (PDA) compared to the slow group (5/26 (19.2%) vs 13/24 (54.2%), p = 0.02). Bwt was a significant negative predictor of CPAP duration (r = -0.497, p = 0.03) and CPAP ceasing attempts (r = -0.290, p = 0.04). PBs with a higher GA and Bwt without a PDA ceased CPAP earlier using the CICADA method. Bwt was better than GA for predicting CPAP duration and attempts to cease CPAP. Our previous studies showed that CeasIng Cpap At standarD criteriA (CICADA) significantly reduces CPAP time, oxygen requirements and caffeine use. Some PBs using the CICADA method, however, required >2 attempts to cease CPAP ('slow CICADA' group). PBs in the 'fast CICADA' group (<3 attempts to cease CPAP) (a) have longer gestational age and higher birth weight, (b) have shorter mechanical ventilation and (c) have a lower incidence of patent ductus arteriosus. Attempts to cease CPAP decreased by 0.5 times per 1-week increase in GA and 0.3 times per 100-g increase in birth weight for PBs <30 weeks gestation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pask, J E; Sukumar, N; Guney, M
2011-02-28
Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite-differences (FD) and finite-elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. The method developed here is completely general, applicable to any crystal symmetry and to both metals and insulators alike. We have developed and implemented a full self-consistent Kohn-Sham method, including both total energies and forces for molecular dynamics, and developed a full MPI parallel implementation for large-scale calculations. We have applied the method to the gamut of physical systems, from simple insulating systems with light atoms to complex d- and f-electron systems, requiring large numbers of atomic-orbital enrichments. In every case, the new PU FE method attained the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the current state-of-the-art PW method. Finally, our initial MPI implementation has shown excellent parallel scaling of the most time-critical parts of the code up to 1728 processors, with clear indications of what will be required to achieve comparable scaling for the rest. Having shown that the key remaining disadvantage of real-space methods can in fact be overcome, the work has attracted significant attention: with sixteen invited talks, both domestic and international, so far; two papers published and another in preparation; and three new university and/or national laboratory collaborations, securing external funding to pursue a number of related research directions.
Having demonstrated the proof of principle, work now centers on the necessary extensions and optimizations required to bring the prototype method and code delivered here to production applications.
On-orbit calibration for star sensors without priori information.
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang
2017-07-24
The star sensor is an indispensable navigation device for a spacecraft, and on-orbit calibration is essential to guarantee its operational performance. However, traditional calibration methods rely on ground information and are invalid without a priori information. Uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation and control system. In this paper, a novel calibration method without a priori information for on-orbit star sensors is proposed. Firstly, a simplified back-propagation neural network is designed for focal length and main point estimation along with system property evaluation, called coarse calibration. Then the unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, main point and distortion. The proposed method benefits from self-initialization, and no attitude information or preinstalled sensor parameters are required. Precise star sensor parameter estimation can be achieved without a priori information, which is a significant improvement for on-orbit devices. Simulation and experiment results demonstrate that the calibration is easy to operate, with high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.
Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells
NASA Astrophysics Data System (ADS)
Zimmerman, A. H.
1987-09-01
The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.
Formal Methods of V&V of Partial Specifications: An Experience Report
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Callahan, John
1997-01-01
This paper describes our work exploring the suitability of formal specification methods for independent verification and validation (IV&V) of software specifications for large, safety-critical systems. An IV&V contractor often has to perform rapid analysis on incomplete specifications, with no control over how those specifications are represented. Lightweight formal methods show significant promise in this context, as they offer a way of uncovering major errors without the burden of full proofs of correctness. We describe an experiment in the application of the method SCR to testing for consistency properties of a partial model of requirements for Fault Detection, Isolation and Recovery on the space station. We conclude that the insights gained from formalizing a specification are valuable, and that it is the process of formalization, rather than the end product, that is important. It was only necessary to build enough of the formal model to test the properties in which we were interested. Maintenance of fidelity between multiple representations of the same requirements (as they evolve) is still a problem and deserves further study.
Chen, Chien-Jen; Guo, G Bih-Fang
2003-11-01
The optimal methods to perform external cardioversion of atrial fibrillation (AF) have yet to be conclusively determined. This study was performed to examine the relative efficacy of different pad positions on cardioversion success and the relationship between the transthoracic impedance (TTI) and energy requirement for AF cardioversion. Seventy patients with persistent AF undergoing elective cardioversion were randomly assigned to an electrode pad position situated either over the ventricular apex-right infraclavicular area (AL group, n = 31) or over the right lower sternal border-left infrascapular area close to the spine (AP group, n = 39). Energy was delivered at an initial 100 joules (J) and then increased to 150 J, 200 J, 300 J, and 360 J if needed. Energy and TTI readings were recorded. Mean TTI was significantly lower in the AP group than in the AL group. However, the cumulative success rates at each energy level were similar in the two groups (23% vs 19.4%, 41% vs 45.2%, 66.7% vs 74.2%, 79.5% vs 77.4%, and 84.6% vs 83.9% at 100 J, 150 J, 200 J, 300 J and 360 J, respectively). In the AP group, converters showed slightly lower TTI compared to nonconverters. In the AL group, converters showed significantly lower TTI compared to nonconverters. However, for all patients as a group, TTI was the only predictor for cardioversion success and showed a significant relationship to the energy required for cardioversion, which can be described by a quadratic equation. Rather than pad position, TTI is the single factor that significantly affects cardioversion and correlates with energy requirement. The relationship between energy requirement and TTI further allows estimation of the energy required to achieve a successful cardioversion.
NASA Astrophysics Data System (ADS)
Smith, Zachary J.; Gao, Tingjuan; Lin, Tzu-Yin; Carrade-Holt, Danielle; Lane, Stephen M.; Matthews, Dennis L.; Dwyre, Denis M.; Wachsmann-Hogiu, Sebastian
2016-03-01
Cell counting in human body fluids such as blood, urine, and CSF is a critical step in the diagnostic process for many diseases. Current automated methods for cell counting are based on flow cytometry systems. However, these automated methods are bulky, costly, require significant user expertise, and are not well suited to counting cells in fluids other than blood. Therefore, their use is limited to large central laboratories that process enough volume of blood to recoup the significant capital investment these instruments require. We present in this talk a combination of (1) a low-cost microscope system, (2) a simple sample preparation method, and (3) fully automated analysis designed to provide cell counts in blood and body fluids. We show results on humans and on companion and farm animals, demonstrating that accurate red cell, white cell, and platelet counts, as well as hemoglobin concentration, can be obtained in blood, along with a 3-part white cell differential in human samples. We can also accurately count red and white cells in body fluids with a limit of detection ~3 orders of magnitude smaller than current automated instruments. This method uses less than 1 microliter of blood, and less than 5 microliters of body fluids to make its measurements, making it highly compatible with finger-stick style collections, as well as appropriate for small animals such as laboratory mice where larger volume blood collections are dangerous to the animal's health.
Ohata, Erika; Matsuo, Kiyoshi; Ban, Ryokuya; Shiba, Masato; Yasunaga, Yoshichika
2013-01-01
Background: For surgical suturing, a Webster needle holder uses wrist supinating with supinator and extrinsic muscles, whereas a pen needle holder uses finger twisting with intrinsic and extrinsic muscles. Because the latter is better suited to microsurgery, which requires fine suturing with less forearm muscle movement, we have recently adopted an enlarged pen needle holder scaled from a micro needle holder for fine skin suturing. In this study, we assessed whether the enlarged pen needle holder reduced forearm muscle movement during fine skin suturing as compared with the Webster needle holder. Methods: A fine skin-suturing task was performed using pen holding with the enlarged micro needle holder or scissor holding with the Webster needle holder by 9 experienced and 6 inexperienced microsurgeons. The task lasted for 60 seconds and was randomly performed 3 times for each method. Forearm flexor and extensor muscular activities were evaluated by surface electromyography. Results: The enlarged pen needle holder method required significantly less forearm muscle movement for experienced microsurgeons despite it being their first time using the instrument. There was no significant difference between the 2 methods for inexperienced microsurgeons. Conclusions: Experienced microsurgeons conserved forearm muscle movement by finger twisting in fine skin suturing with the enlarged pen needle holder. Inexperienced microsurgeons may benefit from the enlarged pen needle holder, even for fine skin suturing, to develop their internal acquisition model of the dynamics of finger twisting. PMID:23691259
Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald
2016-01-01
Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm³) nor model-based (26.87 ± 2.99 mm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.
Autonomous Assembly of Modular Structures in Space and on Extraterrestrial Locations
NASA Technical Reports Server (NTRS)
Alhorn, Dean C.
2005-01-01
The fulfillment of the new U.S. National Vision for Space Exploration requires many new enabling technologies to accomplish the goal of utilizing space for commercial activities and for returning humans to the moon and extraterrestrial environments. Traditionally, flight structures are manufactured as complete systems and require humans to complete the integration and assembly in orbit. These structures are bulky and require the use of heavy launch vehicles to send the units to the desired location, e.g. the International Space Station (ISS). This method requires a high degree of safety, numerous space walks and significant cost for the humans to perform the assembly in orbit. For example, for assembly and maintenance of the ISS, 52 Extravehicular Activities (EVAs) have been performed so far with a total EVA time of approximately 322 hours. Sixteen (16) shuttle flights have been to the ISS to perform these activities with an approximate cost of $450M per mission. For future space missions, costs have to be reduced to reasonably achieve the exploration goals. One concept that has been proposed is the autonomous assembly of space structures. This concept is an affordable, reliable solution to in-space and extraterrestrial assembly operations. Assembly is autonomously performed when two components containing onboard electronics join after recognizing that the joint is appropriate and in the precise position and orientation required for assembly. The mechanism only activates when the specifications are correct and in a nominal range. After assembly, local sensors and electronics monitor the integrity of the joint for feedback to a master controller. To achieve this concept will require a shift in the methods for designing space structures. In addition, innovative techniques will be required to perform the assembly autonomously. Monitoring of the assembled joint will be necessary for safety and structural integrity. If a very large structure is to be assembled in orbit, then the number of integrity sensors will be significant. Thus simple, low cost sensors are integral to the success of this concept. This paper will address these issues and will propose a novel concept for assembling space structures autonomously. The paper will present several autonomous assembly methods. Core technologies required to achieve in-space assembly will be discussed and novel techniques for communicating, sensing, docking and assembly will be detailed. These core technologies are critical to the goal of utilizing space in a cost efficient and safe manner. Finally, these technologies can also be applied to other systems both on Earth and in extraterrestrial environments.
Lewis-Fernández, Roberto; Aggarwal, Neil Krishan; Lam, Peter C; Galfalvy, Hanga; Weiss, Mitchell G; Kirmayer, Laurence J; Paralikar, Vasudeo; Deshpande, Smita N; Díaz, Esperanza; Nicasio, Andel V; Boiler, Marit; Alarcón, Renato D; Rohlof, Hans; Groen, Simon; van Dijk, Rob C J; Jadhav, Sushrut; Sarmukaddam, Sanjeev; Ndetei, David; Scalco, Monica Z; Bassiri, Kavoos; Aguilar-Gaxiola, Sergio; Ton, Hendry; Westermeyer, Joseph; Vega-Dienstmaier, Johann M
2017-04-01
Background: There is a need for clinical tools to identify cultural issues in diagnostic assessment. Aims: To assess the feasibility, acceptability and clinical utility of the DSM-5 Cultural Formulation Interview (CFI) in routine clinical practice. Method: Mixed-methods evaluation of field trial data from six countries. The CFI was administered to diagnostically diverse psychiatric out-patients during a diagnostic interview. In post-evaluation sessions, patients and clinicians completed debriefing qualitative interviews and Likert-scale questionnaires. The duration of CFI administration and the full diagnostic session were monitored. Results: Mixed-methods data from 318 patients and 75 clinicians found the CFI feasible, acceptable and useful. Clinician feasibility ratings were significantly lower than patient ratings and other clinician-assessed outcomes. After administering one CFI, however, clinician feasibility ratings improved significantly and subsequent interviews required less time. Conclusions: The CFI was included in DSM-5 as a feasible, acceptable and useful cultural assessment tool. © The Royal College of Psychiatrists 2017.
Review of Batteryless Wireless Sensors Using Additively Manufactured Microwave Resonators.
Memon, Muhammad Usman; Lim, Sungjoon
2017-09-09
The significant improvements observed in the field of bulk-production of printed microchip technologies in the past decade have allowed the fabrication of microchip printing on numerous materials including organic and flexible substrates. Printed sensors and electronics are of significant interest owing to the fast, low-cost techniques used in their fabrication. The increasing amount of research and deployment of specially printed electronic sensors in a number of applications demonstrates the immense attention paid by researchers to this topic in the pursuit of achieving wider-scale electronics on different dielectric materials. Although there are many traditional methods for fabricating radio frequency (RF) components, they are time-consuming, expensive, complicated, and require more power for operation than additive fabrication methods. This paper serves as a summary/review of improvements made to the additive printing technologies. The article focuses on three recently developed printing methods for the fabrication of wireless sensors operating at microwave frequencies. The fabrication methods discussed include inkjet printing, three-dimensional (3D) printing, and screen printing.
Review of Batteryless Wireless Sensors Using Additively Manufactured Microwave Resonators
2017-01-01
The significant improvements observed in the field of bulk-production of printed microchip technologies in the past decade have allowed the fabrication of microchip printing on numerous materials including organic and flexible substrates. Printed sensors and electronics are of significant interest owing to the fast, low-cost techniques used in their fabrication. The increasing amount of research and deployment of specially printed electronic sensors in a number of applications demonstrates the immense attention paid by researchers to this topic in the pursuit of achieving wider-scale electronics on different dielectric materials. Although there are many traditional methods for fabricating radio frequency (RF) components, they are time-consuming, expensive, complicated, and require more power for operation than additive fabrication methods. This paper serves as a summary/review of improvements made to the additive printing technologies. The article focuses on three recently developed printing methods for the fabrication of wireless sensors operating at microwave frequencies. The fabrication methods discussed include inkjet printing, three-dimensional (3D) printing, and screen printing. PMID:28891947
NASA Astrophysics Data System (ADS)
Jiang, Zhen-Hua; Yan, Chao; Yu, Jian
2013-08-01
Two types of implicit algorithms have been improved for the high order discontinuous Galerkin (DG) method to solve compressible Navier-Stokes (NS) equations on triangular grids. A block lower-upper symmetric Gauss-Seidel (BLU-SGS) approach is implemented as a nonlinear iterative scheme, and a modified LU-SGS (LLU-SGS) approach is suggested to reduce the memory requirements while retaining the good convergence performance of the original LU-SGS approach. Both implicit schemes have the significant advantage that only the diagonal block matrix is stored. The resulting implicit high-order DG methods are applied, in combination with Hermite weighted essentially non-oscillatory (HWENO) limiters, to solve viscous flow problems. Numerical results demonstrate that the present implicit methods are able to achieve significant efficiency improvements over their explicit counterparts, and that for viscous flows with shocks the HWENO limiters can be used to achieve the desired essentially non-oscillatory shock transition and the designed high-order accuracy simultaneously.
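To make the sweep structure concrete, the following Python sketch (an illustration only, not the authors' DG solver; the function and argument names are assumptions) shows a block symmetric Gauss-Seidel iteration in which only the diagonal blocks are stored and all off-diagonal coupling is supplied matrix-free:

```python
import numpy as np

def blu_sgs_sweep(diag_blocks, offdiag_matvec, b, x):
    """One forward + backward block Gauss-Seidel sweep on A x = b.

    diag_blocks[i]       : dense diagonal block A_ii for element i (the only
                           blocks kept in memory)
    offdiag_matvec(i, x) : returns the sum over j != i of A_ij @ x[j],
                           applied matrix-free from the current solution state
    b, x                 : lists of per-element right-hand-side / solution vectors
    """
    n = len(diag_blocks)
    for i in range(n):                       # forward sweep
        r = b[i] - offdiag_matvec(i, x)
        x[i] = np.linalg.solve(diag_blocks[i], r)
    for i in reversed(range(n)):             # backward sweep
        r = b[i] - offdiag_matvec(i, x)
        x[i] = np.linalg.solve(diag_blocks[i], r)
    return x
```

In a real DG solver the diagonal blocks would be assembled and factored once per nonlinear step and the off-diagonal products evaluated from face fluxes; the sketch only illustrates why diagonal-block storage suffices.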
Compressive Spectral Method for the Simulation of the Nonlinear Gravity Waves
Bayındır, Cihan
2016-01-01
In this paper an approach for decreasing the computational effort required for spectral simulations of fully nonlinear ocean waves is introduced. The proposed approach utilizes the compressive sampling algorithm and depends on the idea of using a smaller number of spectral components compared to the classical spectral method. After performing the time integration with a smaller number of spectral components and using the compressive sampling technique, it is shown that the ocean wave field can be reconstructed with a significantly better efficiency compared to the classical spectral method. For the sparse ocean wave model in the frequency domain, fully nonlinear ocean waves with a JONSWAP spectrum are considered. By implementation of a high-order spectral method it is shown that the proposed methodology can simulate the linear and the fully nonlinear ocean waves with negligible difference in accuracy and with great efficiency by reducing the computation time significantly, especially for large time evolutions. PMID:26911357
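The reconstruction step can be illustrated with a toy compressive-sampling example in Python. Orthogonal matching pursuit is used here as a stand-in for the sparse recovery solver (the paper's actual algorithm and wave model are not reproduced); all names and sizes are illustrative:

```python
import numpy as np

def omp_sparse_spectrum(samples, sample_idx, n_modes, sparsity):
    """Recover a sparse Fourier spectrum from a few time samples by
    orthogonal matching pursuit (a simple compressive-sampling solver)."""
    # Partial inverse-DFT sensing matrix: observed samples vs. all modes
    A = np.exp(2j * np.pi * np.outer(sample_idx, np.arange(n_modes)) / n_modes) / n_modes
    residual, support, coeffs = samples.astype(complex), [], None
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(A.conj().T @ residual)))   # best-matching mode
        if k not in support:
            support.append(k)
        coeffs, *_ = np.linalg.lstsq(A[:, support], samples, rcond=None)
        residual = samples - A[:, support] @ coeffs
    spectrum = np.zeros(n_modes, dtype=complex)
    spectrum[support] = coeffs
    return spectrum

# Toy "wave field" with 3 active modes, observed at 40 of 256 time points
rng = np.random.default_rng(0)
N = 256
true_spectrum = np.zeros(N, dtype=complex)
true_spectrum[[3, 17, 60]] = [1.0, 0.5, 0.25]
x = np.fft.ifft(true_spectrum)                 # simulated time series
idx = rng.choice(N, size=40, replace=False)
recovered = omp_sparse_spectrum(x[idx], idx, N, sparsity=3)
```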
Evaluation of Adaptive Subdivision Method on Mobile Device
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila
2013-06-01
Recently, there have been significant improvements in the capabilities of mobile devices, but rendering large 3D objects is still tedious because of the resource constraints of mobile devices. To reduce the storage requirement, the 3D object is simplified, but certain areas of curvature are compromised and the surface will not be smooth. Therefore, a method to smooth selected areas of curvature is implemented. One of the popular methods is the adaptive subdivision method. Experiments were performed on two data sets, with results evaluated in terms of processing time, rendering speed and the appearance of the object on the devices. The results show a decline in frame-rate performance due to the increase in the number of triangles with each level of iteration, while the processing time for generating the new mesh also increases significantly. Because of the difference in screen size between the devices, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad.
New Laboratory Methods for Characterizing the Immersion Factors for Irradiance
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Zibordi, Giuseppe; D'Alimonte, Davide; van der Linde, Dirk; Brown, James W.
2003-01-01
The experimental determination of the immersion factor, I_f(λ), of irradiance collectors is a requirement for any in-water radiometer. The eighth SeaWiFS Intercalibration Round-Robin Experiment (SIRREX-8) showed different implementations, at different laboratories, of the same I_f(λ) measurement protocol. The different implementations make use of different setups, volumes, and water types. Consequently, they exhibit different accuracies and require different execution times for characterizing an irradiance sensor. In view of standardizing the characterization of I_f(λ) values for in-water radiometers, together with an increase in the accuracy of methods and a decrease in the execution time, alternative methods are presented, and assessed versus the traditional method. The proposed new laboratory methods include: a) the continuous method, in which optical measurements taken at discrete water depths are replaced by continuous profiles created by removing the water from the water vessel at a constant flow rate (which significantly reduces the time required for the characterization of a single radiometer); and b) the Compact Portable Advanced Characterization Tank (ComPACT) method, in which the commonly used large tanks are replaced by a small water vessel, thereby allowing the determination of I_f(λ) values with a small water volume, and more importantly, permitting I_f(λ) characterizations with pure water. Intercomparisons between the continuous and the traditional method showed results within the variance of I_f(λ) determinations. The use of the continuous method, however, showed a much shorter realization time. Intercomparisons between the ComPACT and the traditional method showed generally higher I_f(λ) values for the former. This is in agreement with the generalized expectation of a reduction in scattering effects, because of the use of pure water with the ComPACT method versus the use of tap water with the traditional method.
Downscaling Global Emissions and Its Implications Derived from Climate Model Experiments
Abe, Manabu; Kinoshita, Tsuguki; Hasegawa, Tomoko; Kawase, Hiroaki; Kushida, Kazuhide; Masui, Toshihiko; Oka, Kazutaka; Shiogama, Hideo; Takahashi, Kiyoshi; Tatebe, Hiroaki; Yoshikawa, Minoru
2017-01-01
In climate change research, future scenarios of greenhouse gas and air pollutant emissions generated by integrated assessment models (IAMs) are used in climate models (CMs) and earth system models to analyze future interactions and feedback between human activities and climate. However, the spatial resolutions of IAMs and CMs differ. IAMs usually disaggregate the world into 10–30 aggregated regions, whereas CMs require a grid-based spatial resolution. Therefore, downscaling emissions data from IAMs into a finer scale is necessary to input the emissions into CMs. In this study, we examined whether differences in downscaling methods significantly affect climate variables such as temperature and precipitation. We tested two downscaling methods using the same regionally aggregated sulfur emissions scenario obtained from the Asian-Pacific Integrated Model/Computable General Equilibrium (AIM/CGE) model. The downscaled emissions were fed into the Model for Interdisciplinary Research on Climate (MIROC). One of the methods assumed a strong convergence of national emissions intensity (e.g., emissions per gross domestic product), while the other was based on inertia (i.e., the base-year remained unchanged). The emissions intensities in the downscaled spatial emissions generated from the two methods markedly differed, whereas the emissions densities (emissions per area) were similar. We investigated whether the climate change projections of temperature and precipitation would significantly differ between the two methods by applying a field significance test, and found little evidence of a significant difference between the two methods. Moreover, there was no clear evidence of a difference between the climate simulations based on these two downscaling methods. PMID:28076446
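A minimal sketch of the two downscaling rules compared in the study, written in Python with hypothetical data (the real methods operate on country- and grid-cell-level data and converge intensities over time; this shows only the core allocation step):

```python
import numpy as np

def downscale_inertia(region_total, base_year_emissions):
    """Inertia rule: keep the base-year spatial pattern and rescale it so
    that it sums to the new regional total."""
    weights = base_year_emissions / base_year_emissions.sum()
    return region_total * weights

def downscale_intensity_convergence(region_total, gdp):
    """Strong-convergence rule (simplified): assume national emission
    intensities (emissions per GDP) converge, so the regional total is
    allocated in proportion to GDP."""
    weights = gdp / gdp.sum()
    return region_total * weights

# One IAM region with three countries (all numbers are placeholders)
base_emissions = np.array([10.0, 5.0, 1.0])   # base-year SO2 emissions
gdp = np.array([2.0, 6.0, 2.0])               # GDP
regional_total = 8.0                          # scenario emissions for the region
print(downscale_inertia(regional_total, base_emissions))
print(downscale_intensity_convergence(regional_total, gdp))
```

Both rules conserve the regional total; they differ only in the spatial weights used to spread it.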
Tume, Lyvonne N; Baines, Paul B; Guerrero, Rafael; Hurley, Margaret A; Johnson, Robert; Kalantre, Atul; Ramaraj, Ram; Ritson, Paul C; Walsh, Laura; Arnold, Philip D
2017-07-01
To determine the hemodynamic effect of the tracheal suction method in the first 36 hours after high-risk infant heart surgery in the PICU and to compare open and closed suctioning techniques. Pilot randomized crossover study. Single PICU in the United Kingdom. Infants undergoing surgical palliation with Norwood Sano, modified Blalock-Taussig shunt, or pulmonary artery banding in the first 36 hours postoperatively. Infants were randomized to receive open or closed (in-line) tracheal suctioning either for their first or second study tracheal suction in the first 36 hours postoperatively. Twenty-four infants were enrolled over 18 months, 11 after modified Blalock-Taussig shunt, seven after Norwood Sano, and six after pulmonary artery banding. Thirteen patients received the open suction method first followed by the closed suction method second, and 11 patients received the closed suction method first followed by the open suction method second in the first 36 hours after their surgery. There were statistically significant larger changes in heart rate (p = 0.002), systolic blood pressure (p = 0.022), diastolic blood pressure (p = 0.009), mean blood pressure (p = 0.007), and arterial saturation (p = 0.040) using the open suction method, compared with closed suctioning, although none were clinically significant (defined as requiring any intervention). There were no clinically significant differences between closed and open tracheal suction methods; however, there were statistically significant greater changes in some hemodynamic variables with open tracheal suctioning, suggesting that the closed technique may be safer in children with more precarious physiology.
Finer, Lawrence B; Sonfield, Adam; Jones, Rachel K
2014-02-01
As part of the Affordable Care Act, a federal requirement for private health plans to cover contraceptive methods, services and counseling, without any out-of-pocket costs to patients, took effect for millions of Americans in January 2013. Data for this study come from a subset of the 3207 women aged 18-39 years who responded to two waves of a national longitudinal survey. This analysis focused on the 889 women who were using hormonal contraceptive methods in both the fall 2012 and spring 2013 waves and the 343 women who used the intrauterine device at either wave. Women were asked about the amount they paid out of pocket in an average month for their method of choice. Between Wave 1 and Wave 2, the proportion of privately insured women paying zero dollars out of pocket for oral contraceptives increased substantially, from 15% to 40%; by contrast, there was no significant change among publicly insured or uninsured women (whose coverage was not affected by the new federal requirement). Similar changes were seen among privately insured women using the vaginal ring. The initial implementation of the federal contraceptive coverage requirement appears to have had a notable impact on the out-of-pocket costs paid by privately insured women. Additional progress is likely as the requirement phases in to apply to more private plans, but with evidence that not all methods are being treated equally, policymakers should consider stepped-up oversight and enforcement of the provision. This study measures the out-of-pocket costs for women with private, public and no insurance prior to the federal contraceptive coverage requirement and after it took effect; in doing so, it highlights areas of progress in eliminating these costs and areas that need further progress. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Resource-constrained scheduling with hard due windows and rejection penalties
NASA Astrophysics Data System (ADS)
Garcia, Christopher
2016-09-01
This work studies a scheduling problem where each job must be either accepted and scheduled to complete within its specified due window, or rejected altogether. Each job has a certain processing time and contributes a certain profit if accepted or penalty cost if rejected. There is a set of renewable resources, and no resource limit can be exceeded at any time. Each job requires a certain amount of each resource when processed, and the objective is to maximize total profit. A mixed-integer programming formulation and three approximation algorithms are presented: a priority rule heuristic, an algorithm based on the metaheuristic for randomized priority search and an evolutionary algorithm. Computational experiments comparing these four solution methods were performed on a set of generated benchmark problems covering a wide range of problem characteristics. The evolutionary algorithm outperformed the other methods in most cases, often significantly, and never significantly underperformed any method.
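As a rough illustration of a priority-rule heuristic for this problem class (not the paper's exact algorithm; the job attributes, the single-resource simplification and the priority rule are assumptions), consider the following Python sketch:

```python
def greedy_schedule(jobs, capacity, horizon):
    """Greedy priority-rule heuristic for scheduling with hard due windows
    and rejection penalties (one reasonable rule among many).

    jobs: list of dicts with keys
        'p'       processing time (integer periods)
        'window'  (earliest start, latest finish) hard due window
        'profit'  profit if accepted, 'penalty' cost if rejected
        'req'     resource requirement per period (single renewable resource)
    capacity: per-period resource limit, horizon: number of periods.
    Returns (total_profit, schedule) where schedule maps job index -> start.
    """
    usage = [0.0] * horizon
    schedule, total = {}, 0.0
    # priority: value of acceptance (profit + avoided penalty) per unit work
    order = sorted(range(len(jobs)),
                   key=lambda i: -(jobs[i]['profit'] + jobs[i]['penalty'])
                                 / (jobs[i]['p'] * jobs[i]['req']))
    for i in order:
        j = jobs[i]
        est, lft = j['window']
        placed = False
        for start in range(est, lft - j['p'] + 1):
            if all(usage[t] + j['req'] <= capacity
                   for t in range(start, start + j['p'])):
                for t in range(start, start + j['p']):
                    usage[t] += j['req']
                schedule[i] = start
                total += j['profit']
                placed = True
                break
        if not placed:
            total -= j['penalty']
    return total, schedule

# Tiny example with placeholder data
jobs = [
    {'p': 3, 'window': (0, 6),  'profit': 10.0, 'penalty': 2.0, 'req': 2.0},
    {'p': 4, 'window': (2, 10), 'profit': 7.0,  'penalty': 1.0, 'req': 3.0},
    {'p': 2, 'window': (0, 4),  'profit': 4.0,  'penalty': 0.5, 'req': 4.0},
]
print(greedy_schedule(jobs, capacity=5.0, horizon=12))
```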
A CLEAN-based method for mosaic deconvolution
NASA Astrophysics Data System (ADS)
Gueth, F.; Guilloteau, S.; Viallefond, F.
1995-03-01
Mosaicing may be used in aperture synthesis to map large fields of view. So far, only MEM techniques have been used to deconvolve mosaic images (Cornwell 1988). A CLEAN-based method has been developed, in which the components are located in a modified expression. This allows a better utilization of the information and a consequent noise reduction in the overlapping regions. Simulations show that this method gives correct clean maps and recovers most of the flux of the sources. Including the short-spacing visibilities in the data set is strongly required; their absence introduces an artificial lack of structure on the corresponding scales in the mosaic images. The formation of "stripes" in clean maps may also occur, but this phenomenon can be significantly reduced by using the Steer-Dewdney-Ito algorithm (Steer, Dewdney & Ito 1984) to identify the CLEAN components.
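The following Python sketch gives a schematic of a joint CLEAN loop for mosaics (a toy illustration under simplifying assumptions, not the algorithm of Gueth et al.): components are searched in a primary-beam-weighted combination of the per-pointing residuals and then subtracted, attenuated by each primary beam, from every pointing.

```python
import numpy as np

def mosaic_clean(dirty_maps, psfs, pbeams, gain=0.1, n_iter=200, threshold=0.0):
    """Toy joint CLEAN loop for a mosaic of pointings on a common grid.

    dirty_maps[k], psfs[k], pbeams[k]: dirty image, centered PSF and primary
    beam of pointing k, all sampled on the same mosaic grid.
    """
    residuals = [d.copy() for d in dirty_maps]
    model = np.zeros_like(dirty_maps[0])
    wsum = sum(b ** 2 for b in pbeams)
    for _ in range(n_iter):
        # beam-weighted combination of the per-pointing residuals
        comb = sum(b * r for b, r in zip(pbeams, residuals)) / np.maximum(wsum, 1e-12)
        iy, ix = np.unravel_index(np.argmax(np.abs(comb)), comb.shape)
        if abs(comb[iy, ix]) < threshold:
            break
        amp = gain * comb[iy, ix]
        model[iy, ix] += amp
        for k, (psf, beam) in enumerate(zip(psfs, pbeams)):
            # shift the centered PSF to the component position (wraparound
            # is ignored here, which is acceptable for a toy example)
            shifted = np.roll(np.roll(psf, iy - psf.shape[0] // 2, axis=0),
                              ix - psf.shape[1] // 2, axis=1)
            residuals[k] -= amp * beam[iy, ix] * shifted
    return model, residuals
```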
Pay-per-view in interlibrary loan: a case study
Brown, Heather L
2012-01-01
Question: Can purchasing articles from publishers be a cost-effective method of interlibrary loan (ILL) for libraries owing significant copyright royalties? Setting: The University of Nebraska Medical Center's McGoogan Library of Medicine provides the case study. Method: Completed ILL requests that required copyright payment were identified for the first quarter of 2009. The cost of purchasing these articles from publishers was obtained from the publishers' websites and compared to the full ILL cost. A pilot period of purchasing articles from the publisher was then conducted. Results: The first-quarter sample data showed that approximately $500.00 could have been saved if the articles were purchased from the publisher. The pilot period and continued purchasing practice have resulted in significant savings for the library. Conclusion: Purchasing articles directly from the publisher is a cost-effective method for libraries burdened with high copyright royalty payments. PMID:22514505
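The underlying comparison is simple arithmetic; a minimal sketch follows (the dollar figures are placeholders, not the library's actual costs):

```python
def cheaper_to_purchase(publisher_price, ill_processing_cost, copyright_royalty):
    """True if buying the article from the publisher costs less than
    borrowing it via ILL and paying the copyright royalty."""
    return publisher_price < ill_processing_cost + copyright_royalty

# e.g., a $31.50 purchase vs. an $11.00 ILL fee plus a $28.00 royalty
print(cheaper_to_purchase(31.50, 11.00, 28.00))   # True -> purchase instead
```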
Kalantari, Mohammad Hassan; Ghoraishian, Seyed Ahmad; Mohaghegh, Mina
2017-01-01
Objective: The aim of this in vitro study was to evaluate the accuracy of shade matching using two spectrophotometric devices. Materials and Methods: Thirteen patients who required a full-coverage restoration for one of their maxillary central incisors, with the adjacent central incisor intact, were selected. Three identical frameworks were constructed for each tooth using computer-aided design and computer-aided manufacturing technology. Shade matching was performed using the Vita Easyshade spectrophotometer, the Shadepilot spectrophotometer, and the Vitapan classical shade guide for the first, second, and third crowns, respectively. After application, firing, and glazing of the porcelain, the color was evaluated and scored by five inspectors. Results: Both spectrophotometric systems showed significantly better results than the visual method (P < 0.05), while there were no significant differences between the Vita Easyshade and Shadepilot spectrophotometers (P < 0.05). Conclusion: Spectrophotometers are a good substitute for visual color selection methods. PMID:28729792
Center of Excellence for Hypersonics Research
2012-01-25
detailed simulations of actual combustor configurations, and ultimately for the optimization of hypersonic air-breathing propulsion system flow paths... vehicle development programs. The Center engaged leading experts in experimental and computational analysis of hypersonic flows to provide research... advanced hypersonic vehicles and space access systems will require significant advances in the design methods and ground testing techniques to ensure
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-06
... via CDX, optical disc (CD or DVD), and paper. Regardless of the method of submission, EPA will require... support documents (including NOCs), though optical discs may continue to be used. Two years after the effective date of this final rule, optical discs will no longer be accepted, and all submitters must submit...
Regional Educational Strategies-Methods to Promote Human Resource Development in Small Businesses
ERIC Educational Resources Information Center
Knapp, Kornelius; Zschunke, Melanie
2009-01-01
Over the next few decades, demographic change will cause significant changes in the working population. How businesses prepare for these changes will have a decisive impact on whether this transformation has a beneficial or detrimental effect on the economy. Small and medium-sized businesses do not possess the resources required to develop and…
Quasi-Algorithm Methods and Techniques for Specifying Objective Job/Task Performance Requirements
1978-07-01
succeeding experts. While "dottings of i's and crossings of t's" may still occur, these trivia no longer significantly affect the course of task... That is, as soon as a branch entered under the assumption that condition A applied was completed, administrator and expert recycled to the
Moderating Factors of Video-Modeling with Other as Model: A Meta-Analysis of Single-Case Studies
ERIC Educational Resources Information Center
Mason, Rose A.; Ganz, Jennifer B.; Parker, Richard I.; Burke, Mack D.; Camargo, Siglia P.
2012-01-01
Video modeling with other as model (VMO) is a more practical method for implementing video-based modeling techniques, such as video self-modeling, which requires significantly more editing. Despite this, identification of contextual factors such as participant characteristics and targeted outcomes that moderate the effectiveness of VMO has not…
Preliminary study of temperature measurement techniques for Stirling engine reciprocating seals
NASA Technical Reports Server (NTRS)
Wilcock, D. F.; Hoogenboom, L.; Meinders, M.; Winer, W. O.
1981-01-01
Methods of determining the contact surface temperature in reciprocating seals are investigated. Direct infrared measurement of surface temperatures of a rod exiting a loaded cap seal or simulated seal are compared with surface thermocouple measurements. Significant cooling of the surface requires several milliseconds so that exit temperatures may be considered representative of internal contact temperatures.
Barroso, Gerardo; Chaya, Miguel; Bolaños, Rubén; Rosado, Yadira; García León, Fernando; Ibarrola, Eduardo
2005-05-01
To evaluate sperm recovery and total sperm motility in three different sperm preparation techniques (density gradient, simple washing and swim-up). A total of 290 subjects were randomly evaluated from November 2001 to March 2003. The density gradient method required Isolate (upper and lower layers). Centrifugation was performed at 400 g for 10 minutes and evaluation was done using the Makler counting chamber. The simple washing method included the use of HTF-M supplemented with 7.5% SSS, with centrifugation at 250 g, obtaining at the end 0.5 mL of the sperm sample. The swim-up method required HTF-M supplemented with 7.5% SSS, with an incubation period of 60 minutes at 37 degrees C. The demographic characteristics, evaluated through their standard error, 95% ICC, and 50th percentile, were similar. The application of multiple comparison tests and analysis of variance showed significant differences between the sperm preparations before and after capacitation. A superior recovery rate was observed with the density gradient and swim-up methods; nevertheless, the samples used for the simple washing method showed a diminished sperm recovery from the original sample. Sperm preparation techniques have become very useful in male infertility treatments, allowing higher sperm recovery and motility rates. The seminal parameters evaluated from the original sperm sample will determine the best sperm preparation technique in those patients who require it.
Technique for positioning hologram for balancing large data capacity with fast readout
NASA Astrophysics Data System (ADS)
Shimada, Ken-ichi; Hosaka, Makoto; Yamazaki, Kazuyoshi; Onoe, Shinsuke; Ide, Tatsuro
2017-09-01
The technical difficulty of balancing large data capacity with a high data transfer rate in holographic data storage systems (HDSSs) is significantly high because of tight tolerances for physical perturbation. From a system margin perspective in terabyte-class HDSSs, the positioning error of a holographic disc should be within about 10 µm to ensure high readout quality. Furthermore, fine control of the positioning should be accomplished within a time frame of about 10 ms for a high data transfer rate of the Gbps class, while a conventional method based on servo control of spindle or sled motors can rarely satisfy the requirement. In this study, a new compensation method for the effect of positioning error, which precisely controls the positioning of a Nyquist aperture instead of a holographic disc, has been developed. The method relaxes the markedly low positional tolerance of a holographic disc. Moreover, owing to the markedly light weight of the aperture, positioning control within the required time frame becomes feasible.
Efficient Bayesian mixed model analysis increases association power in large cohorts
Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L
2014-01-01
Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women's Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
Improved telescope focus using only two focus images
NASA Astrophysics Data System (ADS)
Barrick, Gregory; Vermeulen, Tom; Thomas, James
2008-07-01
In an effort to reduce the amount of time spent focusing the telescope and to improve the quality of the focus, a new procedure has been investigated and implemented at the Canada-France-Hawaii Telescope (CFHT). The new procedure is based on a paper by Tokovinin and Heathcote and requires only two out-of-focus images to determine the best focus for the telescope. Using only two images provides a great time savings over the five or more images required for a standard through-focus sequence. In addition, it has been found that this method is significantly less sensitive to seeing variations than the traditional through-focus procedure, so the quality of the resulting focus is better. Finally, the new procedure relies on a second moment calculation and so is computationally easier and more robust than methods using a FWHM calculation. The new method has been implemented for WIRCam for the past 18 months, for MegaPrime for the past year, and has recently been implemented for ESPaDOnS.
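A minimal sketch of the idea in Python (the real procedure follows Tokovinin & Heathcote and includes further corrections; here the second moment of the image is simply assumed to grow linearly with defocus, and all names are illustrative):

```python
import numpy as np

def rms_radius(img):
    """Second-moment (r.m.s.) radius of an out-of-focus star image."""
    img = np.clip(img - np.median(img), 0, None)     # crude background removal
    y, x = np.indices(img.shape)
    total = img.sum()
    cy, cx = (img * y).sum() / total, (img * x).sum() / total
    return np.sqrt((img * ((y - cy) ** 2 + (x - cx) ** 2)).sum() / total)

def best_focus(z1, img1, z2, img2):
    """Estimate best focus from two defocused images taken at focus positions
    z1 < z_best < z2, assuming the r.m.s. radius grows linearly with defocus."""
    r1, r2 = rms_radius(img1), rms_radius(img2)
    return (z1 * r2 + z2 * r1) / (r1 + r2)
```

With two exposures taken on either side of focus, the zero crossing of this linear model gives the focus estimate without a full through-focus sequence.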
Formal Methods for Verification and Validation of Partial Specifications: A Case Study
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Callahan, John
1997-01-01
This paper describes our work exploring the suitability of formal specification methods for independent verification and validation (IV&V) of software specifications for large, safety critical systems. An IV&V contractor often has to perform rapid analysis on incomplete specifications, with no control over how those specifications are represented. Lightweight formal methods show significant promise in this context, as they offer a way of uncovering major errors, without the burden of full proofs of correctness. We describe a case study of the use of partial formal models for V&V of the requirements for Fault Detection Isolation and Recovery on the space station. We conclude that the insights gained from formalizing a specification are valuable, and it is the process of formalization, rather than the end product that is important. It was only necessary to build enough of the formal model to test the properties in which we were interested. Maintenance of fidelity between multiple representations of the same requirements (as they evolve) is still a problem, and deserves further study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Weixiong; Wang, Yaqi; DeHart, Mark D.
2016-09-01
In this report, we present a new upwinding scheme for the multiscale capability in Rattlesnake, the MOOSE-based radiation transport application. Compared with the initial multiscale implementation, which utilizes Lagrange multipliers to impose strong continuity of the angular flux on the interfaces between subdomains, this scheme does not require a particular domain partitioning. The upwinding scheme introduces discontinuity of the angular flux on the subdomain interfaces and resembles the classic upwinding technique developed for solving the first-order transport equation with the discontinuous finite element method (DFEM). Because this scheme restores the causality of radiation streaming on the interfaces, a significant accuracy improvement can be observed with a moderate increase in the degrees of freedom compared with the continuous method over the entire solution domain. A hybrid SN-PN discretization is implemented and tested with this upwinding scheme. Numerical results show that the angular smoothing required by the Lagrange multiplier method is not necessary for the upwinding scheme.
Rational reduction of periodic propagators for off-period observations.
Blanton, Wyndham B; Logan, John W; Pines, Alexander
2004-02-01
Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
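The key observation is that for an observation step dt = (p/q)·T, a T-periodic Hamiltonian admits only q distinct step propagators, indexed by the starting phase modulo the period. A hedged Python sketch of the reuse follows (building the q step propagators themselves still requires time-ordered integration, which is not shown; the names are illustrative):

```python
import numpy as np

def stroboscopic_signal(step_propagators, p, q, n_obs, rho0, observable):
    """Evolve a density matrix observed every dt = (p/q) * T for a
    T-periodic Hamiltonian, reusing the q distinct step propagators.

    step_propagators[k] must be the propagator over one observation step
    starting at phase k*T/q; the step beginning at time n*dt starts at
    phase ((n*p) mod q) * T/q, so only q matrices are ever needed.
    """
    assert len(step_propagators) == q
    rho = rho0.copy()
    signal = []
    for n in range(n_obs):
        signal.append(np.trace(observable @ rho))
        U = step_propagators[(n * p) % q]     # reuse cached propagator
        rho = U @ rho @ U.conj().T
    return np.array(signal)
```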
Hesford, Andrew J.; Waag, Robert C.
2010-01-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
NASA Astrophysics Data System (ADS)
Hesford, Andrew J.; Waag, Robert C.
2010-10-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
Hesford, Andrew J; Waag, Robert C
2010-10-20
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
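The FFT-accelerated Green's function convolution on a regular grid can be illustrated in a few lines of Python (a toy sketch of that one ingredient only, not the full FMM; the wavenumber and grid spacing are arbitrary assumptions):

```python
import numpy as np

def green_convolve(src, kernel):
    """Convolve source amplitudes on a regular grid with a Green's function
    sampled on the same grid, using zero-padded FFTs (linear convolution)."""
    shape = [s + k - 1 for s, k in zip(src.shape, kernel.shape)]
    full = np.fft.ifftn(np.fft.fftn(src, shape) * np.fft.fftn(kernel, shape))
    start = [(k - 1) // 2 for k in kernel.shape]   # align the zero-offset sample
    crop = tuple(slice(st, st + s) for st, s in zip(start, src.shape))
    return full[crop]

# Toy usage: one point scatterer on a 16^3 grid, free-space Helmholtz kernel
k_wave = 2.0 * np.pi                                   # assumed wavenumber
n, h = 16, 0.1                                         # grid size and spacing
offsets = (np.indices((2 * n - 1,) * 3) - (n - 1)) * h
r = np.sqrt((offsets ** 2).sum(axis=0))
kernel = np.where(r > 0,
                  np.exp(1j * k_wave * r) / (4.0 * np.pi * np.maximum(r, 1e-12)),
                  0.0)
src = np.zeros((n, n, n))
src[n // 2, n // 2, n // 2] = 1.0
field = green_convolve(src, kernel)
```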
A study to explore the use of orbital remote sensing to determine native arid plant distribution
NASA Technical Reports Server (NTRS)
Mcginnies, W. G. (Principal Investigator); Haase, E. F.; Musick, H. B. (Compiler)
1973-01-01
The author has identified the following significant results. A theory has been developed of a method for determining the reflectivities of natural areas from ERTS-1 data. This method requires the following measurements: (1) ground truth reflectivity data from two different calibration areas; (2) radiance data from ERTS-1 MSS imagery for the same two calibration areas; and (3) radiance data from ERTS-1 MSS imagery for the area(s) in which reflectivity is to be determined. The method takes into account sun angle effects and atmospheric effects on the radiance seen by the space sensor. If certain assumptions are made, the ground truth data collection need not be simultaneous with the ERTS-1 overflight. The method allows the calculation of a conversion factor for converting ERTS-1 MSS radiance measurements of a given overflight to reflectivity values. This conversion factor can be used to determine the reflectivity of any area in the general vicinity of the calibration areas which has a relatively similar overlying atmosphere. This method, or some modification of it, may be useful in ERTS investigations which require the determination of spectral signatures of areas from spacecraft data.
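In essence the method is a two-point linear calibration. A short sketch with hypothetical numbers (the gain is assumed to absorb sun-angle and atmospheric transmission effects and the offset the path radiance, as described above):

```python
def reflectance_from_radiance(L, cal):
    """Convert a sensor radiance L to surface reflectivity using two ground
    calibration areas, assuming a linear relation L = a * rho + b.

    cal = ((rho1, L1), (rho2, L2)): known reflectivities and the radiances
    measured over the two calibration areas in the same scene and band."""
    (rho1, L1), (rho2, L2) = cal
    a = (L1 - L2) / (rho1 - rho2)   # gain: sun angle and atmospheric transmission
    b = L1 - a * rho1               # offset: path radiance
    return (L - b) / a

# e.g., calibration areas of 10% and 45% reflectivity (hypothetical numbers)
print(reflectance_from_radiance(1.8, ((0.10, 1.1), (0.45, 2.9))))
```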
Elevator ride comfort monitoring and evaluation using smartphones
NASA Astrophysics Data System (ADS)
Zhang, Yang; Sun, Xiaowei; Zhao, Xuefeng; Su, Wensheng
2018-05-01
With rapid urbanization, the demand for elevators is increasing, and their level of safety and ride comfort under vibrating conditions has also aroused interest. It is therefore essential to monitor the ride comfort level of elevators. The traditional method for such monitoring depends significantly on regular professional inspections, and requires expensive equipment and professional skill. In this regard, a new method for elevator ride comfort monitoring using a smartphone is demonstrated herein in detail. A variety of high-precision sensors are installed in a smartphone with strong data processing and telecommunication capabilities. A series of validation tests were designed and completed, and the International Organization for Standardization standard ISO 2631-1997 was applied to evaluate the level of elevator ride comfort. Experimental results indicate that the proposed method is stable and reliable, its precision meets the engineering requirements, and the elevator ride comfort level can be accurately monitored under various situations. The method is very economical and convenient, and makes it possible for the public to participate in elevator ride comfort monitoring. In addition, the method can both provide a wide range of data support and eliminate data errors to a certain extent.
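The ISO 2631-1 evaluation reduces to a frequency-weighted r.m.s. acceleration. The sketch below uses a deliberately crude stand-in for the Wk weighting curve (the standard's filter is more elaborate), with a synthetic trace in place of real smartphone data:

```python
import numpy as np

def weighted_rms(accel, fs):
    """Frequency-weighted r.m.s. acceleration (m/s^2) from an accelerometer
    trace. The weighting is a crude stand-in for the ISO 2631-1 Wk curve:
    zero below 0.5 Hz, rising up to 4 Hz, unity from 4 to 8 Hz, and a 1/f
    roll-off above 8 Hz."""
    n = len(accel)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    A = np.fft.rfft(accel - np.mean(accel))
    w = np.ones_like(f)
    w[f < 0.5] = 0.0
    band = (f >= 0.5) & (f < 4.0)
    w[band] = f[band] / 4.0
    high = f > 8.0
    w[high] = 8.0 / f[high]
    a_w = np.fft.irfft(A * w, n)
    return float(np.sqrt(np.mean(a_w ** 2)))

# Synthetic 20 s vertical-acceleration trace sampled at 100 Hz
fs = 100.0
t = np.arange(0.0, 20.0, 1.0 / fs)
accel = 0.2 * np.sin(2 * np.pi * 5.0 * t) + 0.05 * np.random.randn(t.size)
print(round(weighted_rms(accel, fs), 3), "m/s^2")
```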
Mitigating reentry radio blackout by using a traveling magnetic field
NASA Astrophysics Data System (ADS)
Zhou, Hui; Li, Xiaoping; Xie, Kai; Liu, Yanming; Yu, Yuanyuan
2017-10-01
A hypersonic flight or a reentry vehicle is surrounded by a plasma layer that prevents electromagnetic wave transmission, which results in radio blackout. The magnetic-window method is considered a promising means to mitigate reentry communication blackout. However, the real application of this method is limited because of the need for strong magnetic fields. To reduce the required magnetic field strength, a novel method that applies a traveling magnetic field (TMF) is proposed in this study. A mathematical model based on magneto-hydrodynamic theory is adopted to analyze the effect of TMF on plasma. The mitigating effects of the TMF on the blackout of typical frequency bands, including L-, S-, and C-bands, are demonstrated. Results indicate that a significant reduction of plasma density occurs in the magnetic-window region by applying a TMF, and the reduction ratio is positively correlated with the velocity of the TMF. The required traveling velocities for eliminating the blackout of the Global Positioning System (GPS) and the typical telemetry system are also discussed. Compared with the constant magnetic-window method, the TMF method needs lower magnetic field strength and is easier to realize in the engineering field.
Cassini, Rudi; Scremin, Mara; Contiero, Barbara; Drago, Andrea; Vettorato, Christian; Marcer, Federica; di Regalbono, Antonio Frangipane
2016-06-01
Ambient insecticides are receiving increasing attention in many developed countries because of their value in reducing mosquito nuisance. As required by the European Union Biocidal Products Regulation 528/2012, these devices require appropriate testing of their efficacy, which is based on estimating the knockdown and mortality rates of free-flying (free) mosquitoes in a test room. However, evaluations using free mosquitoes present many complexities. The performance of 6 alternative methods, with mosquitoes held in 2 different cage designs (steel wire and gauze/plastic) with and without an operating fan for air circulation, was monitored in a test room through a closed-circuit television system and compared with the currently recommended method using free mosquitoes. Results for caged mosquitoes without a fan showed a clearly delayed knockdown effect, whereas outcomes for caged mosquitoes with a fan recorded higher mortality at 24 h compared to free mosquitoes. Among the 6 methods, the use of gauze/plastic cages with a fan wind speed of 2.5-2.8 m/sec was the only one without a significant difference from the results for free mosquitoes, and it therefore appears to be the best alternative for accurately assessing knockdown by ambient insecticides.
Spatial Statistics for Tumor Cell Counting and Classification
NASA Astrophysics Data System (ADS)
Wirjadi, Oliver; Kim, Yoo-Jin; Breuel, Thomas
To count and classify cells in histological sections is a standard task in histology. One example is the grading of meningiomas, benign tumors of the meninges, which requires to assess the fraction of proliferating cells in an image. As this process is very time consuming when performed manually, automation is required. To address such problems, we propose a novel application of Markov point process methods in computer vision, leading to algorithms for computing the locations of circular objects in images. In contrast to previous algorithms using such spatial statistics methods in image analysis, the present one is fully trainable. This is achieved by combining point process methods with statistical classifiers. Using simulated data, the method proposed in this paper will be shown to be more accurate and more robust to noise than standard image processing methods. On the publicly available SIMCEP benchmark for cell image analysis algorithms, the cell count performance of the present paper is significantly more accurate than results published elsewhere, especially when cells form dense clusters. Furthermore, the proposed system performs as well as a state-of-the-art algorithm for the computer-aided histological grading of meningiomas when combined with a simple k-nearest neighbor classifier for identifying proliferating cells.
NASA Astrophysics Data System (ADS)
Triadhi, U.; Zulfikar, M. A.; Setiyanto, H.; Amran, M. B.
2018-05-01
MISPE (molecularly imprinted solid-phase extraction) is a separation technique that uses a solid adsorbent based on the principle of molecular imprinting (MI). Methacrylic acid (MAA) was used as the monomer, ethylene glycol dimethacrylate (EGDMA) as the crosslinker, benzoyl peroxide (BPO) as the initiator and acetonitrile (ACN) as the porogen. Catechin was used as the template. Thermal and microwave methods were employed for the synthesis. When analyzed using FTIR spectra, it was found that there were no significant differences between the NIP (non-imprinted polymer) produced by the thermal method and that produced by the microwave method. Preparation of the polymers by the microwave method required 4 min at 60-65 °C, significantly less than the thermal method, which took 60 minutes at the same temperature. Variations in the mole ratios of the monomer, the crosslinker, and the initiator were also performed. Based on the FTIR spectra, the intensity of some peaks changed as the concentrations decreased. The optimum composition for NIP synthesis was a MAA:EGDMA:BPO ratio of 5:30:0.5 (in mmole). The TGA curve showed that the NIP synthesized using the microwave method experienced a mass loss of around 98.50% at 604.8 °C.
Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M
2015-03-01
Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a data-analytic method to analyze univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.
Predictors of no-scalpel vasectomy acceptance in Karimnagar district, Andhra Pradesh.
Valsangkar, Sameer; Sai, Surendranath K; Bele, Samir D; Bodhare, Trupti N
2012-07-01
Karimnagar District has consistently achieved the highest rates of no-scalpel vasectomy (NSV) in the past decade when compared to state and national rates. This study was conducted to elucidate the underlying causes for the higher acceptance of NSV in the district. A community-based, case control study was conducted. The sampling techniques used were purposive and simple random sampling. A semi-structured questionnaire was used to evaluate the socio-demographic and family characteristics, contraceptive history and predictors of contraceptive choice in 116 NSV acceptors and 120 other contraceptive users (OCUs). Postoperative complications and experiences were ascertained in NSV acceptors. Age (χ²=11.79, P value = 0.008), literacy (χ²=17.95, P value = 0.03), duration of marriage (χ²=14.23, P value = 0.008) and number of children (χ²=10.45, P value = 0.01) were significant for acceptance of NSV. Among the predictors, method suggested by peer/health worker (OR = 1.5, P value = 0.01), method does not require regular intervention (OR = 1.3, P value = 0.004) and permanence of the method (OR = 1.2, P value = 0.031) were significant. Acceptors were most satisfied with the shorter duration required to return to work, and the most common complication was persistent postoperative pain in 12 (10.34%) of the acceptors. Advocating and implementing family planning is of high significance in view of the population growth in India; drawing from the demographic profile, predictors, pool of trainers and experiences in Karimnagar District, a similar achievement of higher rates of this simple procedure, with few complications, can be replicated.
Meaney, Peter A.; Sutton, Robert M.; Tsima, Billy; Steenhoff, Andrew P.; Shilkofski, Nicole; Boulet, John R.; Davis, Amanda; Kestler, Andrew M.; Church, Kasey K.; Niles, Dana E.; Irving, Sharon Y.; Mazhani, Loeto; Nadkarni, Vinay M.
2013-01-01
Objective Globally, one third of deaths each year are from cardiovascular diseases, yet no strong evidence supports any specific method of CPR instruction in a resource-limited setting. We hypothesized that both existing and novel CPR training programs significantly impact skills of hospital-based healthcare providers (HCP) in Botswana. Methods HCP were prospectively randomized to 3 training groups: instructor led, limited instructor with manikin feedback, or self-directed learning. Data were collected prior to training, immediately after and at 3 and 6 months. Excellent CPR was prospectively defined as having at least 4 of 5 characteristics: depth, rate, release, no flow fraction, and no excessive ventilation. GEE was performed to account for within-subject correlation. Results Of 214 HCP trained, 40% resuscitate ≥1/month, 28% had previous formal CPR training, and 65% required additional skills remediation to pass using AHA criteria. Excellent CPR skill acquisition was significant (infant: 32% vs. 71%, p < 0.01; adult 28% vs. 48%, p < 0.01). Infant CPR skill retention was significant at 3 (39% vs. 70%, p < 0.01) and 6 months (38% vs. 67%, p < 0.01), and adult CPR skills were retained to 3 months (34% vs. 51%, p = 0.02). On multivariable analysis, low cognitive score and need for skill remediation, but not instruction method, impacted CPR skill performance. Conclusions HCP in resource-limited settings resuscitate frequently, with little CPR training. Using existing training, HCP acquire and retain skills, yet often require remediation. Novel techniques with an increased student-to-instructor ratio and feedback manikins were not different compared to traditional instruction. PMID:22561463
Chronic nailbiting: a controlled comparison of competing response and mild aversion treatments.
Allen, K W
1996-03-01
Recent studies have suggested that competing response, an abridged version of Azrin and Nunn's (1973) habit reversal method (Behaviour Research and Therapy, 11, 619-628), is a key component in the treatment of chronic nailbiting (Horne & Wilkinson, 1980, Behaviour Research and Therapy, 18, 287-291; Silber & Haynes, 1992, Behaviour Research and Therapy, 30, 15-22). This study replicated and extended the latter by adding an 8 week follow-up period and by using a non-student sample. Forty-five chronic nailbiter Ss were divided into three experimental groups. One method involved the use of mild aversion in which Ss painted a bitter substance on their nails. A second method required the subject to perform a competing response whenever they had the urge to nailbite or found themselves biting their nails. Both methods included self-monitoring of the behaviour and a third group of Ss performed self-monitoring alone as a control condition. The study lasted 12 weeks. Mild aversion resulted in significant improvements in nail length, with the competing response method just failing to show significance in this regard. There was no significant improvement for the control group. The implications for further study and the benefits of competing response in the light of these findings are discussed in terms of treatment success and use of therapist time.
NASA Astrophysics Data System (ADS)
Han, Dongmei; Xu, Xinyi; Yan, Denghua
2016-04-01
In recent years, global climate change has contributed to a serious water resources crisis throughout the world. Climate change, mainly through variations in temperature, will also affect crop water requirements: rising temperature directly affects the growing period and phenological stages of crops and thus changes their water demand quota. Methods based on accumulated temperature thresholds and climatic tendency rates were adopted, compensating for the scarcity of phenological observations, to reveal the response of crop phenology during the growing period. Then, using the Penman-Monteith model and crop coefficients from the United Nations Food and Agriculture Organization (FAO), the paper first estimated crop water requirements in different growth periods and then quantitatively forecasted crop water requirements in the Heihe River Basin, China under different climate change scenarios. Results indicate that: (i) the crop phenological changes derived with the accumulated temperature threshold method agreed with measured results; (ii) the impacts of climate warming on water requirements differed among crops, and the growth periods of wheat and corn tended to shorten; (iii) under a 1 °C temperature increase, the start of the wheat growth period advanced by 2 days and the total growth period shortened by 2 days, while wheat water requirements increased by 1.4 mm; corn water requirements decreased by almost 0.9 mm, with the start of the corn growth period advancing by 3 days and the total growth period shortening by 4 days. Therefore, the contradiction between water supply and water demand is likely to become more pronounced under future climate warming in the Heihe River Basin, China.
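A minimal sketch of the FAO-56 crop-coefficient step described above (ETc = Kc × ET0 summed over growth stages); the stage lengths, crop coefficients and reference-evapotranspiration values are placeholders, and the accumulated-temperature phenology step is not reproduced.

```python
# Sketch of the FAO-56 crop-coefficient approach used to turn reference
# evapotranspiration (ET0, e.g. from Penman-Monteith) into a crop water requirement:
# ETc = Kc * ET0, summed over growth stages. All numbers are illustrative.
stages = [  # (stage, length in days, crop coefficient Kc, mean daily ET0 in mm/day)
    ("initial",     25, 0.35, 3.0),
    ("development", 30, 0.75, 4.5),
    ("mid-season",  40, 1.15, 5.5),
    ("late-season", 25, 0.45, 4.0),
]

def crop_water_requirement(stages, shorten_days=0):
    """Total ETc in mm; optionally shorten the end of the season to mimic a warmer climate."""
    total, remaining_cut = 0.0, shorten_days
    for name, days, kc, et0 in reversed(stages):
        cut = min(days, remaining_cut)
        remaining_cut -= cut
        total += (days - cut) * kc * et0
    return total

print("baseline requirement      :", crop_water_requirement(stages), "mm")
print("with 2-day shorter season :", crop_water_requirement(stages, shorten_days=2), "mm")
```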
Big questions, big science: meeting the challenges of global ecology.
Schimel, David; Keller, Michael
2015-04-01
Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment, and larger data sets, than can be collected by a single investigator's or a group of investigators' labs, sustained for longer than a typical grant. Large-scale projects are expensive, so their scientific return on the investment has to justify the opportunity cost, that is, the science forgone because resources were expended on a large project rather than supporting a number of individual projects. In addition, their management must be accountable and efficient in the use of significant resources, requiring the use of formal systems engineering and project management to mitigate the risk of failure. Mapping the scientific method into formal project management requires both scientists able to work in that context and a project implementation team sensitive to the unique requirements of ecology. Sponsoring agencies experience many external and internal pressures that push them towards counterproductive project management, but a scientific community aware of and experienced in large-project science can mitigate these tendencies. For big ecology to result in great science, ecologists must become informed, aware and engaged in the advocacy and governance of large ecological projects.
NASA Astrophysics Data System (ADS)
Tay, Wei Choon; Tan, Eng Leong
2014-07-01
In this paper, we have proposed a pentadiagonal alternating-direction-implicit (Penta-ADI) finite-difference time-domain (FDTD) method for the two-dimensional Schrödinger equation. Through the separation of the complex wave function into real and imaginary parts, a pentadiagonal system of equations for the ADI method is obtained, which results in our Penta-ADI method. The Penta-ADI method is further simplified into the pentadiagonal fundamental ADI (Penta-FADI) method, which has matrix-operator-free right-hand sides (RHS), leading to the simplest and most concise update equations. As the Penta-FADI method involves five-point stencils on the left-hand sides (LHS) of the pentadiagonal update equations, the special treatments required for the implementation of Dirichlet boundary conditions will be discussed. Using the Penta-FADI method, a significantly higher efficiency gain can be achieved over the conventional Tri-ADI method, which involves a tridiagonal system of equations.
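The implicit sweeps of a pentadiagonal ADI scheme reduce to solving pentadiagonal linear systems; below is a minimal sketch of such a solve with SciPy's banded solver on a made-up diagonally dominant matrix, not the authors' FDTD update equations.

```python
# Sketch: solving a pentadiagonal system A x = b with scipy.linalg.solve_banded,
# the kind of kernel an implicit pentadiagonal ADI sweep requires at each step.
# The matrix here is a made-up diagonally dominant pentadiagonal operator.
import numpy as np
from scipy.linalg import solve_banded

n = 8
main = 6.0 * np.ones(n)
off1 = -2.0 * np.ones(n - 1)   # first off-diagonals
off2 = 0.5 * np.ones(n - 2)    # second off-diagonals

# Banded storage for solve_banded: rows are (upper-2, upper-1, main, lower-1, lower-2),
# padded so every row has length n.
ab = np.zeros((5, n))
ab[0, 2:] = off2
ab[1, 1:] = off1
ab[2, :] = main
ab[3, :-1] = off1
ab[4, :-2] = off2

b = np.arange(1.0, n + 1.0)
x = solve_banded((2, 2), ab, b)

# Check against a dense solve.
A = (np.diag(main) + np.diag(off1, 1) + np.diag(off1, -1)
     + np.diag(off2, 2) + np.diag(off2, -2))
print(np.allclose(A @ x, b))   # True
```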
NASA Astrophysics Data System (ADS)
Jimichi, Takushi; Fujita, Hideaki; Akagi, Hirofumi
This paper deals with a dynamic voltage restorer (DVR) characterized by installing the shunt converter at the load side. The DVR can compensate for the load voltage when a voltage sag appears in the supply voltage. An existing DVR requires a large capacitor bank or other energy-storage elements such as double-layer capacitors or batteries. The DVR presented in this paper requires only a small dc capacitor intended for smoothing the dc-link voltage. Moreover, three control methods for the series converter are compared and discussed to reduce the series-converter rating, paying attention to the zero-sequence voltages included in the supply voltage and the compensating voltage. Experimental results obtained from a 200-V, 5-kW laboratory system are shown to verify the viability of the system configuration and the control methods.
Banasiuk, Rafał; Frackowiak, Joanna E; Krychowiak, Marta; Matuszewska, Marta; Kawiak, Anna; Ziabka, Magdalena; Lendzion-Bielun, Zofia; Narajczyk, Magdalena; Krolicka, Aleksandra
2016-01-01
A fast, economical, and reproducible method for nanoparticle synthesis has been developed in our laboratory. The reaction is performed in an aqueous environment and utilizes light emitted by commercially available 1 W light-emitting diodes (λ =420 nm) as the catalyst. This method does not require nanoparticle seeds or toxic chemicals. The irradiation process is carried out for a period of up to 10 minutes, significantly reducing the time required for synthesis as well as environmental impact. By modulating various reaction parameters silver nanoparticles were obtained, which were predominantly either spherical or cubic. The produced nanoparticles demonstrated strong antimicrobial activity toward the examined bacterial strains. Additionally, testing the effect of silver nanoparticles on the human keratinocyte cell line and human peripheral blood mononuclear cells revealed that their cytotoxicity may be limited by modulating the employed concentrations of nanoparticles.
Weng, Xiao-chuan; Zhou, Liang; Fu, Yin-yan; Zhu, Sheng-mei; He, Hui-liang; Wu, Jian
2005-01-01
Objective: To compare the dose requirements of continuous infusion of rocuronium and atracurium throughout orthotopic liver transplantation (OLT) in humans. Methods: Twenty male patients undergoing liver transplantation were randomly assigned to two comparable groups of 10 patients each to receive a continuous infusion of rocuronium or atracurium under intravenous balanced anesthesia. The response of the adductor pollicis to train-of-four (TOF) stimulation of the ulnar nerve was monitored. The infusion rates of rocuronium and atracurium were adjusted to maintain a T1/Tc ratio of 2%~10%. The total dose of each drug given during each of the three phases of OLT was recorded. Results: Rocuronium requirements, which were (0.468±0.167) mg/(kg·h) during the paleohepatic phase, decreased significantly during the anhepatic phase to (0.303±0.134) mg/(kg·h) and returned to the initial values in the neohepatic period ((0.429±0.130) mg/(kg·h)), whereas atracurium requirements remained unchanged during orthotopic liver transplantation. Conclusions: This study showed that the exclusion of the liver from the circulation results in a significantly reduced requirement for rocuronium while the requirement for atracurium is unchanged, which suggests that the liver is of major importance in the clearance of rocuronium. A continuous infusion of atracurium at a constant rate can provide stable neuromuscular blockade during the three stages of OLT. PMID:16130187
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yost, Shane R.; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720
2016-08-07
In this paper we introduce two size consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. VanVoorh, J. Chem. Phys. 193, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent. We also show that this causes significant errors in large systems like the linear acenes. By contrast, the size consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis.
Priorities for development of research methods in occupational cancer.
Ward, Elizabeth M; Schulte, Paul A; Bayard, Steve; Blair, Aaron; Brandt-Rauf, Paul; Butler, Mary Ann; Dankovic, David; Hubbs, Ann F; Jones, Carol; Karstadt, Myra; Kedderis, Gregory L; Melnick, Ronald; Redlich, Carrie A; Rothman, Nathaniel; Savage, Russell E; Sprinker, Michael; Toraason, Mark; Weston, Ainsley; Olshan, Andrew F; Stewart, Patricia; Zahm, Sheila Hoar
2003-01-01
Occupational cancer research methods was identified in 1996 as 1 of 21 priority research areas in the National Occupational Research Agenda (NORA). To implement NORA, teams of experts from various sectors were formed and given the charge to further define research needs and develop strategies to enhance or augment research in each priority area. This article is a product of that process. Focus on occupational cancer research methods is important both because occupational factors play a significant role in a number of cancers, resulting in significant morbidity and mortality, and also because occupational cohorts (because of higher exposure levels) often provide unique opportunities to evaluate health effects of environmental toxicants and understand the carcinogenic process in humans. Despite an explosion of new methods for cancer research in general, these have not been widely applied to occupational cancer research. In this article we identify needs and gaps in occupational cancer research methods in four broad areas: identification of occupational carcinogens, design of epidemiologic studies, risk assessment, and primary and secondary prevention. Progress in occupational cancer will require interdisciplinary research involving epidemiologists, industrial hygienists, toxicologists, and molecular biologists. PMID:12524210
Transumbilical single port laparoscopic surgery for the treatment of concomitant disease.
Lee, Jun Suh; Hong, Tae Ho; Park, Byung Joon; Kim, Jin Jo
2013-06-01
We report our experience of transumbilical single port laparoscopic surgery (TUSPLS) for multiple concomitant intraabdominal pathologies and assess the feasibility of this technique, with several technical tips. Various combined procedures using TUSPLS have been performed since April 2008. All records of concomitant laparoscopic procedures using TUSPLS at three hospitals were searched. Forty-one patients underwent 82 combined procedures using TUSPLS in a single session. The perioperative outcomes of simultaneously performed cholecystectomy and ovarian cystectomy using TUSPLS (n = 14) were compared with those using conventional laparoscopic surgery (CLS) (n = 11). The operating time was significantly longer with the TUSPLS method than with the CLS method. However, postoperative convalescent outcomes such as postoperative hospital stay, VAS pain score, and required analgesics showed no differences between the two methods. Also, there were no significant operative complications associated with either method. Fewer trocars were used with the TUSPLS method. Combined laparoscopic procedures for various concomitant pathologies in the abdomen can be performed using transumbilical single port laparoscopic surgery without increasing morbidity or hospital stay in patients with acceptable risk.
SSAW: A new sequence similarity analysis method based on the stationary discrete wavelet transform.
Lin, Jie; Wei, Jing; Adjeroh, Donald; Jiang, Bing-Hua; Jiang, Yue
2018-05-02
Alignment-free sequence similarity analysis methods often lead to significant savings in computational time over alignment-based counterparts. A new alignment-free sequence similarity analysis method, called SSAW, is proposed. SSAW stands for Sequence Similarity Analysis using the Stationary Discrete Wavelet Transform (SDWT). It extracts k-mers from a sequence, then maps each k-mer to a complex number field. The series of complex numbers so formed is then transformed into feature vectors using the stationary discrete wavelet transform. After these steps, the original sequence is turned into a feature vector with numeric values, which can then be used for clustering and/or classification. Using two different types of applications, namely clustering and classification, we compared SSAW against state-of-the-art alignment-free sequence analysis methods. SSAW demonstrates competitive or superior performance in terms of standard indicators, such as accuracy, F-score, precision, and recall. The running time was significantly better in most cases. These results make SSAW a suitable method for sequence analysis, especially given the rapidly increasing volumes of sequence data required by most modern applications.
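A minimal sketch of the pipeline shape described above (sequence → complex-valued series → stationary wavelet transform → numeric feature vector → similarity); the per-base complex mapping, wavelet, decomposition level and summary statistics are assumptions for illustration, not the published SSAW settings.

```python
# Sketch of the SSAW idea: DNA -> complex-valued series -> stationary wavelet
# transform -> numeric feature vector -> similarity. The nucleotide-to-complex
# mapping, the wavelet ('db1') and the level (2) are illustrative assumptions.
import numpy as np
import pywt

MAP = {'A': 1 + 1j, 'C': -1 + 1j, 'G': -1 - 1j, 'T': 1 - 1j}   # assumed mapping

def swt_features(seq, level=2, wavelet='db1'):
    z = np.array([MAP[b] for b in seq.upper()])
    pad = (-len(z)) % (2 ** level)          # SWT needs length divisible by 2**level
    z = np.pad(z, (0, pad))
    feats = []
    for part in (z.real, z.imag):
        for cA, cD in pywt.swt(part, wavelet, level=level):
            feats += [cA.mean(), cA.std(), cD.mean(), cD.std()]
    return np.array(feats)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = "ATGGCGTACGTTAGCATGCCGTA"
s2 = "ATGGCGTACGTTAGCATGCCGAA"   # one substitution
print(cosine(swt_features(s1), swt_features(s2)))
```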
Incorporating the gas analyzer response time in gas exchange computations.
Mitchell, R R
1979-11-01
A simple method for including the gas analyzer response time in the breath-by-breath computation of gas exchange rates is described. The method uses a difference equation form of a model for the gas analyzer in the computation of oxygen uptake and carbon dioxide production and avoids a numerical differentiation required to correct the gas fraction wave forms. The effect of not accounting for analyzer response time is shown to be a 20% underestimation in gas exchange rate. The present method accurately measures gas exchange rate, is relatively insensitive to measurement errors in the analyzer time constant, and does not significantly increase the computation time.
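A small simulation illustrating why ignoring the analyzer lag biases breath-by-breath gas exchange, using an assumed first-order difference-equation model of the analyzer and illustrative square-wave signals; it demonstrates only the direction of the bias and is not the paper's correction algorithm.

```python
# Illustrative simulation (not the paper's algorithm): a first-order gas analyzer
# lags the O2 fraction signal, so computing O2 uptake from the lagged fraction and
# the instantaneous flow biases the result low. Time constant and waveforms are
# assumed values for illustration only.
import numpy as np

dt, tau = 0.01, 0.30              # s; assumed analyzer time constant
t_breath, n_breaths = 4.0, 10
t = np.arange(0, n_breaths * t_breath, dt)
inhaling = (t % t_breath) < (t_breath / 2)

flow = np.where(inhaling, 1.0, -1.0)        # L/s, square-wave flow
f_true = np.where(inhaling, 0.21, 0.16)     # true O2 fraction at the mouth

# First-order analyzer model as a difference equation:
#   f_meas[n] = f_meas[n-1] + (dt/tau) * (f_true[n-1] - f_meas[n-1])
f_meas = np.empty_like(f_true)
f_meas[0] = f_true[0]
for n in range(1, len(t)):
    f_meas[n] = f_meas[n - 1] + (dt / tau) * (f_true[n - 1] - f_meas[n - 1])

def vo2(fraction):
    # O2 uptake rate = (O2 inspired - O2 expired) per unit time
    return np.sum(flow * fraction * dt) / t[-1]

print("true VO2 rate   :", vo2(f_true))
print("uncorrected VO2 :", vo2(f_meas))     # biased low because of the analyzer lag
```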
NASA Astrophysics Data System (ADS)
Sharan, A. M.; Sankar, S.; Sankar, T. S.
1982-08-01
A new approach for the calculation of response spectral density for a linear stationary random multidegree of freedom system is presented. The method is based on modifying the stochastic dynamic equations of the system by using a set of auxiliary variables. The response spectral density matrix obtained by using this new approach contains the spectral densities and the cross-spectral densities of the system generalized displacements and velocities. The new method requires significantly less computation time as compared to the conventional method for calculating response spectral densities. Two numerical examples are presented to compare quantitatively the computation time.
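For context, a minimal sketch of the conventional frequency-domain computation that the auxiliary-variable approach is compared against, S_yy(ω) = H(ω) S_ff H(ω)^H with H(ω) = (K − ω²M + iωC)⁻¹; the 2-DOF matrices and force spectrum are made-up values.

```python
# Sketch of the conventional computation of the response spectral density matrix
# for a linear MDOF system under stationary random excitation:
#   S_yy(w) = H(w) S_ff H(w)^H,   H(w) = (K - w^2 M + i w C)^-1
# The 2-DOF matrices and the (white) force spectrum are made-up values.
import numpy as np

M = np.diag([1.0, 2.0])
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])
C = 0.02 * K                          # assumed stiffness-proportional damping
S_ff = np.diag([1.0, 0.5])            # constant (white) force PSD matrix

omega = np.linspace(0.1, 30.0, 500)
S_yy = np.empty((len(omega), 2, 2), dtype=complex)
for k, w in enumerate(omega):
    H = np.linalg.inv(K - (w ** 2) * M + 1j * w * C)   # frequency response matrix
    S_yy[k] = H @ S_ff @ H.conj().T

# Diagonal terms are displacement PSDs; off-diagonal terms are cross-spectra.
print(S_yy[:, 0, 0].real.max())
```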
Improved profiling of estrogen metabolites by orbitrap LC/MS
Li, Xingnan; Franke, Adrian A.
2015-01-01
Estrogen metabolites are important biomarkers to evaluate cancer risks and metabolic diseases. Due to their low physiological levels, a sensitive and accurate method is required, especially for the quantitation of unconjugated forms of endogenous steroids and their metabolites in humans. Here, we evaluated various derivatives of estrogens for improved analysis by orbitrap LC/MS in human serum samples. A new chemical derivatization reagent was applied modifying phenolic steroids to form 1-methylimidazole-2-sulfonyl adducts. The method significantly improves the sensitivity 2–100 fold by full scan MS and targeted selected ion monitoring MS over other derivatization methods including, dansyl, picolinoyl, and pyridine-3-sulfonyl products. PMID:25543003
In situ methods for Li-ion battery research: A review of recent developments
NASA Astrophysics Data System (ADS)
Harks, P. P. R. M. L.; Mulder, F. M.; Notten, P. H. L.
2015-08-01
A considerable amount of research is being directed towards improving lithium-ion batteries in order to meet today's market demands. In particular in situ investigations of Li-ion batteries have proven extremely insightful, but require the electrochemical cell to be fully compatible with the conditions of the testing method and are therefore often challenging to execute. Advantageously, in the past few years significant progress has been made with new, more advanced, in situ techniques. Herein, a comprehensive overview of in situ methods for studying Li-ion batteries is given, with the emphasis on new developments and reported experimental highlights.
Improved Frame Mode Selection for AMR-WB+ Based on Decision Tree
NASA Astrophysics Data System (ADS)
Kim, Jong Kyu; Kim, Nam Soo
In this letter, we propose a coding mode selection method for the AMR-WB+ audio coder based on a decision tree. In order to reduce computation while maintaining good performance, a decision tree classifier is adopted with the closed-loop mode selection results as the target classification labels. The size of the decision tree is controlled by pruning, so the proposed method does not increase the memory requirement significantly. Through an evaluation test on a database covering both speech and music materials, the proposed method is found to achieve much better mode selection accuracy than the open-loop mode selection module in the AMR-WB+.
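A minimal sketch of the idea: train a decision tree on per-frame features with closed-loop mode decisions as labels, and keep the tree small by pruning; the features, labels and pruning parameter below are synthetic stand-ins, not the AMR-WB+ feature set.

```python
# Sketch of decision-tree frame mode selection: per-frame features are classified
# into coding modes, with closed-loop selections used as training labels and
# cost-complexity pruning keeping the tree (and its memory footprint) small.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 2000
features = np.column_stack([
    rng.normal(size=n),      # e.g. frame energy (assumed feature)
    rng.normal(size=n),      # e.g. spectral tilt (assumed feature)
    rng.normal(size=n),      # e.g. pitch-gain proxy (assumed feature)
])
# Synthetic "closed-loop" labels: 0 = ACELP-like, 1 = TCX-like
labels = (features[:, 0] + 0.5 * features[:, 1]
          + rng.normal(scale=0.5, size=n) > 0).astype(int)

tree = DecisionTreeClassifier(ccp_alpha=0.005, random_state=0)  # pruning keeps it small
tree.fit(features[:1500], labels[:1500])
print("open-loop style accuracy:", tree.score(features[1500:], labels[1500:]))
print("number of tree nodes    :", tree.tree_.node_count)
```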
Recording 2-D Nutation NQR Spectra by Random Sampling Method
Sinyavsky, Nikolaj; Jadzyn, Maciej; Ostafin, Michal; Nogaj, Boleslaw
2010-01-01
The method of random sampling was introduced for the first time in the nutation nuclear quadrupole resonance (NQR) spectroscopy where the nutation spectra show characteristic singularities in the form of shoulders. The analytic formulae for complex two-dimensional (2-D) nutation NQR spectra (I = 3/2) were obtained and the condition for resolving the spectral singularities for small values of an asymmetry parameter η was determined. Our results show that the method of random sampling of a nutation interferogram allows significant reduction of time required to perform a 2-D nutation experiment and does not worsen the spectral resolution. PMID:20949121
A VLSI architecture for performing finite field arithmetic with reduced table look-up
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Reed, I. S.
1986-01-01
A new table look-up method for finding the log and antilog of finite field elements has been developed by N. Glover. In his method, the log and antilog of a field element is found by the use of several smaller tables. The method is based on a use of the Chinese Remainder Theorem. The technique often results in a significant reduction in the memory requirements of the problem. A VLSI architecture is developed for a special case of this new algorithm to perform finite field arithmetic including multiplication, division, and the finding of an inverse element in the finite field.
Aeronautical Industry Requirements for Titanium Alloys
NASA Astrophysics Data System (ADS)
Bran, D. T.; Elefterie, C. F.; Ghiban, B.
2017-06-01
The project presents the requirements imposed on aviation components made from titanium-based alloys. A significant portion of aircraft pylons are manufactured from titanium alloys. Strength, weight, and reliability are the primary factors to consider in aircraft structures. These factors determine the requirements to be met by any material used to construct or repair the aircraft. Many forces and structural stresses act on an aircraft when it is flying and when it is static, and this thesis describes the environmental factors, conditions of external aggression, mechanical characteristics and loadings that must be satisfied simultaneously by a Ti-based alloy, compared to other classes of aviation alloys (e.g., Inconel superalloys, aluminum alloys). For this alloy class, the requirements concern strength-to-weight ratio, reliability, corrosion resistance, thermal expansion and so on. These characteristics additionally continue to provide new opportunities for advanced manufacturing methods.
[Cancer nursing care education programs: the effectiveness of different teaching methods].
Cheng, Yun-Ju; Kao, Yu-Hsiu
2012-10-01
In-service education affects the quality of cancer care directly. Using classroom teaching to deliver in-service education is often ineffective due to participants' large workload and shift requirements. This study evaluated the learning effectiveness of different teaching methods in the dimensions of knowledge, attitude, and learning satisfaction. This study used a quasi-experimental design. Participants were cancer ward nurses working at one medical center in northern Taiwan. Participants were divided into an experimental group and a control group. The experimental group took an e-learning course and the control group took a standard classroom course using the same basic course material. Researchers evaluated the learning efficacy of each group using a questionnaire based on the quality of cancer nursing care learning effectiveness scale. All participants answered the questionnaire once before and once after completing the course. (1) Post-test "knowledge" scores for both groups were significantly higher than pre-test scores. Post-test "attitude" scores were significantly higher for the control group, while the experimental group reported no significant change. (2) After a covariance analysis of the pre-test scores for both groups, the post-test score for the experimental group was significantly lower than that of the control group in the knowledge dimension. Post-test scores did not differ significantly from pre-test scores for either group in the attitude dimension. (3) Post-test satisfaction scores between the two groups did not differ significantly with regard to teaching methods. The e-learning method, however, was demonstrated to be more flexible than the classroom teaching method. Study results demonstrate the importance of employing a variety of teaching methods to instruct clinical nursing staff. We suggest that both classroom teaching and e-learning instruction methods be used to enhance the quality of cancer nursing care education programs. We also recommend that interactivity between student and instructor be incorporated into e-learning course designs to enhance effectiveness.
Evaluating Practice-Based Learning and Improvement: Efforts to Improve Acceptance of Portfolios
Fragneto, Regina Y.; DiLorenzo, Amy Noel; Schell, Randall M.; Bowe, Edwin A.
2010-01-01
Introduction The Accreditation Council for Graduate Medical Education (ACGME) recommends resident portfolios as 1 method for assessing competence in practice-based learning and improvement. In July 2005, when anesthesiology residents in our department were required to start a portfolio, the residents and their faculty advisors did not readily accept this new requirement. Intensive education efforts addressing the goals and importance of portfolios were undertaken. We hypothesized that these educational efforts improved acceptance of the portfolio and retrospectively audited the portfolio evaluation forms completed by faculty advisors. Methods Intensive education about the goals and importance of portfolios began in January 2006, including presentations at departmental conferences and one-on-one education sessions. Faculty advisors were instructed to evaluate each resident's portfolio and complete a review form. We retrospectively collected data to determine the percentage of review forms completed by faculty. The portfolio reviews also assessed the percentage of 10 required portfolio components residents had completed. Results Portfolio review forms were completed by faculty advisors for 13% (5/38) of residents during the first advisor-advisee meeting in December 2005. Initiation of intensive education efforts significantly improved compliance, with review forms completed for 68% (26/38) of residents in May 2006 (P < .0001) and 95% (36/38) in December 2006 (P < .0001). Residents also significantly improved the completeness of portfolios between May and December of 2006. Discussion Portfolios are considered by the ACGME to be a best-method technique for evaluation of practice-based learning and improvement. We have found that intensive education about the goals and importance of portfolios can enhance acceptance of this evaluation tool, resulting in improved compliance in the completion and evaluation of portfolios. PMID:22132291
Redlich, A; Köppe, I
2001-11-01
A new technical variant of caesarean section, characterised by blunt surgical preparation and a simplified suture technique, was described a few years ago. A prospective investigation compared the surgical and postoperative course as well as the rate of complications between this Misgav Ladach method and the conventional sectio technique. The individual postoperative well-being of the women was recorded with visual analogue scales. Women who met the inclusion criteria (first caesarean section, >= 32nd week of pregnancy, singleton pregnancy) were examined in this study over one year: 105 patients operated on with the Misgav Ladach method and 67 conventionally operated patients. The patients were allocated according to the first letter of the surname (A-K: Misgav Ladach method; L-Z: classical technique). The surgical time from incision to closure was significantly shorter in the Misgav Ladach group (29.8 vs. 49.3 min; p < 0.001). There were no differences between the two methods in the rate of postoperative complications. Febrile morbidity was equivalent in both groups (7.6% vs. 9%), as was the frequency of postoperative haematomas (3.8% vs. 3%). The postoperative period with consumption of analgesics was significantly longer in the group of conventionally operated patients (1.9 d vs. 2.4 d; p < 0.01). Postoperative well-being was rated significantly better (p < 0.01) by the patients of the Misgav Ladach group, probably owing to the significantly earlier mobilisation (p < 0.05). The Misgav Ladach surgical technique allows safe execution of the caesarean section and represents an alternative to the conventional method. The duration of operation (incision-to-closure time) was significantly shorter. The less traumatising handling of tissue resulted in significantly earlier mobilisation and a significantly shorter requirement for analgesics. The women rated their postoperative physical condition as better.
Goswami, Jyotirup; Patra, Niladri B.; Sarkar, Biplab; Basu, Ayan; Pal, Santanu
2013-01-01
Background and Purpose: Conventional portals, based on bony anatomy, for external beam radiotherapy for cervical cancer have been repeatedly demonstrated as inadequate. Conversely, with image-based conformal radiotherapy, better target coverage may be offset by the greater toxicities and poorer compliance associated with treating larger volumes. This study was meant to dosimetrically compare conformal and conventional radiotherapy. Materials and Methods: Five patients of carcinoma cervix underwent planning CT scan with IV contrast and targets, and organs at risk (OAR) were contoured. Two sets of plans-conventional and conformal were generated for each patient. Field sizes were recorded, and dose volume histograms of both sets of plans were generated and compared on the basis of target coverage and OAR sparing. Results: Target coverage was significantly improved with conformal plans though field sizes required were significantly larger. On the other hand, dose homogeneity was not significantly improved. Doses to the OARs (rectum, urinary bladder, and small bowel) were not significantly different across the 2 arms. Conclusion: Three-dimensional conformal radiotherapy gives significantly better target coverage, which may translate into better local control and survival. On the other hand, it also requires significantly larger field sizes though doses to the OARs are not significantly increased. PMID:24455584
Use of various contraceptive methods and time of conception in a community-based population.
Kaplan, Boris; Nahum, Ravit; Yairi, Yael; Hirsch, Michael; Pardo, Josef; Yogev, Yariv; Orvieto, Raoul
2005-11-01
To investigate the association between method of contraception and time to conception in a normal community-based population. Prospective, cross-sectional survey. Large comprehensive ambulatory women's health center. One thousand pregnant women at their first prenatal obstetrics visit were asked to complete a self-report questionnaire. The return to fertility was analyzed by type of contraceptive method, duration of use, and other sociodemographic variables. The response rate was 80% (n=798). Mean age of the patients was 29.9+/-5 years. Seventy-five percent had used a contraceptive before trying to conceive: 80% oral contraceptives, 8% intrauterine device, and 7% barrier methods. Eighty-six percent conceived spontaneously. Contraceptive users had a significantly higher conception rate than nonusers in the first 3 months from their first attempt at pregnancy. Type of contraception was significantly correlated with time to conception. The pregnancy rate within 6 months of the first attempt was 60% for oral contraceptive users compared to 70 and 81% for the intrauterine device and barrier method groups, respectively. There was no correlation between time to conception and parity or duration of contraceptive use. Other factors found to be significantly related to time to conception were older age of both partners and higher body mass index. Contraception use before a planned pregnancy does not appear to affect ease of conception. The type of method used, although not the duration of use, may influence the time required to conceive.
Modeling of shock wave propagation in large amplitude ultrasound.
Pinton, Gianmarco F; Trahey, Gregg E
2008-01-01
The Rankine-Hugoniot relation for shock wave propagation describes the shock speed of a nonlinear wave. This paper investigates time-domain numerical methods that solve the nonlinear parabolic wave equation, or the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and the conditions they require to satisfy the Rankine-Hugoniot relation. Two numerical methods commonly used in hyperbolic conservation laws are adapted to solve the KZK equation: Godunov's method and the monotonic upwind scheme for conservation laws (MUSCL). It is shown that they satisfy the Rankine-Hugoniot relation regardless of attenuation. These two methods are compared with the current implicit solution based method. When the attenuation is small, such as in water, the current method requires a degree of grid refinement that is computationally impractical. All three numerical methods are compared in simulations for lithotripters and high intensity focused ultrasound (HIFU) where the attenuation is small compared to the nonlinearity because much of the propagation occurs in water. The simulations are performed on grid sizes that are consistent with present-day computational resources but are not sufficiently refined for the current method to satisfy the Rankine-Hugoniot condition. It is shown that satisfying the Rankine-Hugoniot conditions has a significant impact on metrics relevant to lithotripsy (such as peak pressures) and HIFU (intensity). Because the Godunov and MUSCL schemes satisfy the Rankine-Hugoniot conditions on coarse grids, they are particularly advantageous for three-dimensional simulations.
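A minimal sketch of a Godunov scheme on the inviscid Burgers equation, a standard one-dimensional surrogate for nonlinear steepening (not the KZK equation itself), showing that the captured shock travels at the Rankine-Hugoniot speed s = (uL + uR)/2.

```python
# Minimal Godunov scheme for the inviscid Burgers equation u_t + (u^2/2)_x = 0,
# used here as a 1-D stand-in (not the KZK equation) to show that the captured
# shock travels at the Rankine-Hugoniot speed s = (uL + uR)/2.
import numpy as np

def godunov_flux(ul, ur):
    if ul > ur:                                  # shock
        s = 0.5 * (ul + ur)
        return 0.5 * ul**2 if s > 0 else 0.5 * ur**2
    if ul > 0:                                   # rarefaction, supersonic right
        return 0.5 * ul**2
    if ur < 0:                                   # rarefaction, supersonic left
        return 0.5 * ur**2
    return 0.0                                   # sonic point inside the fan

nx, L, t_end = 400, 2.0, 0.8
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)                  # uL = 1, uR = 0 -> s = 0.5

t = 0.0
while t < t_end:
    dt = min(0.4 * dx / max(abs(u).max(), 1e-12), t_end - t)   # CFL condition
    F = np.array([godunov_flux(u[i], u[i + 1]) for i in range(nx - 1)])
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])
    t += dt

shock_pos = x[np.argmin(np.abs(u - 0.5))]        # numerical shock location
print("numerical shock position :", shock_pos)
print("Rankine-Hugoniot predicts:", 0.5 + 0.5 * t_end)
```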
Why significant variables aren't automatically good predictors.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2015-11-10
Thus far, genome-wide association studies (GWAS) have been disappointing in the inability of investigators to use the results of identified, statistically significant variants in complex diseases to make predictions useful for personalized medicine. Why are significant variables not leading to good prediction of outcomes? We point out that this problem is prevalent in simple as well as complex data, in the sciences as well as the social sciences. We offer a brief explanation and some statistical insights on why higher significance cannot automatically imply stronger predictivity and illustrate through simulations and a real breast cancer example. We also demonstrate that highly predictive variables do not necessarily appear as highly significant, thus evading the researcher using significance-based methods. We point out that what makes variables good for prediction versus significance depends on different properties of the underlying distributions. If prediction is the goal, we must lay aside significance as the only selection standard. We suggest that progress in prediction requires efforts toward a new research agenda of searching for a novel criterion to retrieve highly predictive variables rather than highly significant variables. We offer an alternative approach that was not designed for significance, the partition retention method, which was very effective predicting on a long-studied breast cancer data set, by reducing the classification error rate from 30% to 8%.
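A tiny simulation of the paper's central point: with a large sample, a weakly predictive variable can be overwhelmingly significant yet classify barely above chance; all numbers are synthetic.

```python
# Tiny simulation of "significant but not predictive": with a large sample,
# a weak variable yields a minuscule p-value yet classifies barely above chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 100_000
y = rng.integers(0, 2, size=n)                       # binary outcome
x = 0.06 * y + rng.normal(size=n)                    # weak association with outcome

t, p = stats.ttest_ind(x[y == 1], x[y == 0])
print("t =", round(t, 1), " p =", p)                 # extremely significant

# Best single-threshold classifier on x
threshold = 0.5 * (x[y == 1].mean() + x[y == 0].mean())
accuracy = np.mean((x > threshold) == (y == 1))
print("classification accuracy:", round(accuracy, 3))  # close to 0.5
```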
Bayesian evaluation of effect size after replicating an original study
van Aert, Robbie C. M.; van Assen, Marcel A. L. M.
2017-01-01
The vast majority of published results in the literature is statistically significant, which raises concerns about their reliability. The Reproducibility Project Psychology (RPP) and Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. The original study and replication were both statistically significant in 36.1% of cases in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and quantifies the amount of evidence in favor of a zero, small, medium and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance, and demonstrate the necessity to control for the original study's significance to enable the accumulation of evidence for a true zero effect. Then we applied the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the included studies in EE-RP are generally larger than in RPP, but that the sample sizes of especially the included studies in RPP are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of the replication, akin to power analysis in null hypothesis significance testing, and present an easy to use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method. PMID:28388646
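A rough, heavily simplified sketch of the idea: posterior probabilities over four candidate effect sizes, with the original study's likelihood truncated to the significance region because it was selected for being significant. The effect-size metric, standard errors, prior and example numbers are illustrative assumptions, not the published implementation (the authors' web application and R code should be used in practice).

```python
# Rough sketch of the snapshot-hybrid idea (not the published implementation):
# posterior probabilities for a zero/small/medium/large true effect, combining a
# replication with an original study whose likelihood is truncated to the
# significance region. All inputs below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

candidates = {"zero": 0.0, "small": 0.2, "medium": 0.5, "large": 0.8}

def snapshot_probs(d_orig, se_orig, d_rep, se_rep, alpha=0.05):
    zcrit = norm.ppf(1 - alpha / 2)   # selection: original observed above zcrit*se_orig
    post = {}
    for name, d in candidates.items():
        # Original study: normal density truncated to the significance region.
        lik_orig = (norm.pdf(d_orig, loc=d, scale=se_orig)
                    / norm.sf(zcrit * se_orig, loc=d, scale=se_orig))
        lik_rep = norm.pdf(d_rep, loc=d, scale=se_rep)
        post[name] = lik_orig * lik_rep          # flat prior over the four candidates
    total = sum(post.values())
    return {k: v / total for k, v in post.items()}

# Hypothetical example: significant original, smaller replication estimate.
print(snapshot_probs(d_orig=0.55, se_orig=0.20, d_rep=0.10, se_rep=0.12))
```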
NASA Astrophysics Data System (ADS)
Anderton, Rupert N.; Cameron, Colin D.; Burnett, James G.; Güell, Jeff J.; Sanders-Reed, John N.
2014-06-01
This paper discusses the design of an improved passive millimeter wave imaging system intended to be used for base security in degraded visual environments. The discussion starts with the selection of the optimum frequency band. The trade-offs between requirements on detection, recognition and identification ranges and optical aperture are discussed with reference to the Johnson Criteria. It is shown that these requirements also affect image sampling, receiver numbers and noise temperature, frame rate, field of view, focusing requirements and mechanisms, and tolerance budgets. The effect of image quality degradation is evaluated and a single testable metric is derived that best describes the effects of degradation on meeting the requirements. The discussion is extended to tolerance budgeting constraints if significant degradation is to be avoided, including surface roughness, receiver position errors and scan conversion errors. Although the reflective twist-polarization imager design proposed is potentially relatively low cost and high performance, there is a significant problem with obscuration of the beam by the receiver array. Methods of modeling this accurately and thus designing for best performance are given.
An ERTS-1 investigation for Lake Ontario and its basin
NASA Technical Reports Server (NTRS)
Polcyn, F. C.; Falconer, A. (Principal Investigator); Wagner, T. W.; Rebel, D. L.
1975-01-01
The author has identified the following significant results. Methods of manual, semi-automatic, and automatic (computer) data processing were evaluated, as were the requirements for spatial physiographic and limnological information. The coupling of specially processed ERTS data with simulation models of the watershed precipitation/runoff process provides potential for water resources management. Optimal and full use of the data requires a mix of data processing and analysis techniques, including single band editing, two band ratios, and multiband combinations. A combination of maximum likelihood ratio and near-IR/red band ratio processing was found to be particularly useful.
Stereoselective synthesis from a process research perspective.
Hillier, Michael C; Reider, Paul J
2002-03-01
The process chemists' primary responsibility is to develop efficient and reproducible syntheses of pharmaceutically active compounds. This task is complicated when dealing with chiral molecules that often must be made as single isomers according to regulatory guidelines. The presence of any isomeric impurity in the final product, even in small amounts, is usually not acceptable. This requirement necessitates an exquisite understanding of the methods employed in the construction of chiral drugs. However, the chemistry available for this purpose is sometimes limited and often requires a significant amount of effort and creativity to be made both functional and consistent.
Disease control in hatchery fish
Fish, F.F.
1947-01-01
The method described herein has been extensively tested, both in the laboratory and at the producing hatchery, over a period of several years. Once familiarity with the details of application has been mastered, the reduction in effort required to treat fish is amazing. For example, two men have treated 20 large ponds containing several million fish in one morning with no significant increase in mortality of the fish, whereas a crew of eight men required a full day to treat a single similar pond by hand dipping the fish, with a subsequent loss approximating 50 percent of the stock.
Quantitative assessment of anthrax vaccine immunogenicity using the dried blood spot matrix.
Schiffer, Jarad M; Maniatis, Panagiotis; Garza, Ilana; Steward-Clark, Evelene; Korman, Lawrence T; Pittman, Phillip R; Mei, Joanne V; Quinn, Conrad P
2013-03-01
The collection, processing and transportation to a testing laboratory of large numbers of clinical samples during an emergency response situation present significant cost and logistical issues. Blood and serum are common clinical samples for diagnosis of disease. Serum preparation requires significant on-site equipment and facilities for immediate processing and cold storage, and significant costs for cold-chain transport to testing facilities. The dried blood spot (DBS) matrix offers an alternative to serum for rapid and efficient sample collection with fewer on-site equipment requirements and considerably lower storage and transport costs. We have developed and validated assay methods for using DBS in the quantitative anti-protective antigen IgG enzyme-linked immunosorbent assay (ELISA), one of the primary assays for assessing immunogenicity of anthrax vaccine and for confirmatory diagnosis of Bacillus anthracis infection in humans. We have also developed and validated high-throughput data analysis software to facilitate data handling for large clinical trials and emergency response. Published by Elsevier Ltd.
Iterated Gate Teleportation and Blind Quantum Computation.
Pérez-Delgado, Carlos A; Fitzsimons, Joseph F
2015-06-05
Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements.
Consolidation of lunar regolith: Microwave versus direct solar heating
NASA Technical Reports Server (NTRS)
Kunitzer, J.; Strenski, D. G.; Yankee, S. J.; Pletka, B. J.
1991-01-01
The production of construction materials on the lunar surface will require an appropriate fabrication technique. Two processing methods considered as being suitable for producing dense, consolidated products such as bricks are direct solar heating and microwave heating. An analysis was performed to compare the two processes in terms of the amount of power and time required to fabricate bricks of various size. The regolith was considered to be a mare basalt with an overall density of 60 pct. of theoretical. Densification was assumed to take place by vitrification since this process requires moderate amounts of energy and time while still producing dense products. Microwave heating was shown to be significantly faster compared to solar furnace heating for rapid production of realistic-size bricks.
NASA Technical Reports Server (NTRS)
Whorton, M. S.; Eldridge, J. T.; Ferebee, R. C.; Lassiter, J. O.; Redmon, J. W., Jr.
1998-01-01
As a research facility for microgravity science, the International Space Station (ISS) will be used for numerous investigations such as protein crystal growth, combustion, and fluid mechanics experiments which require a quiescent acceleration environment across a broad spectrum of frequencies. These experiments are most sensitive to low-frequency accelerations and can tolerate much higher accelerations at higher frequency. However, the anticipated acceleration environment on ISS significantly exceeds the required acceleration level. The ubiquity and difficulty in characterization of the disturbance sources precludes source isolation, requiring vibration isolation to attenuate the anticipated disturbances to an acceptable level. This memorandum reports the results of research in active control methods for microgravity vibration isolation.
Beck, H J; Birch, G F
2013-06-01
Stormwater contaminant loading estimates using event mean concentration (EMC), rainfall/runoff relationship calculations and computer modelling (Model of Urban Stormwater Infrastructure Conceptualisation--MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50 and ∼25 % relative standard deviation, respectively) questions the reliability of these methods. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 from 18); however, they were not frequent enough to statistically infer that these methods produced the same results. Great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
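A minimal sketch of the EMC loading calculation mentioned above (load = EMC × runoff volume, with runoff from a volumetric runoff coefficient); the catchment values and concentrations are placeholders, and the MUSIC model itself is not reproduced.

```python
# Sketch of an event-mean-concentration (EMC) loading estimate:
#   runoff volume = rainfall depth x catchment area x volumetric runoff coefficient
#   annual load   = EMC x runoff volume
# Catchment area, rainfall, runoff coefficient and EMC values are placeholders.
RAINFALL_M = 1.2            # annual rainfall depth (m)
AREA_M2 = 2.5e6             # catchment area (m^2)
RUNOFF_COEFF = 0.6          # volumetric runoff coefficient for an urbanised catchment

runoff_m3 = RAINFALL_M * AREA_M2 * RUNOFF_COEFF

emc_mg_per_L = {            # illustrative event mean concentrations
    "total suspended solids": 150.0,
    "total nitrogen": 2.0,
    "zinc": 0.25,
}

for analyte, emc in emc_mg_per_L.items():
    load_kg = emc * runoff_m3 * 1000 / 1e6     # mg/L * m^3 -> kg (1 m^3 = 1000 L)
    print(f"{analyte}: {load_kg:,.0f} kg/yr")
```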
Dong, Tao; Yu, Liang; Gao, Difeng; Yu, Xiaochen; Miao, Chao; Zheng, Yubin; Lian, Jieni; Li, Tingting; Chen, Shulin
2015-12-01
Accurate determination of fatty acid contents is routinely required in microalgal and yeast biofuel studies. A method of rapid in situ fatty acid methyl ester (FAME) derivatization directly from wet fresh microalgal and yeast biomass was developed in this study. This method does not require prior solvent extraction or dehydration. FAMEs were prepared with a sequential alkaline hydrolysis (15 min at 85 °C) and acidic esterification (15 min at 85 °C) process. The resulting FAMEs were extracted into n-hexane and analyzed using gas chromatography. The effects of each processing parameter (temperature, reaction time, and water content) upon the lipids quantification in the alkaline hydrolysis step were evaluated with a full factorial design. This method could tolerate water content up to 20% (v/v) in total reaction volume, which equaled up to 1.2 mL of water in biomass slurry (with 0.05-25 mg of fatty acid). There were no significant differences in FAME quantification (p>0.05) between the standard AOAC 991.39 method and the proposed wet in situ FAME preparation method. This fatty acid quantification method is applicable to fresh wet biomass of a wide range of microalgae and yeast species.
SARP: a value-based approach to hospice admissions triage.
MacDonald, D
1995-01-01
As hospices become established and case referrals increase, many programs are faced with the necessity of instituting waiting lists. Prioritizing cases for order of admission requires a triage method that is rational, fair, and consistent. This article describes the SARP method of hospice admissions triage, which evaluates prospective cases according to seniority, acuity, risk, and political significance. SARP's essential features, operative assumptions, advantages, and limitations are discussed, as well as the core hospice values which underlie its use. The article concludes with a call for trial and evaluation of SARP in other hospice settings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W.; Romberger, Jeff
The HVAC Controls Evaluation Protocol is designed to address evaluation issues for direct digital controls/energy management systems/building automation systems (DDC/EMS/BAS) that are installed to control heating, ventilation, and air-conditioning (HVAC) equipment in commercial and institutional buildings. (This chapter refers to the DDC/EMS/BAS measure as HVAC controls.) This protocol may also be applicable to industrial facilities such as clean rooms and labs, which have either significant HVAC equipment or spaces requiring special environmental conditions.
Proof test methodology for composites
NASA Technical Reports Server (NTRS)
Wu, Edward M.; Bell, David K.
1992-01-01
The special requirements for proof test of composites are identified based on the underlying failure process of composites. Two proof test methods are developed to eliminate the inevitable weak fiber sites without also causing flaw clustering which weakens the post-proof-test composite. Significant reliability enhancement by these proof test methods has been experimentally demonstrated for composite strength and composite life in tension. This basic proof test methodology is relevant to the certification and acceptance of critical composite structures. It can also be applied to the manufacturing process development to achieve zero-reject for very large composite structures.
NASA Astrophysics Data System (ADS)
Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.
2018-02-01
In this paper we design a nonparametric method for failures detection and localization in the aircraft control system that uses the measurements of the control signals and the aircraft states only. It doesn’t require a priori information of the aircraft model parameters, training or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of detection and localization problem solution by completely eliminating errors, associated with aircraft model uncertainties.
Robust stabilization of the Space Station in the presence of inertia matrix uncertainty
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang; Sunkel, John
1993-01-01
This paper presents a robust H-infinity full-state feedback control synthesis method for uncertain systems with D11 not equal to 0. The method is applied to the robust stabilization problem of the Space Station in the face of inertia matrix uncertainty. The control design objective is to find a robust controller that yields the largest stable hypercube in uncertain parameter space, while satisfying the nominal performance requirements. The significance of employing an uncertain plant model with D11 not equal 0 is demonstrated.
Cavity-Dumped Communication Laser Design
NASA Technical Reports Server (NTRS)
Roberts, W. T.
2003-01-01
Cavity-dumped lasers have significant advantages over more conventional Q-switched lasers for high-rate operation with pulse position modulation communications, including the ability to emit laser pulses at 1- to 10-megahertz rates, with pulse widths of 0.5 to 5 nanoseconds. A major advantage of cavity dumping is the potential to vary the cavity output percentage from pulse to pulse, maintaining the remainder of the energy in reserve for the next pulse. This article presents the results of a simplified cavity-dumped laser model, establishing the requirements for cavity efficiency and projecting the ultimate laser efficiency attainable in normal operation. In addition, a method of reducing or eliminating laser dead time is suggested that could significantly enhance communication capacity. The design of a laboratory demonstration laser is presented with estimates of required cavity efficiency and demonstration potential.
Gökşen, Damla; Atik Altınok, Yasemin; Ozen, Samim; Demir, Günay; Darcan, Sükran
2014-01-01
Medical nutritional therapy is important for glycemic control in children and adolescents with type 1 diabetes mellitus (T1DM). Carbohydrate (carb) counting, which is a more flexible nutritional method, has become popular in recent years. This study aimed to investigate the effects of carb counting on metabolic control, body measurements and serum lipid levels in children and adolescents with T1DM. T1DM patients aged 7-18 years and receiving flexible insulin therapy were divided into carb counting (n=52) and control (n=32) groups and were followed for 2 years in this randomized, controlled study. Demographic characteristics, body measurements, insulin requirements, hemoglobin A1c (HbA1c) and serum lipid levels at baseline and at follow-up were evaluated. There were no statistically significant differences between the groups in mean HbA1c values in the year preceding the study or in age, gender, duration of diabetes, puberty stage, total daily insulin dose, body mass index (BMI) standard deviation score (SDS) and serum lipid values. While there were no differences in BMI SDS, daily insulin requirement, total cholesterol, low-density lipoprotein and triglyceride values between the two groups (p>0.05) during the follow-up, annual mean HbA1c levels of the 2nd year were significantly lower in the carb counting group (p=0.010). The mean values of high-density lipoprotein were also significantly higher in the first and 2nd years in the carb counting group (p=0.02 and p=0.043, respectively). Carb counting may provide good metabolic control in children and adolescents with T1DM without causing any increase in weight or in insulin requirements.
Stuart, James Ian; Delport, Johan; Lannigan, Robert; Zahariadis, George
2014-01-01
BACKGROUND: Disease monitoring of viruses using real-time polymerase chain reaction (PCR) requires knowledge of the precision of the test to determine what constitutes a significant change. Calculation of quantitative PCR confidence limits requires bivariate statistical methods. OBJECTIVE: To develop a simple-to-use graphical user interface to determine the uncertainty of measurement (UOM) of BK virus, cytomegalovirus (CMV) and Epstein-Barr virus (EBV) real-time PCR assays. METHODS: Thirty positive clinical samples for each of the three viral assays were repeated once. A graphical user interface was developed using a spreadsheet (Excel, Microsoft Corporation, USA) to enable data entry and calculation of the UOM (according to Fieller’s theorem) and PCR efficiency. RESULTS: The confidence limits for the BK virus, CMV and EBV tests were ∼0.5 log, 0.5 log to 1.0 log, and 0.5 log to 1.0 log, respectively. The efficiencies of these assays, in the same order were 105%, 119% and 90%. The confidence limits remained stable over the linear range of all three tests. DISCUSSION: A >5 fold (0.7 log) and a >3-fold (0.5 log) change in viral load were significant for CMV and EBV when the results were ≤1000 copies/mL and >1000 copies/mL, respectively. A >3-fold (0.5 log) change in viral load was significant for BK virus over its entire linear range. PCR efficiency was ideal for BK virus and EBV but not CMV. Standardized international reference materials and shared reporting of UOM among laboratories are required for the development of treatment guidelines for BK virus, CMV and EBV in the context of changes in viral load. PMID:25285125
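A minimal sketch of Fieller's theorem for a confidence interval on a ratio of two means, which the abstract cites as the basis of the UOM calculation; the paired duplicate data below are hypothetical and the spreadsheet tool itself is not reproduced.

```python
# Sketch of Fieller's theorem for a confidence interval on the ratio of two means
# (the approach the abstract cites for the uncertainty-of-measurement calculation).
# The paired duplicate data are hypothetical log10 viral loads.
import numpy as np
from scipy import stats

def fieller_ci(x, y, conf=0.95):
    """CI for mean(x)/mean(y) from paired measurements x, y (Fieller's theorem)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    a, b = x.mean(), y.mean()
    v11 = x.var(ddof=1) / n                 # variance of mean(x)
    v22 = y.var(ddof=1) / n                 # variance of mean(y)
    v12 = np.cov(x, y, ddof=1)[0, 1] / n    # covariance of the two means (paired design)
    t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
    A = b**2 - t**2 * v22
    B = -2 * (a * b - t**2 * v12)
    C = a**2 - t**2 * v11
    disc = B**2 - 4 * A * C
    if A <= 0 or disc < 0:
        raise ValueError("Fieller interval is unbounded for these data")
    r = np.sqrt(disc)
    return ((-B - r) / (2 * A), (-B + r) / (2 * A))

# Hypothetical duplicate measurements (log10 copies/mL) of the same six samples.
run1 = [3.10, 3.75, 2.95, 4.30, 3.60, 2.90]
run2 = [3.02, 3.90, 3.05, 4.42, 3.50, 3.00]
print("ratio:", np.mean(run1) / np.mean(run2), "95% CI:", fieller_ci(run1, run2))
```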
Gutiérrez-López, Rafael; Martínez-de la Puente, Josué; Gangoso, Laura; Soriguer, Ramón C; Figuerola, Jordi
2015-06-01
The barcoding of life initiative provides a universal molecular tool to distinguish animal species based on the amplification and sequencing of a fragment of the subunit 1 of the cytochrome oxidase (COI) gene. Obtaining good quality DNA for barcoding purposes is a limiting factor, especially in studies conducted on small-sized samples or those requiring the maintenance of the organism as a voucher. In this study, we compared the number of positive amplifications and the quality of the sequences obtained using DNA extraction methods that also differ in their economic costs and time requirements and we applied them for the genetic characterization of louse flies. Four DNA extraction methods were studied: chloroform/isoamyl alcohol, HotShot procedure, Qiagen DNeasy(®) Tissue and Blood Kit and DNA Kit Maxwell(®) 16LEV. All the louse flies were morphologically identified as Ornithophila gestroi and a single COI-based haplotype was identified. The number of positive amplifications did not differ significantly among DNA extraction procedures. However, the quality of the sequences was significantly lower for the case of the chloroform/isoamyl alcohol procedure with respect to the rest of methods tested here. These results may be useful for the genetic characterization of louse flies, leaving most of the remaining insect as a voucher. © 2015 The Society for Vector Ecology.
Ho, Bella; Ho, Eric
2012-01-01
Introduction: ISO 15189 was a new standard published in 2003 for accrediting medical laboratories. We believe that some requirements of the ISO 15189 standard are especially difficult to meet for the majority of laboratories. The aim of this article was to present the frequency of nonconformities to requirements of the ISO 15189 accreditation standard, encountered during the assessments of medical laboratories in Hong Kong from 2004 to 2009. Materials and methods: Nonconformities reported in assessments based on ISO 15189 were analyzed in two periods – from 2004 to 2006 and in 2009. They were categorized according to the ISO 15189 clause numbers. The performance of 27 laboratories initially assessed between 2004 and 2006 was compared to their performance in the second reassessment in 2009. Results: For management requirements, nonconformities were most frequently reported against quality management system, quality and technical records and document control; whereas for technical requirements, they were reported against examination procedures, equipment, and assuring quality of examination procedures. There was no major difference in the types of common nonconformities reported in the two study periods. The total number of nonconformities reported in the second reassessment of the 27 laboratories in 2009 was almost halved compared to their initial assessments. The number of significant nonconformities per laboratory significantly decreased (P = 0.023). Conclusion: Similar nonconformities were reported in the two study periods, though the frequency encountered decreased. The significant decrease in the number of significant nonconformities encountered in the same group of laboratories in the two periods substantiated that ISO 15189 contributed to quality improvement of accredited laboratories. PMID:22838190
Change Point Detection in Correlation Networks
NASA Astrophysics Data System (ADS)
Barnett, Ian; Onnela, Jukka-Pekka
2016-01-01
Many systems of interacting elements can be conceptualized as networks, where network nodes represent the elements and network ties represent interactions between the elements. In systems where the underlying network evolves, it is useful to determine the points in time where the network structure changes significantly as these may correspond to functional change points. We propose a method for detecting change points in correlation networks that, unlike previous change point detection methods designed for time series data, requires minimal distributional assumptions. We investigate the difficulty of change point detection near the boundaries of the time series in correlation networks and study the power of our method and competing methods through simulation. We also show the generalizable nature of the method by applying it to stock price data as well as fMRI data.
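As a rough sketch of the general idea (not the authors' statistic), one can scan candidate split points, compare the sample correlation matrices before and after each split, and judge significance by permuting the time ordering. The example below, on simulated data, uses a Frobenius-norm difference as the scan statistic; the permutation test assumes negligible temporal autocorrelation, which is a simplification.

```python
import numpy as np

def corr_change_stat(X, t):
    """Frobenius-norm difference of correlation matrices before/after time t."""
    return np.linalg.norm(np.corrcoef(X[:t].T) - np.corrcoef(X[t:].T))

def detect_change_point(X, min_seg=20, n_perm=200, seed=0):
    """Scan for a single change point in the correlation structure of X (T x p)."""
    rng = np.random.default_rng(seed)
    T = len(X)
    cand = list(range(min_seg, T - min_seg))
    stats_obs = [corr_change_stat(X, t) for t in cand]
    t_hat = cand[int(np.argmax(stats_obs))]
    obs = max(stats_obs)

    # Null distribution of the maximum statistic under permuted time ordering
    null = []
    for _ in range(n_perm):
        Xp = X[rng.permutation(T)]
        null.append(max(corr_change_stat(Xp, t) for t in cand))
    p_value = (1 + sum(s >= obs for s in null)) / (1 + n_perm)
    return t_hat, obs, p_value

# Simulated example: correlation between variables appears halfway through
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))
z = rng.standard_normal((100, 1))
B = 0.7 * z + 0.5 * rng.standard_normal((100, 5))
X = np.vstack([A, B])
print(detect_change_point(X))
```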
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Fish, Jacob; Waisman, Haim
Two heuristic strategies intended to enhance the performance of the generalized global basis (GGB) method [H. Waisman, J. Fish, R.S. Tuminaro, J. Shadid, The Generalized Global Basis (GGB) method, International Journal for Numerical Methods in Engineering 61(8), 1243-1269] applied to nonlinear systems are presented. The standard GGB accelerates a multigrid scheme by an additional coarse grid correction that filters out slowly converging modes. This correction requires a potentially costly eigen calculation. This paper considers reusing previously computed eigenspace information. The enriched GGB scheme adds new eigenvectors to the prolongation operator, while the modified method (MGGB) selectively reuses the same prolongation. Both methods use the criterion of principal angles between the subspaces spanned by the previous and current prolongation operators. Numerical examples clearly indicate significant time savings, in particular for the MGGB scheme.
Lee, Sheng-Yu; Chen, Shiou-Lan; Chang, Yun-Hsuan; Chu, Chun-Hsien; Chen, Shih-Heng; Chen, Po See; Huang, San-Yuan; Tzeng, Nian-Sheng; Wang, Liang-Jen; Lee, I Hui; Wang, Tzu-Yun; Chen, Kao Chin; Yang, Yen Kuang; Hong, Jau-Shyong
2015-01-01
Background: Low-dose dextromethorphan (DM) might have anti-inflammatory and neurotrophic effects mechanistically remote from an NMDA receptor. In a randomized, double-blind, controlled 12 week study, we investigated whether add-on dextromethorphan reduced cytokine levels and benefitted opioid-dependent patients undergoing methadone maintenance therapy (MMT). Methods: Patients were randomly assigned to a group: DM60 (60mg/day dextromethorphan; n = 65), DM120 (120mg/day dextromethorphan; n = 65), or placebo (n = 66). Primary outcomes were the methadone dose required, plasma morphine level, and retention in treatment. Plasma tumor necrosis factor (TNF)-α, C-reactive protein, interleukin (IL)-6, IL-8, transforming growth factor–β1, and brain-derived neurotrophic factor (BDNF) levels were examined during weeks 0, 1, 4, 8, and 12. Multiple linear regressions with generalized estimating equation methods were used to examine the therapeutic effect. Results: After 12 weeks, the DM60 group had significantly longer treatment retention and lower plasma morphine levels than did the placebo group. Plasma TNF-α was significantly decreased in the DM60 group compared to the placebo group. However, changes in plasma cytokine levels, BDNF levels, and the methadone dose required in the three groups were not significantly different. Conclusions: We provide evidence—decreased concomitant heroin use—of low-dose add-on DM’s efficacy for treating opioid-dependent patients undergoing MMT. PMID:25716777
Efficient dual approach to distance metric learning.
Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton
2014-02-01
Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a practical limit of around a few hundred dimensions on the problems that can be solved. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius norm regularized SDP problems approximately.
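The key primitive behind such O(D^3)-per-step dual solvers is typically a projection of a symmetric matrix onto the positive semidefinite cone, which costs one eigendecomposition. The snippet below is a generic illustration of that projection, not the authors' full algorithm.

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the positive semidefinite cone.

    This eigendecomposition is the O(D^3) step that dominates the cost of
    many dual / Frobenius-regularized metric-learning solvers.
    """
    M = 0.5 * (M + M.T)                    # symmetrize against round-off
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)              # zero out negative eigenvalues
    return (V * w) @ V.T

# Example: repair an indefinite "almost-Mahalanobis" matrix
rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5))
S = 0.5 * (S + S.T)
M = project_psd(S)
print(np.linalg.eigvalsh(M).min() >= -1e-12)   # True: M is now PSD
```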
Behavioral Training as New Treatment for Adult Amblyopia: A Meta-Analysis and Systematic Review.
Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F
2015-06-01
New behavioral treatment methods, including dichoptic training, perceptual learning, and video gaming, have been proposed to improve visual function in adult amblyopia. Here, we conducted a meta-analysis of these methods to investigate the factors involved in amblyopia recovery and their clinical significance. Mean and individual participant data meta-analyses were performed on 24 studies using the new behavioral methods in adults. Studies were identified using PubMed, Google Scholar, and published reviews. The new methods yielded a mean improvement in visual acuity of 0.17 logMAR, with 32% of participants achieving gains ≥ 0.2 logMAR, and a mean improvement in stereo sensitivity of 0.01 arcsec⁻¹, with 42% of participants improving ≥ 2 octaves. The most significant predictor of treatment outcome was visual acuity at the onset of treatment. Participants with more severe amblyopia improved more on visual acuity and less on stereo sensitivity than those with milder amblyopia. Better initial stereo sensitivity was a predictor of greater gains in stereo sensitivity following treatment. Treatment type, amblyopia type, age, and training duration did not have any significant influence on visual and stereo acuity outcomes. Our analyses showed that some participants may benefit from the new treatments; however, clinical trials are required to confirm these findings. Despite the diverse nature of the new behavioral methods, the lack of significant differences in visual and stereo sensitivity outcomes among them suggests that visual attention, a common element among the varied treatment methods, may play an important role in amblyopia recovery.
Jo, Ayami; Kanazawa, Manabu; Sato, Yusuke; Iwaki, Maiko; Akiba, Norihisa; Minakuchi, Shunsuke
2015-08-01
To compare the effect of conventional complete dentures (CDs) fabricated using two different impression methods on patient-reported outcomes in a randomized controlled trial (RCT). A cross-over RCT was performed with edentulous patients who required maxillomandibular CDs. Mandibular CDs were fabricated using two different methods. The conventional method used a custom tray, border moulded with impression compound, and a silicone impression material; the simplified method used a stock tray and alginate. Participants were randomly divided into two groups. The C-S group received the conventional method first, followed by the simplified method; the S-C group received them in the reverse order. Adjustment was performed four times, and a washout period of 1 month was set. The primary outcome was general patient satisfaction, measured using visual analogue scales, and the secondary outcome was oral health-related quality of life, measured using the Japanese version of the Oral Health Impact Profile for edentulous patients (OHIP-EDENT-J) questionnaire. Twenty-four participants completed the trial. With regard to general patient satisfaction, the conventional method was rated significantly more acceptable than the simplified method. No significant differences were observed between the two methods in the OHIP-EDENT-J scores. This study showed that CDs fabricated with the conventional method were rated significantly higher for general patient satisfaction than those fabricated with the simplified method. CDs fabricated with the conventional method, which included a preliminary impression made using alginate in a stock tray and subsequently a final impression made using silicone in a border-moulded custom tray, resulted in higher general patient satisfaction. UMIN000009875. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Blakeslee, Cynthia Darden
2012-01-01
Significant changes in requirements for reading instruction and special education teacher preparation have occurred in recent years due to provisions found in the No Child Left Behind legislation of 2001 and the 2004 Individuals with Disabilities Education Improvement Act. This study examined the preparation for reading instruction that…
24 CFR 972.127 - Standards for determining whether a property is viable in the long term.
Code of Federal Regulations, 2010 CFR
2010-04-01
... must not exceed the Section 8 cost under the method contained in the Appendix to this part, even if the... housing in the community (typically family). (c) A greater income mix can be achieved. (1) Measures generally will be required to broaden the range of resident incomes over time to include a significant mix...
The Finnish multisource national forest inventory: small-area estimation and map production
Erkki Tomppo
2009-01-01
A driving force motivating development of the multisource national forest inventory (MS-NFI) in connection with the Finnish national forest inventory (NFI) was the desire to obtain forest resource information for smaller areas than is possible using field data only without significantly increasing the cost of the inventory. A basic requirement for the method was that...
Rodent repellent studies. IV. Preparation and properties of trinitrobenzene-aryl amine complexes
DeWitt, J.B.; Bellack, E.; Welch, J.F.
1953-01-01
Data are presented on methods of preparation, chemical and physical characteristics, toxicity, and repellency to rodents of complexes of symmetrical trinitrobenzene with various aromatic amines. When applied in suitable carriers or incorporated in plastic films, members of this series of materials were shown to offer significant increases in the time required by wild rodents to damage common packaging materials.
ERIC Educational Resources Information Center
Annesi, James J.; Porter, Kandice J.; Hill, Grant M.; Goldfine, Bernard D.
2017-01-01
Purpose: The aim of this research was to assess the association between university-based instructional physical activity (PA) courses and changes in overall PA levels and negative mood and their interrelations. The study also sought to determine the amount of change in PA required to significantly improve mood in course enrollees. Method:…
Durable and self-hydrating tungsten carbide-based composite polymer electrolyte membrane fuel cells.
Zheng, Weiqing; Wang, Liang; Deng, Fei; Giles, Stephen A; Prasad, Ajay K; Advani, Suresh G; Yan, Yushan; Vlachos, Dionisios G
2017-09-04
Proton conductivity of the polymer electrolyte membranes in fuel cells dictates their performance and requires sufficient water management. Here, we report a simple, scalable method to produce well-dispersed transition metal carbide nanoparticles. We demonstrate that these, when added as an additive to the proton exchange Nafion membrane, provide significant enhancement in power density and durability over 100 hours, surpassing both the baseline Nafion and platinum-containing recast Nafion membranes. Focused ion beam/scanning electron microscope tomography reveals the key membrane degradation mechanism. Density functional theory exposes that OH• and H• radicals adsorb more strongly from solution and reactions producing OH• are significantly more endergonic on tungsten carbide than on platinum. Consequently, tungsten carbide may be a promising catalyst in self-hydrating crossover gases while retarding desorption of and capturing free radicals formed at the cathode, resulting in enhanced membrane durability. The proton conductivity of polymer electrolyte membranes in fuel cells dictates their performance, but requires sufficient water management. Here, the authors report a simple method to produce well-dispersed transition metal carbide nanoparticles as additives to enhance the performance of Nafion membranes in fuel cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cree, Johnathan Vee; Delgado-Frias, Jose
Large scale wireless sensor networks have been proposed for applications ranging from anomaly detection in an environment to vehicle tracking. Many of these applications require the networks to be distributed across a large geographic area while supporting three- to five-year network lifetimes. In order to support these requirements, large scale wireless sensor networks of duty-cycled devices need a method of efficient and effective autonomous configuration/maintenance. This method should gracefully handle the synchronization tasks of duty-cycled networks. Further, an effective configuration solution needs to recognize that in-network data aggregation and analysis present significant benefits to wireless sensor networks, and should configure the network in a way such that these higher level functions benefit from the logically imposed structure. NOA, the proposed configuration and maintenance protocol, provides a multi-parent hierarchical logical structure for the network that reduces the synchronization workload. It also provides higher level functions with significant inherent benefits, such as but not limited to: removing network divisions that are created by single-parent hierarchies, guarantees for when data will be compared in the hierarchy, and redundancies for communication as well as in-network data aggregation/analysis/storage.
Fast Video Encryption Using the H.264 Error Propagation Property for Smart Mobile Devices
Chung, Yongwha; Lee, Sungju; Jeon, Taewoong; Park, Daihee
2015-01-01
In transmitting video data securely over Video Sensor Networks (VSNs), since mobile handheld devices have limited resources in terms of processor clock speed and battery size, it is necessary to develop an efficient method to encrypt video data to meet the increasing demand for secure connections. Selective encryption methods can reduce the amount of computation needed while satisfying high-level security requirements. This is achieved by selecting an important part of the video data and encrypting it. In this paper, to ensure format compliance and security, we propose a special encryption method for H.264, which encrypts only the DC/ACs of I-macroblocks and the motion vectors of P-macroblocks. In particular, the proposed new selective encryption method exploits the error propagation property in an H.264 decoder and improves the collective performance by analyzing the tradeoff between the visual security level and the processing speed compared to typical selective encryption methods (i.e., I-frame, P-frame encryption, and combined I-/P-frame encryption). Experimental results show that the proposed method can significantly reduce the encryption workload without any significant degradation of visual security. PMID:25850068
Moore, A. C.; DeLucca, J. F.; Elliott, D. M.; Burris, D. L.
2016-01-01
This paper describes a new method, based on a recent analytical model (Hertzian biphasic theory (HBT)), to simultaneously quantify cartilage contact modulus, tension modulus, and permeability. Standard Hertzian creep measurements were performed on 13 osteochondral samples from three mature bovine stifles. Each creep dataset was fit for material properties using HBT. A subset of the dataset (N = 4) was also fit using Oyen's method and FEBio, an open-source finite element package designed for soft tissue mechanics. The HBT method demonstrated statistically significant sensitivity to differences between cartilage from the tibial plateau and cartilage from the femoral condyle. Based on the four samples used for comparison, no statistically significant differences were detected between properties from the HBT and FEBio methods. While the finite element method is considered the gold standard for analyzing this type of contact, the expertise and time required to setup and solve can be prohibitive, especially for large datasets. The HBT method agreed quantitatively with FEBio but also offers ease of use by nonexperts, rapid solutions, and exceptional fit quality (R2 = 0.999 ± 0.001, N = 13). PMID:27536012
An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle
NASA Astrophysics Data System (ADS)
Wang, Yue; Gao, Dan; Mao, Xuming
2018-03-01
A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that "requirement is the origin, design is the basis". So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail in the study, with the hope of providing experience for other civil jet product designs.
Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.
2015-01-01
Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB). Additional metrics of comparison can easily be incorporated into this type of analysis. By considering such a multifaceted approach, the top-performing models can easily be identified and considered for further research. The top-performing models can then provide a basis for future applications and explorations by scientists, engineers, managers, and practitioners to suit their own needs.
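As a loose illustration of the NN-SMS12L idea (standardizing logarithms of streamflow by monthly means and standard deviations and transferring the standardized series from a nearby gaged donor site), the sketch below assumes the monthly log-space means and standard deviations at the ungaged site are already available, for example from regional regressions. The function name, data, and statistics are hypothetical; this is not the USGS implementation.

```python
import numpy as np
import pandas as pd

def transfer_sms12l(donor_flow, months, donor_stats, target_stats):
    """Transfer daily streamflow from a donor gage to an ungaged site.

    donor_flow   : array of daily flows at the donor gage
    months       : array of month numbers (1-12), one per day
    donor_stats  : dict month -> (mean, sd) of log10 flow at the donor gage
    target_stats : dict month -> (mean, sd) of log10 flow at the ungaged site
                   (in practice estimated from regional regressions)
    """
    logq = np.log10(donor_flow)
    est = np.empty_like(logq)
    for m in range(1, 13):
        sel = months == m
        mu_d, sd_d = donor_stats[m]
        mu_t, sd_t = target_stats[m]
        z = (logq[sel] - mu_d) / sd_d          # standardize at the donor site
        est[sel] = mu_t + sd_t * z             # rescale to the ungaged site
    return 10.0 ** est

# Hypothetical example with synthetic data
rng = np.random.default_rng(0)
dates = pd.date_range("2020-01-01", "2020-12-31", freq="D")
months = dates.month.to_numpy()
donor = 10 ** (1.0 + 0.3 * rng.standard_normal(len(dates)))
donor_stats = {m: (1.0, 0.3) for m in range(1, 13)}
target_stats = {m: (0.8, 0.25) for m in range(1, 13)}
print(transfer_sms12l(donor, months, donor_stats, target_stats)[:5])
```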
Volumetric calibration of a plenoptic camera
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...
2018-02-01
Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
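A central ingredient in both calibration routes is a polynomial mapping, fit by least squares, that corrects reconstructed (distorted) 3D positions toward the known true positions on a dot target. The sketch below fits such a mapping with a full cubic polynomial in (x, y, z); it is a generic illustration rather than the authors' code, and the polynomial order and synthetic distortion are assumptions.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_terms(pts, order=3):
    """Design matrix of all monomials in (x, y, z) up to the given order."""
    cols = [np.ones(len(pts))]
    for d in range(1, order + 1):
        for combo in combinations_with_replacement(range(3), d):
            cols.append(np.prod(pts[:, list(combo)], axis=1))
    return np.column_stack(cols)

def fit_mapping(distorted, true, order=3):
    """Least-squares polynomial mapping from distorted to true 3D points."""
    A = poly_terms(distorted, order)
    coeffs, *_ = np.linalg.lstsq(A, true, rcond=None)   # one column per axis
    return coeffs

def apply_mapping(coeffs, pts, order=3):
    return poly_terms(pts, order) @ coeffs

# Synthetic test: a known smooth distortion recovered by the fitted mapping
rng = np.random.default_rng(0)
true_pts = rng.uniform(-1, 1, (500, 3))
distorted = true_pts + 0.05 * true_pts**3 + 0.02 * true_pts[:, [1, 2, 0]]**2
coeffs = fit_mapping(distorted, true_pts)
resid = apply_mapping(coeffs, distorted) - true_pts
print(np.abs(resid).max())   # small residual: the mapping absorbs the distortion
```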
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hendricks, J. Lynne; Whalen, Mike F.; Bodis, James R.; Martin, Katherine
1996-01-01
This article describes the commercial implementation of ultrasonic velocity imaging methods developed and refined at NASA Lewis Research Center on the Sonix c-scan inspection system. Two velocity imaging methods were implemented: thickness-based and non-thickness-based reflector plate methods. The article demonstrates capabilities of the commercial implementation and gives the detailed operating procedures required for Sonix customers to achieve optimum velocity imaging results. This commercial implementation of velocity imaging provides a 100x speed increase in scanning and processing over the lab-based methods developed at LeRC. The significance of this cooperative effort is that the aerospace and other materials development-intensive industries which use extensive ultrasonic inspection for process control and failure analysis will now have an alternative, highly accurate imaging method commercially available.
Design method of large-diameter rock-socketed pile with steel casing
NASA Astrophysics Data System (ADS)
Liu, Ming-wei; Fang, Fang; Liang, Yue
2018-02-01
There is a lack of design and calculation methods for large-diameter rock-socketed piles with steel casings. In connection with the "twelfth five-year plan" of the National Science & Technology Pillar Program of China on "Key technologies on the ports and wharfs constructions of the mountain canalization channels", this paper puts forward structural design requirements for the concrete, steel bar distribution and steel casing, together with a checking calculation method for the bearing capacity of the normal section of the pile and the maximum crack width at the bottom of the steel casing. The design method provides guidance for the design of large-diameter rock-socketed piles with steel casings.
NASA Technical Reports Server (NTRS)
Sitterley, T. E.; Zaitzeff, L. P.; Berge, W. A.
1972-01-01
Flight control and procedural task skill degradation, and the effectiveness of retraining methods were evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Fifteen experienced pilots were trained and then tested after 4 months either without the benefits of practice or with static rehearsal, dynamic rehearsal or with dynamic warmup practice. Performance on both the flight control and procedure tasks degraded significantly after 4 months. The rehearsal methods effectively countered procedure task skill degradation, while dynamic rehearsal or a combination of static rehearsal and dynamic warmup practice was required for the flight control tasks. The quality of the retraining methods appeared to be primarily dependent on the efficiency of visual cue reinforcement.
NASA Astrophysics Data System (ADS)
Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann
2013-06-01
In this work, the low rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximate algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks with exact NEGF solutions show (1) a very good agreement between approximated and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (a speed-up factor as high as 150 is observed). A non-recursive solution of the inelastic NEGF transport equations of a 1000 nm long resistor on standard hardware illustrates nicely the capability of this new method.
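The low-rank approximation concept itself can be illustrated independently of the NEGF machinery: a matrix is replaced by a truncated factorization that keeps only its dominant singular components, trading a small approximation error for large memory and time savings. The following generic sketch (not the authors' NEGF implementation) shows the compression step via a truncated SVD on a matrix with a rapidly decaying spectrum.

```python
import numpy as np

def low_rank_approx(A, rank):
    """Best rank-k approximation of A (in the 2-norm/Frobenius sense) via SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rank] * s[:rank] @ Vt[:rank]

# Build a matrix with fast singular-value decay, then compress it
rng = np.random.default_rng(0)
n = 400
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.exp(-np.arange(n) / 10.0)                 # rapidly decaying spectrum
A = (U * s) @ V.T

A20 = low_rank_approx(A, 20)
rel_err = np.linalg.norm(A - A20) / np.linalg.norm(A)
storage = 2 * n * 20 / n**2                      # factor storage vs. full matrix
print(rel_err, storage)                          # tiny error, ~10% of the storage
```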
NASA Technical Reports Server (NTRS)
Pappa, Richard S. (Technical Monitor); Black, Jonathan T.
2003-01-01
This report discusses the development and application of metrology methods called photogrammetry and videogrammetry that make accurate measurements from photographs. These methods have been adapted for the static and dynamic characterization of gossamer structures, as four specific solar sail applications demonstrate. The applications prove that high-resolution, full-field, non-contact static measurements of solar sails using dot projection photogrammetry are possible as well as full-field, non-contact, dynamic characterization using dot projection videogrammetry. The accuracy of the measurement of the resonant frequencies and operating deflection shapes that were extracted surpassed expectations. While other non-contact measurement methods exist, they are not full-field and require significantly more time to take data.
Why are Formal Methods Not Used More Widely?
NASA Technical Reports Server (NTRS)
Knight, John C.; DeJong, Colleen L.; Gibble, Matthew S.; Nakano, Luis G.
1997-01-01
Despite extensive development over many years and significant demonstrated benefits, formal methods remain poorly accepted by industrial practitioners. Many reasons have been suggested for this situation, such as claims that they extend the development cycle, that they require difficult mathematics, that inadequate tools exist, and that they are incompatible with other software packages. There is little empirical evidence that any of these reasons is valid. The research presented here addresses the question of why formal methods are not used more widely. The approach used was to develop a formal specification for a safety-critical application using several specification notations and assess the results in a comprehensive evaluation framework. The results of the experiment suggest that there remain many impediments to the routine use of formal methods.
Infrared coagulation: a new treatment for hemorrhoids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leicester, R.J.; Nicholls, R.J.; Mann, C.V.
Many methods, which have effectively reduced the number of patients requiring hospital admission, have been described for the outpatient treatment of hemorrhoids. However, complications have been reported, and the methods are often associated with unpleasant side effects. In 1977 Neiger et al. described a new method that used infrared coagulation, which produced minimal side effects. The authors have conducted a prospective, randomized trial to evaluate infrared coagulation compared with more traditional methods of treatment. The authors' results show that it may be more effective than injection sclerotherapy in treating non-prolapsing hemorrhoids and that it compares favorably with rubber band ligation in most prolapsing hemorrhoids. No complications occurred, and significantly fewer patients experienced pain after infrared coagulation (P < 0.001).
NASA Astrophysics Data System (ADS)
Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.
2015-07-01
Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amounts of data and the single-thread processing method, the query speed cannot meet the application requirements. On the other hand, the query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrate the query results. Testing and analysis on real datasets show that this method improves the query speed significantly.
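A minimal sketch of the cube-partition-and-parallel-aggregate idea follows. The data layout (x, y, timestamp columns), the simple point-count aggregate, and the use of a Python thread pool are all assumptions made for illustration; a production system would push this work into the database server as described above.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def aggregate_cube(args):
    """Count points falling inside one spatiotemporal cube (x, y, t bounds)."""
    points, (x0, x1, y0, y1, t0, t1) = args
    m = ((points[:, 0] >= x0) & (points[:, 0] < x1) &
         (points[:, 1] >= y0) & (points[:, 1] < y1) &
         (points[:, 2] >= t0) & (points[:, 2] < t1))
    return (x0, y0, t0), int(m.sum())

def parallel_aggregation(points, nx=4, ny=4, nt=4, workers=4):
    """Divide the (x, y, t) domain into cubes and aggregate each cube in parallel."""
    xs = np.linspace(points[:, 0].min(), points[:, 0].max(), nx + 1)
    ys = np.linspace(points[:, 1].min(), points[:, 1].max(), ny + 1)
    ts = np.linspace(points[:, 2].min(), points[:, 2].max(), nt + 1)
    cubes = [(points, (xs[i], xs[i + 1], ys[j], ys[j + 1], ts[k], ts[k + 1]))
             for i in range(nx) for j in range(ny) for k in range(nt)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(aggregate_cube, cubes))
    return dict(results)   # integrate per-cube results into one answer

# Hypothetical moving-object records: columns are (x, y, timestamp)
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, (100_000, 3))
counts = parallel_aggregation(pts)
print(len(counts), sum(counts.values()))
```

For CPU-bound aggregates in CPython, a process pool (or the database engine's own parallelism) would typically replace the thread pool sketched here.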
Barnette, Daniel W.
2002-01-01
The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.
Bushon, R.N.; Brady, A.M.; Likirdopulos, C.A.; Cireddu, J.V.
2009-01-01
Aims: The aim of this study was to examine a rapid method for detecting Escherichia coli and enterococci in recreational water. Methods and Results: Water samples were assayed for E. coli and enterococci by traditional and immunomagnetic separation/adenosine triphosphate (IMS/ATP) methods. Three sample treatments were evaluated for the IMS/ATP method: double filtration, single filtration, and direct analysis. Pearson's correlation analysis showed strong, significant, linear relations between IMS/ATP and traditional methods for all sample treatments; strongest linear correlations were with the direct analysis (r = 0.62 and 0.77 for E. coli and enterococci, respectively). Additionally, simple linear regression was used to estimate bacteria concentrations as a function of IMS/ATP results. The correct classification of water-quality criteria was 67% for E. coli and 80% for enterococci. Conclusions: The IMS/ATP method is a viable alternative to traditional methods for faecal-indicator bacteria. Significance and Impact of the Study: The IMS/ATP method addresses critical public health needs for the rapid detection of faecal-indicator contamination and has potential for satisfying US legislative mandates requiring methods to detect bathing water contamination in 2 h or less. Moreover, IMS/ATP equipment is considerably less costly and more portable than that for molecular methods, making the method suitable for field applications. © 2009 The Authors.
Riesgo, Ana; Pérez-Porro, Alicia R; Carmona, Susana; Leys, Sally P; Giribet, Gonzalo
2012-03-01
Transcriptome sequencing with next-generation sequencing technologies has the potential for addressing many long-standing questions about the biology of sponges. Transcriptome sequence quality depends on good cDNA libraries, which requires high-quality mRNA. Standard protocols for preserving and isolating mRNA often require optimization for unusual tissue types. Our aim was to assess the efficiency of two preservation modes, (i) flash freezing with liquid nitrogen (LN₂) and (ii) immersion in RNAlater, for the recovery of high-quality mRNA from sponge tissues. We also tested whether the long-term storage of samples at -80 °C affects the quantity and quality of mRNA. We extracted mRNA from nine sponge species and analysed the quantity and quality (A260/230 and A260/280 ratios) of mRNA according to preservation method, storage time, and taxonomy. The quantity and quality of mRNA depended significantly on the preservation method used (LN₂ outperforming RNAlater), the sponge species, and the interaction between them. When the preservation was analysed in combination with either storage time or species, the quantity and A260/230 ratio were both significantly higher for LN₂-preserved samples. Interestingly, individual comparisons for each preservation method over time indicated that both methods performed equally efficiently during the first month, but RNAlater lost efficiency in storage times longer than 2 months compared with flash-frozen samples. In summary, we find that for long-term preservation of samples, flash freezing is the preferred method. If LN₂ is not available, RNAlater can be used, but mRNA extraction during the first month of storage is advised. © 2011 Blackwell Publishing Ltd.
Calculation of parameters of technological equipment for deep-sea mining
NASA Astrophysics Data System (ADS)
Yungmeister, D. A.; Ivanov, S. E.; Isaev, A. I.
2018-03-01
The pressing problem of extracting minerals from the bottom of the world ocean is considered. On the ocean floor, three types of minerals are of interest: iron-manganese concretions (IMC), cobalt-manganese crusts (CMC) and sulphides. An analysis of known designs of machines and complexes for the extraction of IMC is performed. These machines are based on the principle of excavating the bottom surface; however, such methods do not always qualify as "gentle" methods of mining, and their ecological cleanliness does not meet the necessary requirements. Such machines also require the transmission of high electric power through the water column, which in some cases is a significant challenge. The authors analyzed the options for transporting the extracted mineral from the bottom. The paper describes the design of machines that collect IMC by vacuum suction. In this method, the gripping plates or drums are provided with cavities in which a vacuum is created, and individual IMC are attracted to the devices by the pressure drop. The operation of such machines can be described as a "gentle" technology for processing the bottom areas; their environmental impact is significantly lower than that of mechanical devices that rake up the IMC. The parameters of the device for lifting the IMC collected on the bottom are calculated. With serially produced Kevlar ropes up to 0.06 meters in diameter, a cycle time of up to 2 hours, and a lifting speed of up to 3 meters per second, a productivity of about 400,000 tons of IMC per year can be realized. The development of machines based on the calculated parameters and testing of their designs will create a unique complex for the extraction of minerals from oceanic deposits.
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Unlike traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires on the order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
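The mixture step of the hybrid strategy can be illustrated in isolation: each ensemble member carries an analytically known conditional Gaussian, and the full PDF is recovered as a mixture of those Gaussians (the kernel density estimate in the remaining subspace is omitted here). The toy sketch below evaluates such a Gaussian-mixture PDF from a handful of made-up ensemble means and covariances; it is not the authors' algorithm.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_pdf(x, means, covs, weights=None):
    """Evaluate a Gaussian-mixture PDF at points x.

    means, covs : per-ensemble-member conditional Gaussian parameters
    weights     : mixture weights (uniform over ensemble members by default)
    """
    K = len(means)
    if weights is None:
        weights = np.full(K, 1.0 / K)
    dens = np.zeros(len(x))
    for w, mu, cov in zip(weights, means, covs):
        dens += w * multivariate_normal.pdf(x, mean=mu, cov=cov)
    return dens

# Toy 2D example: a small ensemble already yields a smooth, skewed PDF
rng = np.random.default_rng(0)
means = [rng.normal(scale=2.0, size=2) for _ in range(20)]
covs = [np.diag(rng.uniform(0.2, 1.5, size=2)) for _ in range(20)]
grid = np.stack(np.meshgrid(np.linspace(-6, 6, 50),
                            np.linspace(-6, 6, 50)), axis=-1).reshape(-1, 2)
p = mixture_pdf(grid, means, covs)
print(p.sum() * (12 / 49) ** 2)   # approximately 1 (numerical normalization check)
```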
Robotics and Automation for Flight Deck Aircraft Servicing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chesser, J.B.; Draper, J.V.; Pin, F.G.
1999-03-01
One of the missions of the Future Aircraft Carriers Program is to investigate methods that would improve aircraft turnaround servicing activities on carrier decks. The major objectives and criteria for evaluating alternative aircraft servicing methods are to reduce workload requirements, turnaround times (TAT), and life-cycle costs (LCC). Technologies in the field of Robotics and Automation (R and A) have the potential to contribute significantly to these objectives. The objective of this study was to investigate the aircraft servicing functions on carrier decks that would offer the most significant potential payoff if improved by various R and A technologies. Improvement in this case means reducing workload, time and LCC. This objective was accomplished using a "bottom-up" formalized approach as described in the following.
Qian, Siyu; Yu, Ping; Hailey, David M; Wang, Ning
2016-04-01
To examine nursing time spent on administration of medications in a residential aged care (RAC) home, and to determine factors that influence the time to medicate a resident. Information on nursing time spent on medication administration is useful for planning and implementation of nursing resources. Nurses were observed over 12 morning medication rounds using a time-motion observational method and field notes, at two high-care units in an Australian RAC home. Nurses spent between 2.5 and 4.5 hours in a medication round. Administration of medication averaged 200 seconds per resident. Four factors had significant impact on medication time: number of types of medication, number of tablets taken by a resident, methods used by a nurse to prepare tablets and methods to provide tablets. Administration of medication consumed a substantial, though variable amount of time in the RAC home. Nursing managers need to consider the factors that influenced the nursing time required for the administration of medication in their estimation of nursing workload and required resources. To ensure safe medication administration for older people, managers should regularly assess the changes in the factors influencing nursing time on the administration of medication when estimating nursing workload and required resources. © 2015 John Wiley & Sons Ltd.
2013-01-01
Background: In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process involves multiple testing problems and requires the correction of the significance level. Methods: For each coding, a test of the nullity of the coefficient associated with the newly coded variable is computed. The selected coding corresponds to that associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results: The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion: The algorithms were implemented using R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
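A minimal permutation-based sketch of the underlying idea (not the CPMCGLM implementation, and using a one-way ANOVA F test for simplicity rather than the score test): for each candidate categorical coding of the exposure, compute a p-value, keep the best one, and calibrate its significance against the distribution of the best p-value obtained when the outcome is permuted. The data and cut points below are hypothetical.

```python
import numpy as np
from scipy import stats

def best_coding_pvalue(x, y, cut_sets, n_perm=1000, seed=0):
    """Permutation-corrected p-value for the best categorical coding of x.

    cut_sets : list of cut-point tuples; each defines one candidate coding
               of the quantitative exposure x into categories.
    """
    rng = np.random.default_rng(seed)

    def best_pvalue(y_vec):
        best = 1.0
        for cuts in cut_sets:
            groups = np.digitize(x, cuts)
            samples = [y_vec[groups == g] for g in np.unique(groups)]
            _, p = stats.f_oneway(*samples)
            best = min(best, p)
        return best

    observed = best_pvalue(y)
    # Null distribution of the minimum p-value across codings
    null = [best_pvalue(rng.permutation(y)) for _ in range(n_perm)]
    corrected = (1 + sum(p <= observed for p in null)) / (1 + n_perm)
    return observed, corrected

# Hypothetical data: outcome depends on the exposure only above a threshold
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)
y = 0.5 * (x > 6) + rng.standard_normal(300)
cut_sets = [(5,), (6,), (7,), (3, 7), (4, 6, 8)]
print(best_coding_pvalue(x, y, cut_sets))
```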
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chajon, Enrique; Dumas, Isabelle; Touleimat, Mahmoud B.Sc.
2007-11-01
Purpose: The purpose of this study was to evaluate the inverse planning simulated annealing (IPSA) software for the optimization of dose distribution in patients with cervix carcinoma treated with MRI-based pulsed-dose rate intracavitary brachytherapy. Methods and Materials: Thirty patients treated with a technique using a customized vaginal mold were selected. Dose-volume parameters obtained using the IPSA method were compared with the classic manual optimization method (MOM). Target volumes and organs at risk were delineated according to the Gynecological Brachytherapy Group/European Society for Therapeutic Radiology and Oncology recommendations. Because the pulsed dose rate program was based on clinical experience with low dose rate, dwell time values were required to be as homogeneous as possible. To achieve this goal, different modifications of the IPSA program were applied. Results: The first dose distribution calculated by the IPSA algorithm proposed a heterogeneous distribution of dwell time positions. The mean D90, D100, and V100 calculated with both methods did not differ significantly when the constraints were applied. For the bladder, doses calculated at the ICRU reference point derived from the MOM differed significantly from the doses calculated by the IPSA method (mean, 58.4 vs. 55 Gy, respectively; p = 0.0001). For the rectum, the doses calculated at the ICRU reference point were also significantly lower with the IPSA method. Conclusions: The inverse planning method provided fast and automatic solutions for the optimization of dose distribution. However, the straightforward use of IPSA generated significant heterogeneity in dwell time values. Caution is therefore recommended in the use of inverse optimization tools, along with clinically relevant study of new dosimetric rules.
NASA Technical Reports Server (NTRS)
Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.
2015-01-01
Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP), in collaboration with the Behavioral Health and Performance (BHP) Element, is conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within that volume. NASA is looking for innovative methods to unobtrusively collect NHV data without impacting crew time. Data required includes metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments methods for collecting such data exist yet many are obtrusive and require significant post-processing. Example technologies used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multiple camera filmography. However due to constraints of space operations many such methods are infeasible, such as inertial tracking systems which typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems which are large and require extensive calibration. However multiple technologies have not yet been applied to space operations for these explicit purposes. Two of these include 3-Dimensional Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems which allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkler, Jon; Booten, Chuck
Residential building codes and voluntary labeling programs are continually increasing the energy efficiency requirements of residential buildings. Improving a building's thermal enclosure and installing energy-efficient appliances and lighting can result in significant reductions in sensible cooling loads, leading to smaller air conditioners and shorter cooling seasons. However, due to fresh air ventilation requirements and internal gains, latent cooling loads are not reduced by the same proportion. Thus, it is becoming more challenging for conventional cooling equipment to control indoor humidity at part-load cooling conditions, and using conventional cooling equipment in a non-conventional building poses the potential risk of high indoor humidity. The objective of this project was to investigate the impact the chosen design condition has on the calculated part-load cooling moisture load, and to compare calculated moisture loads and the required dehumidification capacity to whole-building simulations. Procedures for sizing whole-house supplemental dehumidification equipment have yet to be formalized; however, minor modifications to current Air-Conditioner Contractors of America (ACCA) Manual J load calculation procedures are appropriate for calculating residential part-load cooling moisture loads. Though ASHRAE 1% DP design conditions are commonly used to determine the dehumidification requirements for commercial buildings, an appropriate DP design condition for residential buildings has not been investigated. Two methods for sizing supplemental dehumidification equipment were developed and tested. The first method closely followed Manual J cooling load calculations, whereas the second method made more conservative assumptions impacting both sensible and latent loads.
Rapid extraction of image texture by co-occurrence using a hybrid data structure
NASA Astrophysics Data System (ADS)
Clausi, David A.; Zhao, Yongping
2002-07-01
Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when applying the statistics. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers since, to achieve preferred computational speeds, the list should be sorted. An improvement on the GLCLL is to utilize a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% (σ = 3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
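The data-structure trade-off described above can be illustrated with a small sketch: instead of filling a full G×G co-occurrence matrix, only the non-zero co-occurring pairs are accumulated in a hash table (a Python dict here), and the texture statistics are computed from those sparse entries alone. This is a generic illustration of the GLCHS idea, not the authors' C implementation; the offset, quantization, and random test image are assumptions.

```python
import numpy as np
from collections import defaultdict

def cooccurrence_hash(img, dx=1, dy=0, levels=16):
    """Accumulate non-zero co-occurrence probabilities in a hash table."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    counts = defaultdict(int)
    h, w = q.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            counts[(q[y, x], q[y + dy, x + dx])] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def texture_features(probs):
    """Contrast, energy and entropy computed from the sparse probabilities only."""
    contrast = sum(p * (i - j) ** 2 for (i, j), p in probs.items())
    energy = sum(p * p for p in probs.values())
    entropy = -sum(p * np.log2(p) for p in probs.values())
    return contrast, energy, entropy

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (128, 128))      # stand-in for a Brodatz texture
probs = cooccurrence_hash(image)
print(texture_features(probs))
```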
NASA Technical Reports Server (NTRS)
LaBonte, Barry J.
2004-01-01
A small amount of work has been done on this project; the strategy to be adopted has been better defined, though no experimental work has been started. 1) Wavefront error signals: The best choice appears to be to use a lenslet array at a pupil image to produce defocused image pairs for each subaperture, then use the method proposed by Molodij et al. to produce subaperture curvature signals. Basically, this method samples a moderate number of locations in the image where the value of the image Laplacian is high, then takes the curvature signal from the difference of the Laplacians of the extrafocal images at those locations. The tip-tilt error is obtained from the temporal dependence of the first spatial derivatives of an in-focus image, at selected locations where these derivatives are significant. The wavefront tilt can be obtained from the full-aperture image. 2) Extrafocal image generation: The important aspect here is to generate symmetrically defocused images, with dynamically adjustable defocus. The adjustment is needed because larger defocus is required before the feedback loop is closed, and at times when the seeing is worse. It may be that the usual membrane mirror is the best choice, though other options should be explored. 3) Detector: Since the proposed sensor is to work on solar granulation, rather than a point source, an array detector for each subaperture is required. A fast CMOS camera such as that developed by the National Solar Observatory would be a satisfactory choice. 4) Processing: Processing requirements have not been defined in detail, though significantly fewer operations per cycle are required than for a correlation tracker.
Behn, Andrew; Zimmerman, Paul M; Bell, Alexis T; Head-Gordon, Martin
2011-12-13
The growing string method is a powerful tool in the systematic study of chemical reactions with theoretical methods which allows for the rapid identification of transition states connecting known reactant and product structures. However, the efficiency of this method is heavily influenced by the choice of interpolation scheme when adding new nodes to the string during optimization. In particular, the use of Cartesian coordinates with cubic spline interpolation often produces guess structures which are far from the final reaction path and require many optimization steps (and thus many energy and gradient calculations) to yield a reasonable final structure. In this paper, we present a new method for interpolating and reparameterizing nodes within the growing string method using the linear synchronous transit method of Halgren and Lipscomb. When applied to the alanine dipeptide rearrangement and a simplified cationic alkyl ring condensation reaction, a significant speedup in terms of computational cost is achieved (30-50%).
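The linear synchronous transit (LST) step can be sketched as follows: interatomic distances are interpolated linearly between reactant and product, and a Cartesian guess geometry is found by least-squares matching of those target distances, with a small Cartesian penalty to keep the problem well conditioned. This is a generic illustration of the Halgren-Lipscomb construction rather than the code used in the paper; the weighting, penalty strength, and toy geometries are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def lst_guess(R, P, f):
    """Linear synchronous transit structure a fraction f of the way from R to P.

    R, P : (N, 3) Cartesian coordinates of reactant and product.
    """
    def dists(X):
        d = X[:, None, :] - X[None, :, :]
        return np.sqrt((d ** 2).sum(-1))

    iu = np.triu_indices(len(R), k=1)
    target = (1 - f) * dists(R)[iu] + f * dists(P)[iu]      # interpolated distances
    x0 = ((1 - f) * R + f * P).ravel()                      # Cartesian start guess

    def objective(x):
        X = x.reshape(-1, 3)
        w = 1.0 / target ** 4                               # distance-based weights
        dist_term = np.sum(w * (dists(X)[iu] - target) ** 2)
        cart_term = 1e-3 * np.sum((x - x0) ** 2)            # small Cartesian penalty
        return dist_term + cart_term

    res = minimize(objective, x0, method="L-BFGS-B")
    return res.x.reshape(-1, 3)

# Toy example: three atoms whose geometry changes between two endpoints
R = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
P = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 2.0, 0.5]])
print(lst_guess(R, P, 0.5))
```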
Optimal Tikhonov regularization for DEER spectroscopy
NASA Astrophysics Data System (ADS)
Edwards, Thomas H.; Stoll, Stefan
2018-03-01
Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
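A minimal sketch of the regularization and of one well-performing α selection criterion, assuming a kernel matrix K and time trace S are given; the synthetic demo data and the particular AIC form used here are illustrative, not the paper's test set or code:

```python
# Tikhonov solution P = argmin ||K P - S||^2 + alpha^2 ||L P||^2 with a
# derivative operator L, plus an Akaike-style criterion for choosing alpha.
import numpy as np

def tikhonov(K, S, alpha, order=2):
    n = K.shape[1]
    L = np.diff(np.eye(n), n=order, axis=0)        # first- or second-derivative operator
    A = K.T @ K + alpha**2 * L.T @ L
    return np.linalg.solve(A, K.T @ S)             # regularized distance distribution

def aic(K, S, alpha, order=2):
    P = tikhonov(K, S, alpha, order)
    n = K.shape[1]
    L = np.diff(np.eye(n), n=order, axis=0)
    H = K @ np.linalg.solve(K.T @ K + alpha**2 * L.T @ L, K.T)   # influence matrix
    resid = S - K @ P
    m = len(S)
    return m * np.log(resid @ resid / m) + 2 * np.trace(H)

rng = np.random.default_rng(1)
K = rng.random((300, 80))                                        # stand-in kernel
P_true = np.exp(-0.5 * ((np.arange(80) - 40) / 6) ** 2)          # model distribution
S = K @ P_true + 0.01 * rng.standard_normal(300)                 # noisy trace
best_alpha = min([0.01, 0.1, 1.0, 10.0], key=lambda a: aic(K, S, a))
print("selected alpha:", best_alpha)
```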
[Review of research design and statistical methods in Chinese Journal of Cardiology].
Zhang, Li-jun; Yu, Jin-ming
2009-07-01
To evaluate the research design and the use of statistical methods in the Chinese Journal of Cardiology, we reviewed the research design and statistical methods in all of the original papers published in the Chinese Journal of Cardiology from December 2007 to November 2008. The most frequently used research designs were cross-sectional design (34%), prospective design (21%), and experimental design (25%). Of all the articles, 49 (25%) used incorrect statistical methods, 29 (15%) lacked a needed statistical analysis, and 23 (12%) contained inconsistencies in the description of methods. There were significant differences between the different statistical methods (P < 0.001). The rate of correct use of multifactor analysis was low, and repeated-measurement data were not analysed with repeated-measures methods. Many problems exist in the Chinese Journal of Cardiology. Better research design and correct use of statistical methods are still needed, and stricter review by statisticians and epidemiologists is also required to improve the quality of the literature.
NASA Astrophysics Data System (ADS)
Salinas, P.; Pavlidis, D.; Xie, Z.; Osman, H.; Pain, C. C.; Jackson, M. D.
2018-01-01
We present a new, high-order, control-volume-finite-element (CVFE) method for multiphase porous media flow with discontinuous 1st-order representation for pressure and discontinuous 2nd-order representation for velocity. The method has been implemented using unstructured tetrahedral meshes to discretize space. The method locally and globally conserves mass. However, unlike conventional CVFE formulations, the method presented here does not require the use of control volumes (CVs) that span the boundaries between domains with differing material properties. We demonstrate that the approach accurately preserves discontinuous saturation changes caused by permeability variations across such boundaries, allowing efficient simulation of flow in highly heterogeneous models. Moreover, accurate solutions are obtained at significantly lower computational cost than using conventional CVFE methods. We resolve a long-standing problem associated with the use of classical CVFE methods to model flow in highly heterogeneous porous media.
The least-squares finite element method for low-mach-number compressible viscous flows
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao
1994-01-01
The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, for which incompressible flow is the limiting case. Conventional approaches require special treatments for low-speed flow calculations: finite difference and finite volume methods are based on the use of the staggered grid or the preconditioning technique, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that no such difficulty exists for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.
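To make the cited algebraic property concrete, the following illustrative snippet (with a random stand-in operator, not an actual LSFEM discretization) shows that least-squares normal equations are symmetric and positive-(semi)definite and can therefore be solved with conjugate gradients:

```python
# The least-squares system matrix A^T A is symmetric and positive definite
# whenever the discrete operator A has full column rank.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import cg

A = sparse_random(200, 120, density=0.1, format="csr", random_state=0)  # stand-in operator
b = np.random.default_rng(0).standard_normal(200)

N = (A.T @ A).toarray()                          # normal-equations matrix
print(np.allclose(N, N.T))                       # symmetric
print(np.all(np.linalg.eigvalsh(N) > -1e-10))    # positive (semi-)definite

u, info = cg(A.T @ A, A.T @ b)                   # solve the SPD system by conjugate gradients
print("CG converged:", info == 0)
```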
Umari, A.M.; Gorelick, S.M.
1986-01-01
It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time-marching method increase with the number of nodes in the discretized problem. The potentially greater accuracy of the META method, and the associated greater reliability through use of the matrix condition number, have to be weighed against the increased relative computational and storage requirements of this approach as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of the time steps used in the latter. A numerical example illustrates application of the META method to a sample groundwater-flow problem. (Author's abstract)
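A minimal sketch of the META idea for a spatially discretized linear system dh/dt = A h + q with constant forcing, not the authors' symmetric and non-symmetric algorithms; the toy two-node matrix is an assumption for demonstration:

```python
# Advance the discretized flow equation directly to any future time t using the
# closed-form solution h(t) = expm(A t) h0 + A^{-1} (expm(A t) - I) q,
# with no intermediate time steps.
import numpy as np
from scipy.linalg import expm

def advance(A, h0, q, t):
    E = expm(A * t)
    return E @ h0 + np.linalg.solve(A, (E - np.eye(len(h0))) @ q)

A = np.array([[-2.0, 1.0], [1.0, -2.0]])   # toy 2-node conductance matrix (assumed)
print(advance(A, h0=np.array([1.0, 0.0]), q=np.array([0.1, 0.1]), t=5.0))
# A rough indicator of how many decimal digits could be lost in the exponentiation:
print(np.log10(np.linalg.cond(expm(A * 5.0))))
```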
Anesthetic Requirement is Increased in Redheads
Liem, Edwin B.; Lin, Chun–Ming; Suleman, Mohammad–Irfan; Doufas, Anthony G.; Gregg, Ronald G.; Veauthier, Jacqueline M.; Loyd, Gary
2005-01-01
Background: Age and body temperature alter inhalational anesthetic requirement; however, no human genotype has been associated with inhalational anesthetic requirement. There is an anecdotal impression that anesthetic requirement is increased in redheads. Furthermore, red hair results from distinct mutations of the melanocortin-1 receptor. We thus tested the hypothesis that the requirement for the volatile anesthetic desflurane is greater in natural redheads than in dark-haired women. Methods: We studied healthy women with bright red (n=10) or dark (n=10) hair. Blood was sampled for subsequent analyses of melanocortin-1 receptor alleles. Anesthesia was induced with sevoflurane and maintained with desflurane randomly set at an end-tidal concentration between 5.5 and 7.5%. After an equilibration period, a noxious electrical stimulation (100 Hz, 70 mA) was transmitted through bilateral intradermal needles. If the volunteer moved in response to stimulation, desflurane was increased by 0.5%; otherwise it was decreased by 0.5%. This was continued until volunteers "crossed over" from movement to non-movement (or vice versa) four times. Individual logistic regression curves were used to determine desflurane requirement (P50). Desflurane requirements in the two groups were compared using the Mann-Whitney nonparametric two-sample test; P < 0.05 was considered statistically significant. Results: The desflurane requirement in redheads (6.2 volume-percent [95% CI, 5.9-6.5]) was significantly greater than in dark-haired women (5.2 volume-percent [95% CI, 4.9-5.5]; P = 0.0004). Nine of 10 redheads were either homozygous or compound heterozygotes for mutations on the melanocortin-1 receptor gene. Conclusions: Red hair appears to be a distinct phenotype linked to anesthetic requirement in humans that can also be traced to a specific genotype. PMID:15277908
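The P50 determination can be sketched as an individual logistic fit of the move/no-move responses against end-tidal desflurane; the crossover data below are hypothetical, not the study's measurements:

```python
# Fit a logistic response curve for one volunteer and read off the desflurane
# concentration at which the probability of non-movement is 50% (P50).
import numpy as np
from scipy.optimize import curve_fit

def logistic(c, c50, slope):
    return 1.0 / (1.0 + np.exp(-slope * (c - c50)))

# hypothetical crossover data: end-tidal concentration (vol%), 1 = no movement
conc    = np.array([5.5, 6.0, 6.5, 6.0, 5.5, 6.0, 6.5, 6.0])
no_move = np.array([0,   1,   1,   0,   1,   0,   1,   1  ])

(p50, slope), _ = curve_fit(logistic, conc, no_move, p0=[6.0, 5.0])
print(f"desflurane requirement (P50): {p50:.2f} vol%")
```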
An evolving-requirements technology assessment process for advanced propulsion concepts
NASA Astrophysics Data System (ADS)
McClure, Erin Kathleen
The following dissertation investigates the development of a methodology suitable for the evaluation of advanced propulsion concepts. At early stages of development, both the future performance of these concepts and their requirements are highly uncertain, making it difficult to forecast their future value. Developing advanced propulsion concepts requires a huge investment of resources. The methodology was developed to enhance the decision-makers' understanding of the concepts, so that they could mitigate the risks associated with developing such concepts. A systematic methodology to identify potential advanced propulsion concepts and assess their robustness is therefore necessary to reduce this risk. Existing advanced design methodologies have evaluated the robustness of technologies or concepts to variations in requirements, but they are not suitable for evaluating a large number of dissimilar concepts. Variations in requirements have been shown to impact the development of advanced propulsion concepts, and any method designed to evaluate these concepts must incorporate the possible variations of the requirements into the assessment. In order to do so, a methodology was formulated to be capable of accounting for two aspects of the problem. First, it had to systematically identify a probabilistic distribution for the future requirements. Such a distribution would allow decision-makers to quantify the uncertainty introduced by variations in requirements. Second, the methodology must be able to assess the robustness of the propulsion concepts as a function of that distribution. This dissertation describes in depth these enabling elements and proceeds to synthesize them into a new method, the Evolving Requirements Technology Assessment (ERTA). As a proof of concept, the ERTA method was used to evaluate and compare advanced propulsion systems that will be capable of powering a hurricane tracking, High Altitude, Long Endurance (HALE) unmanned aerial vehicle (UAV). The use of the ERTA methodology to assess HALE UAV propulsion concepts demonstrated that potential variations in requirements do significantly impact the assessment and selection of propulsion concepts. The proof of concept also demonstrated that traditional forecasting techniques, such as cross-impact analysis, could be used to forecast the requirements for advanced propulsion concepts probabilistically. "Fitness", a measure of relative goodness, was used to evaluate the concepts. Finally, stochastic optimizations were used to evaluate the propulsion concepts across the range of requirement sets that were considered.
Yin, Delu; Yin, Tao; Yang, Huiming; Xin, Qianqian; Wang, Lihong; Li, Ninyan; Ding, Xiaoyan; Chen, Bowen
2016-12-07
A shortage of community health professionals has been a crucial issue hindering the development of community health services (CHS). Various methods have been established to calculate health workforce requirements. This study aimed to use an economic-research-based approach to calculate the number of community health professionals required to provide community health services in the Xicheng District of Beijing and then assess current staffing levels against this ideal. Using questionnaires, we collected relevant data from 14 community health centers in the Xicheng District, including resident population, number of different health services provided, and service volumes. Through 36 interviews with family doctors, nurses, and public health workers, and six focus groups, we were able to calculate the person-time (equivalent value) required for each community health service. Field observations were conducted to verify the durations. In the 14 community health centers in Xicheng District, 1752 health workers were found in our four categories, serving a population of 1.278 million. Total demand for community health services outstripped supply for doctors, nurses, and public health workers, but not for other professionals. The method suggested that to properly serve the study population an additional 64 family doctors, 40 nurses, and 753 public health workers would be required. Our calculations indicate that significant numbers of new health professionals are required to deliver community health services. We established time standards in minutes (equivalent value) for each community health service activity, which could be applied elsewhere in China by government planners and civil society advocates.
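The workforce calculation can be illustrated with placeholder numbers (the service list, volumes, minutes per service, and productive-time assumptions below are not the study's data): required staff per category is the total annual person-time demanded divided by the productive working time available per worker.

```python
# Required full-time equivalents = sum(service volume * minutes per service)
# per staff category, divided by productive minutes per worker per year.
services = {
    # service name: (annual volume, minutes of staff time per service, staff category)
    "chronic disease follow-up": (250_000, 15, "family doctor"),
    "home nursing visit":        (40_000,  40, "nurse"),
    "health record update":      (300_000, 10, "public health worker"),
}
minutes_per_worker_year = 8 * 60 * 250 * 0.8   # 8 h/day, 250 days, 80% productive (assumed)

required = {}
for volume, minutes, category in services.values():
    required[category] = required.get(category, 0) + volume * minutes

for category, total_minutes in required.items():
    print(category, round(total_minutes / minutes_per_worker_year, 1), "FTEs required")
```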
Sagayama, Hiroyuki; Kondo, Emi; Shiose, Keisuke; Yamada, Yosuke; Motonaga, Keiko; Ouchi, Shiori; Kamei, Akiko; Osawa, Takuya; Nakajima, Kohei; Takahashi, Hideyuki; Higaki, Yasuki; Tanaka, Hiroaki
2017-01-01
Estimated energy requirements (EERs) are important for sports based on body weight classifications to aid in weight management. The basis for establishing EERs varies and includes self-reported energy intake (EI), predicted energy expenditure, and measured daily energy expenditure. Currently, however, no studies have been performed with male wrestlers using the highly accurate and precise doubly labeled water (DLW) method to estimate energy and fluid requirements. The primary aim of this study was to compare total energy expenditure (TEE) measured by the DLW method with self-reported EI in collegiate wrestlers during a normal training period. The secondary aims were to measure the water turnover and the physical activity level (PAL) of the athletes, and to examine the accuracy of two currently used equations for predicting EER. Ten healthy males (age, 20.4±0.5 y) belonging to the East-Japan college league participated in this study. TEE was measured using the DLW method, and EI was assessed with self-reported dietary records for ~1 wk. There was a significant difference between TEE (17.9±2.5 MJ•d⁻¹ [4,283±590 kcal•d⁻¹]) and self-reported EI (14.4±3.3 MJ•d⁻¹ [3,446±799 kcal•d⁻¹]), a difference of 19%. The water turnover was 4.61±0.73 L•d⁻¹. The measured PAL (2.6±0.3) was higher than the two predicted values during the training season, and thus the two EER prediction equations produced underestimated values relative to DLW. We found that previous EERs underestimated the requirements of collegiate wrestlers and that those estimates should be revised.
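The headline figures follow from simple arithmetic; the basal metabolic rate used below for the PAL check is an assumed placeholder, not a value reported in the study:

```python
# Percent under-reporting of intake relative to measured expenditure, and the
# physical activity level PAL = TEE / BMR.
tee_mj, ei_mj = 17.9, 14.4
underreport = (tee_mj - ei_mj) / tee_mj * 100
print(f"self-reported EI under-estimates TEE by about {underreport:.0f}%")   # ~19-20%

bmr_mj = 6.9                           # assumed basal metabolic rate, MJ/day
print(f"PAL = {tee_mj / bmr_mj:.1f}")  # ~2.6, consistent with the reported value
```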
Mesh Convergence Requirements for Composite Damage Models
NASA Technical Reports Server (NTRS)
Davila, Carlos G.
2016-01-01
The ability of the finite element method to accurately represent the response of objects with intricate geometry and loading renders it an extremely versatile technique for structural analysis. Finite element analysis is routinely used in industry to calculate deflections, stress concentrations, natural frequencies, buckling loads, and much more. The method works by discretizing complex problems into smaller, simpler approximations that are valid over small uniform domains. For common analyses, the maximum size of the elements that can be used can often be determined by experience. However, to verify the quality of a solution, analyses with several levels of mesh refinement should be performed to ensure that the solution has converged. In recent years, the finite element method has been used to calculate the resistance of structures, and in particular that of composite structures. A number of techniques such as cohesive zone modeling, the virtual crack closure technique, and continuum damage modeling have emerged that can be used to predict cracking, delaminations, fiber failure, and other composite damage modes that lead to structural collapse. However, damage models present mesh refinement requirements that are not well understood. In this presentation, we examine different mesh refinement issues related to the representation of damage in composite materials. Damage process zone sizes and their corresponding mesh requirements will be discussed. The difficulties of modeling discontinuities and the associated need for regularization techniques will be illustrated, and some unexpected element size constraints will be presented. Finally, some of the difficulties in constructing models of composite structures capable of predicting transverse matrix cracking will be discussed. It will be shown that predicting the initiation and propagation of transverse matrix cracks, their density, and their saturation may require models that are significantly more refined than those that have been contemplated in the past.
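One commonly used estimate for the process-zone mesh requirement is the cohesive zone length l_cz ≈ E·Gc/σ_c², with several elements required across that length; the material values below are placeholders, not data from the presentation:

```python
# Estimate the damage process zone length and the corresponding element size
# needed to resolve it with a chosen number of elements.
E       = 9.0e9       # transverse modulus, Pa (placeholder)
Gc      = 280.0       # fracture toughness, J/m^2 (placeholder)
sigma_c = 60.0e6      # strength, Pa (placeholder)
n_elem  = 3           # elements desired across the process zone

l_cz = E * Gc / sigma_c**2
print(f"process zone length ~ {l_cz*1e3:.2f} mm, element size <= {l_cz/n_elem*1e3:.2f} mm")
```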
Easy Leaf Area: Automated digital image analysis for rapid and accurate measurement of leaf area.
Easlon, Hsien Ming; Bloom, Arnold J
2014-07-01
Measurement of leaf areas from digital photographs has traditionally required significant user input unless backgrounds are carefully masked. Easy Leaf Area was developed to batch process hundreds of Arabidopsis rosette images in minutes, removing background artifacts and saving results to a spreadsheet-ready CSV file. • Easy Leaf Area uses the color ratios of each pixel to distinguish leaves and calibration areas from their background and compares leaf pixel counts to a red calibration area to eliminate the need for camera distance calculations or manual ruler scale measurement that other software methods typically require. Leaf areas estimated by this software from images taken with a camera phone were more accurate than ImageJ estimates from flatbed scanner images. • Easy Leaf Area provides an easy-to-use method for rapid measurement of leaf area and nondestructive estimation of canopy area from digital images.
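A hedged sketch of the color-ratio calibration idea (not the Easy Leaf Area source code; the thresholds, the red calibration-square area, and the image path are assumptions for illustration):

```python
# Classify pixels as leaf or red calibration area by simple color ratios, then
# scale the leaf pixel count by the known calibration-square area.
import numpy as np
import imageio.v3 as iio

def leaf_area_cm2(image_path, red_area_cm2=4.0):
    img = iio.imread(image_path).astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    leaf = (g > 1.1 * r) & (g > 1.1 * b)       # "greener than red and blue" -> leaf
    red  = (r > 1.5 * g) & (r > 1.5 * b)       # strongly red -> calibration square
    return leaf.sum() / max(red.sum(), 1) * red_area_cm2

# usage (hypothetical file name): print(leaf_area_cm2("rosette_with_red_square.jpg"))
```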
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Stabilization of glucose-oxidase in the graphene paste for screen-printed glucose biosensor
NASA Astrophysics Data System (ADS)
Pepłowski, Andrzej; Janczak, Daniel; Jakubowska, Małgorzata
2015-09-01
Various methods and materials for enzyme stabilization within a screen-printed graphene sensor were analyzed. The main goal was to develop a technology allowing immediate printing of the biosensors in a single printing process. The factors considered were: toxicity of the materials used, ability of the material to be screen-printed (squeezed through the printing mesh), and the temperatures required in the fabrication process. Performance of the examined sensors was measured using the amperometric method, and appropriate analysis of the measurements was then conducted. The analysis results were then compared with the medical requirements. The parameters calculated were the correlation coefficient between the concentration of the analyte and the measured electrical current (0.986) and the variation coefficient for the particular concentrations of the analyte used as calibration points. Variation of the measured values was significant only in ranges close to 0, decreasing for the concentrations of clinical importance. These outcomes justify further development of graphene-based biosensors fabricated through printing techniques.
NASA Astrophysics Data System (ADS)
Li, Ruoping; Yang, Jingliang; Han, Junhe; Liu, Junhui; Huang, Mingju
2017-04-01
A Raman method employing a silver nanoparticle (Ag NP) monolayer film as a surface-enhanced Raman scattering (SERS) substrate was presented to rapidly detect melamine in milk. The Ag NPs with 80 nm diameter were modified by polyvinylpyrrolidone to improve their uniformity and chemical stability. The treatment procedure for liquid milk required only the addition of acetic acid and centrifugation, and the required time is less than 15 min. The Ag NP monolayer film significantly enhanced the Raman signal from melamine and allowed experimentally reproducible determination of the melamine concentration. A good linear relationship (R² = 0.994) between the concentration and the Raman peak intensity of melamine at 681 cm⁻¹ was obtained for melamine concentrations between 0.10 mg L⁻¹ and 5.00 mg L⁻¹. This implies that this method can detect melamine concentrations below 1.0 mg L⁻¹, the concentration currently considered unsafe.
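The reported calibration can be illustrated with a simple linear fit; the intensity values below are hypothetical stand-ins, not the paper's measurements:

```python
# Fit peak intensity at 681 cm^-1 against melamine concentration and check R^2.
import numpy as np

conc      = np.array([0.10, 0.50, 1.00, 2.00, 5.00])        # mg L^-1
intensity = np.array([120., 480., 1010., 1950., 4900.])     # arbitrary units (hypothetical)

slope, intercept = np.polyfit(conc, intensity, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((intensity - pred)**2) / np.sum((intensity - intensity.mean())**2)
print(f"I = {slope:.0f}*c + {intercept:.0f},  R^2 = {r2:.3f}")
```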
A Bitslice Implementation of Anderson's Attack on A5/1
NASA Astrophysics Data System (ADS)
Bulavintsev, Vadim; Semenov, Alexander; Zaikin, Oleg; Kochemazov, Stepan
2018-03-01
The A5/1 keystream generator is a part of the Global System for Mobile Communications (GSM) protocol, employed in cellular networks all over the world. Its cryptographic resistance has been extensively analyzed in dozens of papers. However, almost all corresponding methods either rely on specialized hardware or require an extensive preprocessing stage and significant amounts of memory. In the present study, a bitslice variant of Anderson's attack on A5/1 is implemented. It requires very little computer memory and no preprocessing. Moreover, the attack can be made even more efficient by harnessing the computing power of modern Graphics Processing Units (GPUs). As a result, using commonly available GPUs this method can quite efficiently recover the secret key using only 64 bits of keystream. To test the performance of the implementation, a volunteer computing project was launched, in which ten instances of A5/1 cryptanalysis were successfully solved in a single week.
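A sketch of the bitslicing idea: 64 independent guesses are packed one per bit position of a 64-bit word, so a single XOR or AND advances all 64 instances at once. The register length and tap positions below are used only for illustration and are not claimed to reproduce the full A5/1 clocking scheme:

```python
# One word per register bit; bit j of each word belongs to parallel instance j.
import numpy as np

WORD = np.uint64
REG_LEN, TAPS = 19, (13, 16, 17, 18)     # illustrative A5/1-style register geometry

def bitsliced_lfsr_step(state):
    """Advance 64 LFSR instances in parallel: a handful of XORs and a list shift."""
    feedback = WORD(0)
    for t in TAPS:
        feedback ^= state[t]              # XOR of the tap bits across all 64 instances
    return [feedback] + state[:-1]        # shift every instance by one position

def bitsliced_majority(a, b, c):
    """Majority clocking decision computed for 64 instances in parallel."""
    return (a & b) | (a & c) | (b & c)

rng = np.random.default_rng(0)
state = [WORD(int(x)) for x in rng.integers(0, 2**32, size=REG_LEN)]
state = bitsliced_lfsr_step(state)
print(format(int(state[0]), "064b"))      # feedback bits of all 64 parallel instances
```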
Banasiuk, Rafał; Frackowiak, Joanna E; Krychowiak, Marta; Matuszewska, Marta; Kawiak, Anna; Ziabka, Magdalena; Lendzion-Bielun, Zofia; Narajczyk, Magdalena; Krolicka, Aleksandra
2016-01-01
A fast, economical, and reproducible method for nanoparticle synthesis has been developed in our laboratory. The reaction is performed in an aqueous environment and utilizes light emitted by commercially available 1 W light-emitting diodes (λ = 420 nm) as the catalyst. This method does not require nanoparticle seeds or toxic chemicals. The irradiation process is carried out for a period of up to 10 minutes, significantly reducing the time required for synthesis as well as the environmental impact. By modulating various reaction parameters, silver nanoparticles that were predominantly either spherical or cubic were obtained. The produced nanoparticles demonstrated strong antimicrobial activity toward the examined bacterial strains. Additionally, testing the effect of silver nanoparticles on a human keratinocyte cell line and human peripheral blood mononuclear cells revealed that their cytotoxicity may be limited by modulating the employed concentrations of nanoparticles. PMID:26855570
The Evolution of Genetics: Alzheimer's and Parkinson's Diseases.
Singleton, Andrew; Hardy, John
2016-06-15
Genetic discoveries underlie the majority of current thinking in neurodegenerative disease. This work has been driven by the significant gains made in identifying causal mutations; however, the translation of genetic causes of disease into pathobiological understanding remains a challenge. The application of a second generation of genetic methods allows the dissection of moderate and mild genetic risk factors for disease. This requires new thinking in two key areas: what constitutes proof of pathogenicity, and how we translate these findings into biological understanding. Here we describe the progress and ongoing evolution in genetics. We describe a view that rejects the tradition that genetic proof has to be absolute before functional characterization, and that centers instead on a multi-dimensional approach integrating genetics, reference data, and functional work. We also argue that these challenges cannot be efficiently met by traditional hypothesis-driven methods, and that high-content, system-wide efforts are required. Published by Elsevier Inc.
Faulk, Clinton E.; Harrell, Kelly M.; Lawson, Luan E.; Moore, Daniel P.
2016-01-01
Background. A Required Fourth-Year Medical Student Physical Medicine and Rehabilitation (PM&R) Clerkship was found to increase students' knowledge of PM&R; however the students' overall rotation evaluations were consistently lower than the other 8 required clerkships at the medical school. Objective. To describe the impact of a revised curriculum based upon Entrustable Professional Activities and focusing on basic pain management, musculoskeletal care, and neurology. Setting. Academic Medical Center. Participants. 73 fourth-year medical students. Methods. The curriculum changes included a shift in the required readings from rehabilitation specific topics toward more general content in the areas of clinical neurology and musculoskeletal care. Hands-on workshops on neurological and musculoskeletal physical examination techniques, small group case-based learning, an anatomy clinical correlation lecture, and a lecture on pain management were integrated into the curriculum. Main Outcome Measurements. Student evaluations of the clerkship. Results. Statistically significant improvements were found in the students' evaluations of usefulness of lecturers, development of patient interviewing skills, and diagnostic and patient management skills (p ≤ 0.05). Conclusions. This study suggests that students have a greater satisfaction with a required PM&R clerkship when lecturers utilize a variety of pedagogic methods to teach basic pain, neurology and musculoskeletal care skills in the rehabilitation setting rather than rehabilitation specific content. PMID:28025624
Design and Development of a Regenerative Blower for EVA Suit Ventilation
NASA Technical Reports Server (NTRS)
Izenson, Michael G.; Chen, Weibo; Hill, Roger W.; Phillips, Scott D.; Paul, Heather L.
2011-01-01
Ventilation subsystems in future space suits require a dedicated ventilation fan. The unique requirements for the ventilation fan - including stringent safety requirements and the ability to increase output to operate in buddy mode - combine to make a regenerative blower an attractive choice. This paper describes progress in the design, development, and testing of a regenerative blower designed to meet requirements for ventilation subsystems in future space suits. We have developed analysis methods for the blower's complex internal flows and identified impeller geometries that enable significant improvements in blower efficiency. We verified these predictions by test, measuring aerodynamic efficiencies of 45% at operating conditions that correspond to the ventilation fan's design point. We have developed a compact motor/controller to drive the blower efficiently at low rotating speed (4500 rpm). Finally, we have assembled a low-pressure oxygen test loop to demonstrate the blower's reliability under prototypical conditions.
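The quoted aerodynamic efficiency amounts to delivered flow power divided by shaft power; the operating-point values below are assumptions chosen for illustration, and only the 4500 rpm speed comes from the abstract:

```python
# Back-of-the-envelope aerodynamic efficiency = (flow * pressure rise) / (torque * omega).
flow_m3s  = 0.003      # volumetric flow, m^3/s (assumed)
dp_pa     = 2000.0     # pressure rise, Pa (assumed)
torque_nm = 0.028      # shaft torque, N*m (assumed)
speed_rpm = 4500.0     # rotating speed from the abstract

omega = speed_rpm * 2 * 3.141592653589793 / 60
aero_eff = flow_m3s * dp_pa / (torque_nm * omega)
print(f"aerodynamic efficiency ~ {aero_eff*100:.0f}%")
```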