The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children.
Djalal, Farah Mutiasari; Ameel, Eef; Storms, Gert
2016-01-01
An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method that is based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and that requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to assumptions held in earlier studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided: the two variables are often significantly correlated in older children and even in adults. PMID:27322371
How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods
2007-08-01
Attack Trees for Modeling and Analysis 10 2.8 Misuse and Abuse Cases 10 2.9 Formal Methods 11 2.9.1 Software Cost Reduction 12 2.9.2 Common...modern or efficient techniques. • Requirements analysis typically is either not performed at all (identified requirements are directly specified without...any analysis or modeling) or analysis is restricted to functional requirements and ignores quality requirements, other nonfunctional requirements
Attention, Working Memory, and Grammaticality Judgment in Typical Young Adults
ERIC Educational Resources Information Center
Smith, Pamela A.
2011-01-01
Purpose: To examine resource allocation and sentence processing, this study examined the effects of auditory distraction on grammaticality judgment (GJ) of sentences varied by semantics (reversibility) and short-term memory requirements. Method: Experiment 1: Typical young adult females (N = 60) completed a whole-sentence GJ task in distraction…
Direct imaging of small scatterers using reduced time dependent data
NASA Astrophysics Data System (ADS)
Cakoni, Fioralba; Rezac, Jacob D.
2017-06-01
We introduce qualitative methods for locating small objects using time dependent acoustic near field waves. These methods have reduced data collection requirements compared to typical qualitative imaging techniques. In particular, we only collect scattered field data in a small region surrounding the location from which an incident field was transmitted. The new methods are partially theoretically justified and numerical simulations demonstrate their efficacy. We show that these reduced data techniques give comparable results to methods which require full multistatic data and that these time dependent methods require less scattered field data than their time harmonic analogs.
Research notes : alternate method for pothole patching.
DOT National Transportation Integrated Search
1998-09-01
Throw-and-roll pothole patches will typically fail before the pavement is resurfaced or rehabilitated. Alternatively, semi-permanent repairs are time consuming and require more people and added lane closure time. An alternate method is spray ...
An Elephant in the Room: Bias in Evaluating a Required Quantitative Methods Course
ERIC Educational Resources Information Center
Fletcher, Joseph F.; Painter-Main, Michael A.
2014-01-01
Undergraduate Political Science programs often require students to take a quantitative research methods course. Such courses are typically among the most poorly rated. This can be due, in part, to the way in which courses are evaluated. Students are generally asked to provide an overall rating, which, in turn, is widely used by students, faculty,…
Methods to Register Models and Input/Output Parameters for Integrated Modeling
Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework’s functionality to bridge the gap between the user’s kno...
Detecting peroxiredoxin hyperoxidation by one-dimensional isoelectric focusing.
Cao, Zhenbo; Bulleid, Neil J
The activity of typical 2-Cys peroxiredoxins (Prxs) can be regulated by hyperoxidation, with a consequent loss of redox activity. Here we developed a simple assay to monitor the level of hyperoxidation of different typical 2-Cys Prxs simultaneously. This assay only requires standard equipment and can compare different samples on the same gel. It requires much less time than conventional 2D gels and gives more information than Western blotting with an antibody specific for hyperoxidized peroxiredoxin. This method could also be used to monitor protein modifications that involve a charge difference, such as phosphorylation.
Current methods invariably require sample concentration, typically solid-phase extraction, so as to be amenable to measurement at ambient concentration levels. Such methods (i.e. EPA Method 544) are only validated for a limited number of the known variants where standards are ...
Robust Requirements Tracing via Internet Search Technology: Improving an IV and V Technique. Phase 2
NASA Technical Reports Server (NTRS)
Hayes, Jane; Dekhtyar, Alex
2004-01-01
There are three major objectives to this phase of the work. (1) Improvement of Information Retrieval (IR) methods for Independent Verification and Validation (IV&V) requirements tracing. Information Retrieval methods are typically developed for very large (order of millions - tens of millions and more documents) document collections and therefore, most successfully used methods somewhat sacrifice precision and recall in order to achieve efficiency. At the same time typical IR systems treat all user queries as independent of each other and assume that relevance of documents to queries is subjective for each user. The IV&V requirements tracing problem has a much smaller data set to operate on, even for large software development projects; the set of queries is predetermined by the high-level specification document and individual requirements considered as query input to IR methods are not necessarily independent from each other. Namely, knowledge about the links for one requirement may be helpful in determining the links of another requirement. Finally, while the final decision on the exact form of the traceability matrix still belongs to the IV&V analyst, his/her decisions are much less arbitrary than those of an Internet search engine user. All this suggests that the information available to us in the framework of the IV&V tracing problem can be successfully leveraged to enhance standard IR techniques, which in turn would lead to increased recall and precision. We developed several new methods during Phase II; (2) IV&V requirements tracing IR toolkit. Based on the methods developed in Phase I and their improvements developed in Phase II, we built a toolkit of IR methods for IV&V requirements tracing. The toolkit has been integrated, at the data level, with SAIC's SuperTracePlus (STP) tool; (3) Toolkit testing. We tested the methods included in the IV&V requirements tracing IR toolkit on a number of projects.
Code of Federal Regulations, 2012 CFR
2012-04-01
...—Structural Glued Laminated Timber—ANSI/AITC A190.1-1992. Construction and Industrial Plywood (With Typical... shall comply with these requirements. (3) Engineering analysis and testing methods contained in these references shall be utilized to judge conformance with accepted engineering practices required in § 3280.303...
Code of Federal Regulations, 2011 CFR
2011-04-01
...—Structural Glued Laminated Timber—ANSI/AITC A190.1-1992. Construction and Industrial Plywood (With Typical... shall comply with these requirements. (3) Engineering analysis and testing methods contained in these references shall be utilized to judge conformance with accepted engineering practices required in § 3280.303...
Simplified three microphone acoustic test method
USDA-ARS?s Scientific Manuscript database
Accepted acoustic testing standards are available; however, they require specialized hardware and software that are typically out of reach economically to the occasional practitioner. What is needed is a simple and inexpensive screening method that could provide a quick comparison for rapid identifi...
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibbett, Taylor; Moradi, Hussein; Farhang-Boroujeny, Behrouz
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
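A minimal sketch of the maximum-location consistency idea described above, assuming a known spreading code and a received stream spanning several code periods; the window count, tolerance, and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def acquire(rx, code, n_windows=4, tol=2):
    """Declare detection when the matched-filter peak location is consistent
    across consecutive code-length windows (a sketch of the maximum-location
    consistency metric; not the authors' exact statistic)."""
    mf = np.abs(np.correlate(rx, code, mode="full"))       # matched filter output
    L = len(code)
    peaks = [int(np.argmax(mf[k * L:(k + 1) * L])) for k in range(n_windows)]
    return (max(peaks) - min(peaks)) <= tol, int(np.median(peaks))

# toy usage: a repeated +/-1 code in additive noise (assumed parameters)
rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=64)
rx = np.tile(code, 6) + 0.5 * rng.normal(size=6 * 64)
print(acquire(rx, code))
```

Note that no SNR-dependent threshold appears anywhere in the decision, which is the property the abstract emphasizes.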
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-12
... subsistence uses (where relevant), and if the permissible methods of taking and requirements pertaining to the... mouth of Chapman Bay. Pilings would be removed by vibratory hammer extraction methods and structures... day would be removed via vibratory hammer extraction methods. Typically the hammer vibrates for less...
Simplified through-transmission test method for determination of a material's acoustic properties
USDA-ARS?s Scientific Manuscript database
Accepted acoustic testing standards are available; however, they require specialized hardware and software that are typically out of reach economically to the occasional practitioner. What is needed is a simple and inexpensive screening method that can provide a quick comparison for rapid identifica...
Rapid qualitative and quantitative analysis of proanthocyanidin oligomers and polymers by UPLC-MS/MS
USDA-ARS?s Scientific Manuscript database
Proanthocyanidins (PAs) are a structurally complex and bioactive group of tannins. Detailed analysis of PA concentration, composition, and structure typically requires the use of one or more time-consuming analytical methods. For example, the commonly employed thiolysis and phloroglucinolysis method...
Assessing User Needs and Requirements for Assistive Robots at Home.
Werner, Katharina; Werner, Franz
2015-01-01
'Robots in healthcare' is a trending topic. This paper gives an overview of methods currently and commonly used to gather user needs and requirements in research projects in the field of assistive robotics. Strategies common across authors are presented, as well as examples of exceptions, which can help future researchers find methods suitable for their own work. Typical problems of the field are discussed and partial solutions are proposed.
Evaluation of four methods for estimating leaf area of isolated trees
P.J. Peper; E.G. McPherson
2003-01-01
The accurate modeling of the physiological and functional processes of urban forests requires information on the leaf area of urban tree species. Several non-destructive, indirect leaf area sampling methods have shown good performance for homogenous canopies. These methods have not been evaluated for use in urban settings where trees are typically isolated and...
Explorations in Using Arts-Based Self-Study Methods
ERIC Educational Resources Information Center
Samaras, Anastasia P.
2010-01-01
Research methods courses typically require students to conceptualize, describe, and present their research ideas in writing. In this article, the author describes her exploration in using arts-based techniques for teaching research to support the development of students' self-study research projects. The pedagogical approach emerged from the…
Acid Rain Analysis by Standard Addition Titration.
ERIC Educational Resources Information Center
Ophardt, Charles E.
1985-01-01
The standard addition titration is a precise and rapid method for the determination of the acidity in rain or snow samples. The method requires use of a standard buret, a pH meter, and Gran's plot to determine the equivalence point. Experimental procedures used and typical results obtained are presented. (JN)
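A small illustration of the Gran-plot step mentioned above, assuming a strong-base titration of the rain sample, for which the Gran function G = (V0 + V)·10^(−pH) decreases linearly and extrapolates to zero at the equivalence volume; the titration data and titrant concentration below are hypothetical:

```python
import numpy as np

V0 = 50.0                                   # initial sample volume, mL
V = np.array([0.5, 1.0, 1.5, 2.0, 2.5])     # NaOH added, mL (hypothetical)
pH = np.array([4.10, 4.22, 4.37, 4.55, 4.80])

G = (V0 + V) * 10.0 ** (-pH)                # Gran function before equivalence
slope, intercept = np.polyfit(V, G, 1)      # fit the linear region
V_eq = -intercept / slope                   # extrapolate G -> 0
acid_mol = V_eq * 1e-3 * 0.01               # assumed 0.01 M titrant
print(f"equivalence volume ~ {V_eq:.2f} mL, acidity ~ {acid_mol:.2e} mol")
```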
Developing a Standard Based Advanced Lab Course that Fulfills COM3 Requirements
NASA Astrophysics Data System (ADS)
Michalak, Rudi
2015-03-01
An advanced physics lab has been developed into a course that fulfills the requirements for a university studies program `COM3' course using Standard Teaching (ST) methods. The COM3 course is a capstone course under the new USP2015 study requirements for all majors. It replaces the WC writing requirement, typically fulfilled in the English Dept., and adds the teaching of oral and digital communication skills. ST is a method that replaces typical assessments (homework / exam grades) with new assessments that measure certain specified learning outcomes. In combination with oral assessments and an oral final exam, ST proves an efficient tool to implement the USP Learning Outcomes into a physics course. COM3 requires an unprecedented seven learning outcomes in a single course, covering a wide variety of outcomes: interdisciplinary goals, levels of writing (with drafting steps), organizational structure, standard language metrics, research and presentation delivery skills, appropriate addressing of a variety of audiences, etc. With assessment approaches other than ST, this variety would be difficult to meet in a physics course. An extended ST rubric has been developed for this course and will be presented and discussed in some detail.
RECOMMENDED METHODS FOR AMBIENT AIR MONITORING OF NO, NO2, NOY, AND INDIVIDUAL NOZ SPECIES
The most appropriate monitoring methods for reactive nitrogen oxides are identified subject to the requirements for diagnostic testing of air quality simulation models. Measurements must be made over 1 h or less and with an uncertainty of ±20% (10% for NO2) over a typical am...
A successful trap design for capturing large terrestrial snakes
Shirley J. Burgdorf; D. Craig Rudolph; Richard N. Conner; Daniel Saenz; Richard R. Schaefer
2005-01-01
Large scale trapping protocols for snakes can be expensive and require large investments of personnel and time. Typical methods, such as pitfall and small funnel traps, are not useful or suitable for capturing large snakes. A method was needed to survey multiple blocks of habitat for the Louisiana Pine Snake (Pituophis ruthveni), throughout its...
A review of 3D first-pass, whole-heart, myocardial perfusion cardiovascular magnetic resonance.
Fair, Merlin J; Gatehouse, Peter D; DiBella, Edward V R; Firmin, David N
2015-08-01
A comprehensive review is undertaken of the methods available for 3D whole-heart first-pass perfusion (FPP) and their application to date, with particular focus on possible acceleration techniques. Following a summary of the parameters typically desired of 3D FPP methods, the review explains the mechanisms of key acceleration techniques and their potential use in FPP for attaining 3D acquisitions. The mechanisms include rapid sequences, non-Cartesian k-space trajectories, reduced k-space acquisitions, parallel imaging reconstructions and compressed sensing. An attempt is made to explain, rather than simply state, the varying methods with the hope that it will give an appreciation of the different components making up a 3D FPP protocol. Basic estimates demonstrating the required total acceleration factors in typical 3D FPP cases are included, providing context for the extent that each acceleration method can contribute to the required imaging speed, as well as potential limitations in present 3D FPP literature. Although many 3D FPP methods are too early in development for the type of clinical trials required to show any clear benefit over current 2D FPP methods, the review includes the small but growing quantity of clinical research work already using 3D FPP, alongside the more technical work. Broader challenges concerning FPP such as quantitative analysis are not covered, but challenges with particular impact on 3D FPP methods, particularly with regards to motion effects, are discussed along with anticipated future work in the field.
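As a rough, illustrative version of the "required total acceleration" estimates the review refers to (the numbers below are assumptions for illustration, not values taken from the paper): for a Cartesian 3D acquisition, the unaccelerated readout time per heartbeat is the number of phase encodes times the repetition time, and the net acceleration factor R must compress it into the available acquisition window T_acq,

$$ R \;\ge\; \frac{N_y\,N_z\,T_R}{T_{acq}}, \qquad \text{e.g.}\;\; \frac{120 \times 16 \times 2\,\mathrm{ms}}{250\,\mathrm{ms}} \approx 15. $$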
Mitigating reentry radio blackout by using a traveling magnetic field
NASA Astrophysics Data System (ADS)
Zhou, Hui; Li, Xiaoping; Xie, Kai; Liu, Yanming; Yu, Yuanyuan
2017-10-01
A hypersonic flight or a reentry vehicle is surrounded by a plasma layer that prevents electromagnetic wave transmission, which results in radio blackout. The magnetic-window method is considered a promising means to mitigate reentry communication blackout. However, the real application of this method is limited because of the need for strong magnetic fields. To reduce the required magnetic field strength, a novel method that applies a traveling magnetic field (TMF) is proposed in this study. A mathematical model based on magneto-hydrodynamic theory is adopted to analyze the effect of TMF on plasma. The mitigating effects of the TMF on the blackout of typical frequency bands, including L-, S-, and C-bands, are demonstrated. Results indicate that a significant reduction of plasma density occurs in the magnetic-window region by applying a TMF, and the reduction ratio is positively correlated with the velocity of the TMF. The required traveling velocities for eliminating the blackout of the Global Positioning System (GPS) and the typical telemetry system are also discussed. Compared with the constant magnetic-window method, the TMF method needs lower magnetic field strength and is easier to realize in the engineering field.
Least-cost transportation planning in ODOT : feasibility report.
DOT National Transportation Integrated Search
1995-03-01
Least-Cost Planning or Integrated Resource Planning is used in the electric utility industry to broaden the scope of choices to meet service requirements. This typically includes methods to reduce the demands for electricity as well as the more tradition...
46 CFR 160.010-4 - General requirements for buoyant apparatus.
Code of Federal Regulations, 2014 CFR
2014-10-01
... light twine. (h) Each peripheral body type buoyant apparatus without a net or platform on the inside... pigmented in a dark color. A typical method of securing lifelines and pendants to straps of webbing is shown...
46 CFR 160.010-4 - General requirements for buoyant apparatus.
Code of Federal Regulations, 2013 CFR
2013-10-01
... light twine. (h) Each peripheral body type buoyant apparatus without a net or platform on the inside... pigmented in a dark color. A typical method of securing lifelines and pendants to straps of webbing is shown...
46 CFR 160.010-4 - General requirements for buoyant apparatus.
Code of Federal Regulations, 2012 CFR
2012-10-01
... light twine. (h) Each peripheral body type buoyant apparatus without a net or platform on the inside... pigmented in a dark color. A typical method of securing lifelines and pendants to straps of webbing is shown...
Flight Guidance System Requirements Specification
NASA Technical Reports Server (NTRS)
Miller, Steven P.; Tribble, Alan C.; Carlson, Timothy M.; Danielson, Eric J.
2003-01-01
This report describes a requirements specification written in the RSML-e language for the mode logic of a Flight Guidance System of a typical regional jet aircraft. This model was created as one of the first steps in a five-year project sponsored by the NASA Langley Research Center, Rockwell Collins Inc., and the Critical Systems Research Group of the University of Minnesota to develop new methods and tools to improve the safety of avionics designs. This model will be used to demonstrate the application of a variety of methods and techniques, including safety analysis of system and subsystem requirements, verification of key properties using theorem provers and model checkers, identification of potential sources of mode confusion in system designs, partitioning of applications based on the criticality of system hazards, and autogeneration of avionics quality code. While this model is representative of the mode logic of a typical regional jet aircraft, it does not describe an actual or planned product. Several aspects of a full Flight Guidance System, such as recovery from failed sensors, have been omitted, and no claims are made regarding the accuracy or completeness of this specification.
Structural analysis for preliminary design of High Speed Civil Transport (HSCT)
NASA Technical Reports Server (NTRS)
Bhatia, Kumar G.
1992-01-01
In the preliminary design environment, there is a need for quick evaluation of configuration and material concepts. The simplified beam representations used for subsonic, high-aspect-ratio wing planforms are not applicable for the low-aspect-ratio configurations typical of supersonic transports. There is a requirement to develop methods for efficient generation of structural arrangement and finite element representation to support multidisciplinary analysis and optimization. In addition, empirical databases required to validate prediction methods need to be improved for high speed civil transport (HSCT) type configurations.
NASA Technical Reports Server (NTRS)
Ferrenberg, A.; Hunt, K.; Duesberg, J.
1985-01-01
The primary objective was to obtain atomization and mixing performance data for a variety of typical liquid oxygen/hydrocarbon injector element designs. Such data are required to establish injector design criteria and to provide critical inputs to liquid rocket engine combustor performance and stability analyses, and to computational codes and methods. Deficiencies and problems with the atomization test equipment were identified, and action was initiated to resolve them. Test results of the gas/liquid mixing tests indicated that an assessment of test methods was required. A series of 71 liquid/liquid tests were performed.
NASA Astrophysics Data System (ADS)
Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil
2018-04-01
Robotic agriculture requires smart and workable techniques to substitute machine intelligence for human intelligence. Strawberry is one of the important Mediterranean products, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should also be capable of vision-based disease identification. The objective of this paper is to practically verify the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves that does not require neural networks or time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since the type and degree of infection is approximated much as a human brain would approximate it, a fuzzy decision maker classifies the leaves over the images captured on-site, having the same properties as human vision. Optimizing the fuzzy parameters for a typical strawberry production area at a summer mid-day in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for the other segmented class, using a typical human instant classification approximation as the benchmark, yielding higher accuracy than a human eye identifier.
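A toy sketch of the kind of fuzzy decision making described above, assuming the leaves have already been segmented and summarized by a mean hue value; the membership functions, hue ranges, and class names are hypothetical and not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_leaf(mean_hue):
    """Fuzzy classification of a segmented leaf from its mean hue (degrees).
    Hypothetical rule base: greenish ~ healthy, yellowish ~ iron deficiency."""
    memberships = {
        "healthy":         tri(mean_hue, 70, 110, 150),
        "iron_deficiency": tri(mean_hue, 30, 55, 80),
    }
    return max(memberships, key=memberships.get), memberships

print(classify_leaf(62.0))
```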
Real-time combustion monitoring of PCDD/F indicators by REMPI-TOFMS
Analyses for polychlorinated dibenzodioxin and dibenzofuran (PCDD/F) emissions typically require a 4 h extractive sample taken on an annual or less frequent basis. This results in a potentially minimally representative monitoring scheme. More recently, methods for continual sampl...
Analytical difficulties facing today's regulatory laboratories: issues in method validation.
MacNeil, James D
2012-08-01
The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy, based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.
Engineering large-scale agent-based systems with consensus
NASA Technical Reports Server (NTRS)
Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.
1994-01-01
The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.
DOT National Transportation Integrated Search
2008-09-01
The Resilient Modulus (Mr) of pavement materials and subgrades is an important input parameter for the design of pavement structures. The Repeated Loading Triaxial (RLT) test typically determines Mr. However, the RLT test requires well trained pe...
Recursive Deadbeat Controller Design
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh Q.
1997-01-01
This paper presents a recursive algorithm for a deadbeat predictive controller design. The method combines together the concepts of system identification and deadbeat controller designs. It starts with the multi-step output prediction equation and derives the control force in terms of past input and output time histories. The formulation thus derived satisfies simultaneously system identification and deadbeat controller design requirements. As soon as the coefficient matrices are identified satisfying the output prediction equation, no further work is required to compute the deadbeat control gain matrices. The method can be implemented recursively just as any typical recursive system identification techniques.
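For intuition only, the textbook one-step deadbeat idea on an assumed first-order discrete plant is sketched below; the paper's recursive identification-plus-prediction formulation is not reproduced here:

```python
# assumed plant: y[k+1] = a*y[k] + b*u[k]
a, b = 0.8, 0.5
y = 1.0                          # initial output
for k in range(3):
    u = -a * y / b               # deadbeat control: drives predicted y[k+1] to zero
    y = a * y + b * u            # plant update
    print(k, y)                  # zero from the first step onward
```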
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W.; Keates, Steven
This protocol is intended to describe the recommended method when evaluating the whole-building performance of new construction projects in the commercial sector. The protocol focuses on energy conservation measures (ECMs) or packages of measures where evaluators can analyze impacts using building simulation. These ECMs typically require the use of calibrated building simulations under Option D of the International Performance Measurement and Verification Protocol (IPMVP).
Optical image encryption by random shifting in fractional Fourier domains
NASA Astrophysics Data System (ADS)
Hennelly, B.; Sheridan, J. T.
2003-02-01
A number of methods have recently been proposed in the literature for the encryption of two-dimensional information by use of optical systems based on the fractional Fourier transform. Typically, these methods require random phase screen keys for decrypting the data, which must be stored at the receiver and must be carefully aligned with the received encrypted data. A new technique based on a random shifting, or jigsaw, algorithm is proposed. This method does not require the use of phase keys. The image is encrypted by juxtaposition of sections of the image in fractional Fourier domains. The new method has been compared with existing methods and shows comparable or superior robustness to blind decryption. Optical implementation is discussed, and the sensitivity of the various encryption keys to blind decryption is examined.
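A minimal sketch of the jigsaw (random shifting) idea, using an ordinary FFT as a stand-in for the fractional Fourier transform actually used in the paper; the block size and seed (which plays the role of the key) are illustrative:

```python
import numpy as np

def jigsaw(field, seed, block=8, inverse=False):
    """Shuffle (or unshuffle) non-overlapping blocks of a 2D complex array."""
    h, w = field.shape
    blocks = [field[i:i + block, j:j + block].copy()
              for i in range(0, h, block) for j in range(0, w, block)]
    perm = np.random.default_rng(seed).permutation(len(blocks))
    order = np.argsort(perm) if inverse else perm
    out = np.empty_like(field)
    for dst, src in enumerate(order):
        bi, bj = divmod(dst, w // block)
        out[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block] = blocks[src]
    return out

img = np.random.rand(64, 64)
enc = np.fft.ifft2(jigsaw(np.fft.fft2(img), seed=42))                      # encrypt
dec = np.fft.ifft2(jigsaw(np.fft.fft2(enc), seed=42, inverse=True)).real   # decrypt
print(np.allclose(dec, img))                                               # True
```

Only the permutation seed is needed for decryption, mirroring the paper's point that no phase keys must be stored or aligned.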
RELIGION AND DISASTER VICTIM IDENTIFICATION.
Levinson, Jay; Domb, Abraham J
2014-12-01
Disaster Victim Identification (DVI) is a triangle, the components of which are secular law, religious law and custom and professional methods. In cases of single non-criminal deaths, identification often rests with a hospital or a medical authority. When dealing with criminal or mass death incidents, the law, in many jurisdictions, assigns identification to the coroner/medical examiner, who typically uses professional methods and only answers the religious requirements of the deceased's next-of-kin according to his personal judgment. This article discusses religious considerations regarding scientific methods and their limitations, as well as the ethical issues involved in the government coroner/medical examiner's becoming involved in clarifying and answering the next-of-kin's religious requirements.
Imputation of unordered markers and the impact on genomic selection accuracy
USDA-ARS?s Scientific Manuscript database
Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Genotyping-by-sequencing can generate a large number of de novo markers. However, without a reference genome, these markers are unordered and typically have a large propo...
Applying a Mixed Methods Framework to Differential Item Function Analyses
ERIC Educational Resources Information Center
Hitchcock, John H.; Johanson, George A.
2015-01-01
Understanding the reason(s) for Differential Item Functioning (DIF) in the context of measurement is difficult. Although identifying potential DIF items is typically a statistical endeavor, understanding the reasons for DIF (and item repair or replacement) might require investigations that can be informed by qualitative work. Such work is…
Class and Home Problems: Optimization Problems
ERIC Educational Resources Information Center
Anderson, Brian J.; Hissam, Robin S.; Shaeiwitz, Joseph A.; Turton, Richard
2011-01-01
Optimization problems suitable for all levels of chemical engineering students are available. These problems do not require advanced mathematical techniques, since they can be solved using typical software used by students and practitioners. The method used to solve these problems forces students to understand the trends for the different terms…
Rapid and potentially portable detection and quantification technologies for foodborne pathogens
USDA-ARS?s Scientific Manuscript database
Introduction Traditional microbial culture methods are able to detect and identify a single specific bacterium, but may require days or weeks and typically do not produce quantitative data. The quest for faster, quantitative results has spurred development of “rapid methods” which usually employ bio...
Business Models for Training and Performance Improvement Departments
ERIC Educational Resources Information Center
Carliner, Saul
2004-01-01
Although typically applied to entire enterprises, the concept of business models applies to training and performance improvement groups. Business models are "the method by which firm[s] build and use [their] resources to offer.. value." Business models affect the types of projects, services offered, skills required, business processes, and type of…
Simulation of Simple Controlled Processes with Dead-Time.
ERIC Educational Resources Information Center
Watson, Keith R.; And Others
1985-01-01
The determination of closed-loop response of processes containing dead-time is typically not covered in undergraduate process control, possibly because the solution by Laplace transforms requires the use of Pade approximation for dead-time, which makes the procedure lengthy and tedious. A computer-aided method is described which simplifies the…
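For reference, the first-order Padé approximation of the dead-time element alluded to above is

$$ e^{-\theta s} \;\approx\; \frac{1 - \theta s/2}{1 + \theta s/2}, $$

which replaces the transcendental dead-time term with a rational transfer function, at the cost of exactly the lengthy algebra the computer-aided method is intended to avoid.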
Downed wood as seedbed: measurement and management guidelines
Mark J. Ducey; Jeffrey H. Gove
2000-01-01
Eastern hemlock has exacting germination requirements, and availability of suitable microsites for germination can limit the development of hemlock regeneration. A major contributor to those microsites is coarse woody debris. New methods for quantifying coarse woody debris have recently been developed that are complementary to strategies typically used in timber...
An Experimental Study of the Emergence of Human Communication Systems
ERIC Educational Resources Information Center
Galantucci, Bruno
2005-01-01
The emergence of human communication systems is typically investigated via 2 approaches with complementary strengths and weaknesses: naturalistic studies and computer simulations. This study was conducted with a method that combines these approaches. Pairs of participants played video games requiring communication. Members of a pair were…
Tableau Economique: Teaching Economics with a Tablet Computer
ERIC Educational Resources Information Center
Scott, Robert H., III
2011-01-01
The typical method of instruction in economics is chalk and talk. Economics courses often require writing equations and drawing graphs and charts, which are all best done in freehand. Unlike static PowerPoint presentations, tablet computers create dynamic nonlinear presentations. Wireless technology allows professors to write on their tablets and…
A simple method for the measurement of reflective foil emissivity
NASA Astrophysics Data System (ADS)
Ballico, M. J.; van der Ham, E. W. M.
2013-09-01
Reflective metal foil is widely used to reduce radiative heat transfer within the roof space of buildings. Such foils are typically mass-produced by vapor-deposition of a thin metallic coating onto a variety of substrates, ranging from plastic-coated reinforced paper to "bubble-wrap". Although the emissivity of such surfaces is almost negligible in the thermal infrared, typically less than 0.03, an insufficiently thick metal coating, or organic contamination of the surface, can significantly increase this value. To ensure that the quality of the installed insulation is satisfactory, Australian building code AS/NZS 4201.5:1994 requires a practical agreed method for measurement of the emissivity, and the standard ASTM-E408 is implied. Unfortunately this standard is not a "primary method" and requires the use of specified expensive apparatus and calibrated reference materials. At NMIA we have developed a simple primary technique, based on an apparatus to thermally modulate the sample and record the apparent modulation in infra-red radiance with commercially available radiation thermometers. The method achieves an absolute accuracy in the emissivity of approximately 0.004 (k=2). This paper theoretically analyses the equivalence between the thermal emissivity measured in this manner, the effective thermal emissivity in application, and the apparent emissivity measured in accordance with ASTM-E408.
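In outline (the exact formulation used at NMIA is not given in the abstract, so this is a hedged sketch of the modulation principle): if the sample temperature is modulated between T1 and T2 while the reflected background radiance stays constant, only the emitted component of the measured radiance changes, so

$$ \varepsilon \;\approx\; \frac{\Delta L_{\mathrm{meas}}}{L_{bb}(T_2) - L_{bb}(T_1)}, $$

where L_bb is the blackbody radiance over the radiation thermometer's spectral band.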
Gore, Shane J; Marshall, Brendan M; Franklyn-Miller, Andrew D; Falvey, Eanna C; Moran, Kieran A
2016-06-01
When reporting a subject's mean movement pattern, it is important to ensure that reported values are representative of the subject's typical movement. While previous studies have used the mean of 3 trials, scientific justification of this number is lacking. One approach is to determine statistically how many trials are required to achieve a representative mean. This study compared 4 methods of calculating the number of trials required in a hopping movement to achieve a representative mean. Fifteen males completed 15 trials of a lateral hurdle hop. Range of motion at the trunk, pelvis, hip, knee, and ankle, in addition to peak moments for the latter 3 joints were examined. The number of trials required was computed using a peak intraclass correlation coefficient method, sequential analysis with a bandwidth of acceptable variance in the mean, and a novel method based on the standard error of measurement (SEMind). The number of trials required across all variables ranged from 2 to 12 depending on method, joint, and anatomical plane. The authors advocate the SEMind method as it demonstrated fewer limitations than the other methods. Using the SEMind, the required number of trials for a representative mean during the lateral hurdle hop is 6.
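A schematic version of a standard-error-based stopping rule of the kind the SEMind criterion represents (the exact SEMind definition is not given in the abstract, so the tolerance and its interpretation here are assumptions):

```python
import numpy as np

def trials_needed(values, tol):
    """Smallest n such that the standard error of the mean of the first n
    trials falls below tol (a generic SEM rule, not the exact SEMind)."""
    values = np.asarray(values, dtype=float)
    for n in range(2, len(values) + 1):
        sem = values[:n].std(ddof=1) / np.sqrt(n)
        if sem <= tol:
            return n
    return len(values)   # tolerance never reached within the available trials

knee_rom = [42.1, 44.3, 41.8, 43.0, 42.5, 44.0, 43.2, 42.9]   # hypothetical degrees
print(trials_needed(knee_rom, tol=0.5))
```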
NASA Astrophysics Data System (ADS)
Sperling, Nicholas Niven
The problem of determining the in vivo dosimetry for patients undergoing radiation treatment has been an area of interest since the development of the field. Most methods which have found clinical acceptance work by use of a proxy dosimeter, e.g., glass rods, using radiophotoluminescence; thermoluminescent dosimeters (TLD), typically CaF2 or LiF; Metal Oxide Semiconductor Field Effect Transistor (MOSFET) dosimeters, using threshold voltage shift; Optically Stimulated Luminescent Dosimeters (OSLD), composed of carbon-doped aluminum oxide crystals; RadioChromic film, using leuko-dye polymers; silicon diode dosimeters, typically p-type; and ion chambers. More recent methods employ Electronic Portal Image Devices (EPID), or dosimeter arrays, for entrance or exit beam fluence determination. The difficulty with the proxy in vivo dosimetry methods is the requirement that they be placed at the particular location where the dose is to be determined. This precludes measurements across the entire patient volume. These methods are best suited where the dose at a particular location is required. The more recent methods of in vivo dosimetry make use of detector arrays and reconstruction techniques to determine dose throughout the patient volume. One method uses an array of ion chambers located upstream of the patient. This requires a special hardware device and places an additional attenuator in the beam path, which may not be desirable. A final approach is to use the existing EPID, which is part of most modern linear accelerators, to image the patient using the treatment beam. Methods exist to deconvolve the detector function of the EPID using a series of weighted exponentials. Additionally, this method has been extended to determine in vivo dosimetry. The method developed here employs the use of EPID images and an iterative deconvolution algorithm to reconstruct the impinging primary beam fluence on the patient. This primary fluence may then be employed to determine dose through the entire patient volume. The method requires patient-specific information, including a CT for deconvolution/dose reconstruction. With the large-scale adoption of Cone Beam CT (CBCT) systems on modern linear accelerators, a treatment-time CT is readily available for use in this deconvolution and in dose representation.
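A much-reduced sketch of the iterative deconvolution idea (one-dimensional, with a hypothetical Gaussian detector kernel, and a Richardson-Lucy-style update standing in for the dissertation's algorithm, whose exact form is not given here):

```python
import numpy as np

def richardson_lucy_1d(measured, kernel, n_iter=50):
    """Iteratively estimate the primary fluence whose convolution with the
    detector kernel reproduces the measured EPID signal (illustrative only)."""
    estimate = np.full_like(measured, measured.mean())
    kernel_flipped = kernel[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, kernel_flipped, mode="same")
    return estimate

x = np.linspace(-5, 5, 201)
kernel = np.exp(-x**2 / 0.5); kernel /= kernel.sum()      # hypothetical detector response
fluence = (np.abs(x) < 2).astype(float)                   # idealized primary fluence profile
measured = np.convolve(fluence, kernel, mode="same")      # simulated EPID reading
print(np.abs(richardson_lucy_1d(measured, kernel) - fluence).max())
```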
Nonideal isentropic gas flow through converging-diverging nozzles
NASA Technical Reports Server (NTRS)
Bober, W.; Chow, W. L.
1990-01-01
A method for treating nonideal gas flows through converging-diverging nozzles is described. The method incorporates the Redlich-Kwong equation of state. The Runge-Kutta method is used to obtain a solution. Numerical results were obtained for methane gas. Typical plots of pressure, temperature, and area ratios as functions of Mach number are given. From the plots, it can be seen that there exists a range of reservoir conditions that require the gas to be treated as nonideal if an accurate solution is to be obtained.
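For reference, the Redlich-Kwong equation of state in its standard form, with the usual critical-property expressions for its constants, is

$$ P \;=\; \frac{RT}{V_m - b} \;-\; \frac{a}{\sqrt{T}\,V_m\,(V_m + b)}, \qquad a = 0.42748\,\frac{R^2 T_c^{2.5}}{P_c}, \quad b = 0.08664\,\frac{R T_c}{P_c}, $$

so the isentropic nozzle relations no longer reduce to the closed-form ideal-gas expressions and must be integrated numerically, here by the Runge-Kutta method.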
Approximate matching of regular expressions.
Myers, E W; Miller, W
1989-01-01
Given a sequence A and regular expression R, the approximate regular expression matching problem is to find a sequence matching R whose optimal alignment with A is the highest scoring of all such sequences. This paper develops an algorithm to solve the problem in time O(MN), where M and N are the lengths of A and R. Thus, the time requirement is asymptotically no worse than for the simpler problem of aligning two fixed sequences. Our method is superior to an earlier algorithm by Wagner and Seiferas in several ways. First, it treats real-valued costs, in addition to integer costs, with no loss of asymptotic efficiency. Second, it requires only O(N) space to deliver just the score of the best alignment. Finally, its structure permits implementation techniques that make it extremely fast in practice. We extend the method to accommodate gap penalties, as required for typical applications in molecular biology, and further refine it to search for sub-strings of A that strongly align with a sequence in R, as required for typical data base searches. We also show how to deliver an optimal alignment between A and R in only O(N + log M) space using O(MN log M) time. Finally, an O(MN(M + N) + N2log N) time algorithm is presented for alignment scoring schemes where the cost of a gap is an arbitrary increasing function of its length.
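As a point of reference for the "simpler problem" the abstract compares against (aligning a pattern against a fixed sequence), the sketch below scores the best approximate occurrence of a plain pattern in A with unit costs; running the same dynamic programme over the states of an automaton built from R is the subject of the paper and is not reproduced here:

```python
def best_approx_match(A, P):
    """Lowest-cost alignment of pattern P against any substring of A,
    with unit substitution/insertion/deletion costs (Sellers-style DP)."""
    n = len(P)
    prev = list(range(n + 1))                 # empty text prefix vs P[:j]
    best = prev[n]
    for ch in A:
        cur = [0] * (n + 1)                   # column 0 stays 0: match may start anywhere
        for j in range(1, n + 1):
            cost = 0 if ch == P[j - 1] else 1
            cur[j] = min(prev[j - 1] + cost,  # substitute / match
                         prev[j] + 1,         # delete from A
                         cur[j - 1] + 1)      # insert into A
        best = min(best, cur[n])
        prev = cur
    return best

print(best_approx_match("GATTACAAGGTC", "TTCCA"))   # small worked example
```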
NASA Astrophysics Data System (ADS)
Kovalevsky, Louis; Langley, Robin S.; Caro, Stephane
2016-05-01
Due to the high cost of experimental EMI measurements, significant attention has been focused on numerical simulation. Classical methods such as the Method of Moments or Finite Difference Time Domain are not well suited for this type of problem, as they require a fine discretisation of space and fail to take uncertainties into account. In this paper, the authors show that Statistical Energy Analysis is well suited for this type of application. The SEA is a statistical approach employed to solve high-frequency problems of electromagnetically reverberant cavities at a reduced computational cost. The key aspects of this approach are (i) to consider an ensemble of systems that share the same gross parameters, and (ii) to avoid solving Maxwell's equations inside the cavity, using the power balance principle. The output is an estimate of the field magnitude distribution in each cavity. The method is applied to a typical aircraft structure.
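The power balance principle mentioned in point (ii) can be written, in its usual SEA form (notation assumed here, not taken from the paper), as one equation per cavity i with stored energy E_i:

$$ P_i^{\mathrm{in}} \;=\; \omega\,\eta_i E_i \;+\; \sum_{j \ne i} \omega\,\eta_{ij}\, n_i \left( \frac{E_i}{n_i} - \frac{E_j}{n_j} \right), $$

where η_i is the dissipation loss factor, η_ij the coupling loss factor and n_i the modal density; solving this linear system for the E_i yields the estimate of the field magnitude distribution in each cavity without solving Maxwell's equations inside it.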
Progress in Developing Transfer Functions for Surface Scanning Eddy Current Inspections
NASA Astrophysics Data System (ADS)
Shearer, J.; Heebl, J.; Brausch, J.; Lindgren, E.
2009-03-01
As US Air Force (USAF) aircraft continue to age, additional inspections are required for structural components. The validation of new inspections typically requires a capability demonstration of the method using representative structure with representative damage. To minimize the time and cost required to prepare such samples, Electric Discharge machined (EDM) notches are commonly used to represent fatigue cracks in validation studies. However, the sensitivity to damage typically changes as a function of damage type. This requires a mathematical relationship to be developed between the responses from the two different flaw types to enable the use of EDM notched samples to validate new inspections. This paper reviews progress to develop transfer functions for surface scanning eddy current inspections of aluminum and titanium alloys found in structural aircraft components. Multiple samples with well characterized grown fatigue cracks and master gages with EDM notches, both with a range of flaw sizes, were used to collect flaw signals with USAF field inspection equipment. Analysis of this empirical data was used to develop a transfer function between the response from the EDM notches and grown fatigue cracks.
Shortreed, Susan M.; Moodie, Erica E. M.
2012-01-01
Treatment of schizophrenia is notoriously difficult and typically requires personalized adaptation of treatment due to lack of efficacy of treatment, poor adherence, or intolerable side effects. The Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) Schizophrenia Study is a sequential multiple assignment randomized trial comparing the typical antipsychotic medication, perphenazine, to several newer atypical antipsychotics. This paper describes the marginal structural modeling method for estimating optimal dynamic treatment regimes and applies the approach to the CATIE Schizophrenia Study. Missing data and valid estimation of confidence intervals are also addressed. PMID:23087488
NASA Astrophysics Data System (ADS)
Burress, Jacob; Bethea, Donald; Troub, Brandon
2017-05-01
The accurate measurement of adsorbed gas up to high pressures (˜100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ˜0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.
District heating with geothermally heated culinary water supply systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitts, D.R.; Schmitt, R.C.
1979-09-01
An initial feasibility study of using existing culinary water supply systems to provide hot water for space heating and air conditioning to a typical residential community is reported. The Phase I study has centered on methods of using low-to-moderate temperature water for heating purposes, including institutional barriers, identification and description of a suitable residential community water system, evaluation of thermal losses in both the main distribution system and the street mains within the residential district, estimation of size and cost of the pumping station main heat exchanger, sizing of individual residential heat exchangers, determination of pumping and power requirements due to increased flow through the residential area mains, and pumping and power requirements from the street mains through a typical residence. All results of the engineering study of Phase I are encouraging.
Intrauterine devices and other forms of contraception: thinking outside the pack.
Allen, Caitlin; Kolehmainen, Christine
2015-05-01
A variety of contraception options are available in addition to traditional combined oral contraceptive pills. Newer long-acting reversible contraceptive (LARC) methods such as intrauterine devices and subcutaneous implants are preferred because they do not depend on patient compliance. They are highly effective and appropriate for most women. Female and male sterilization are also effective, but they are irreversible and require counseling to minimize regret. The contraceptive injection, patch, and ring do not require daily administration, but their typical efficacy rates are lower than those of LARC methods and similar to those of combined oral contraceptive pills. Copyright © 2015 Elsevier Inc. All rights reserved.
Kuikka, E; Eerola, A; Porrasmaa, J; Miettinen, A; Komulainen, J
1999-01-01
Since a patient record is typically a document updated by many users, required to be represented in many different layouts, and transferred from place to place, it is a good candidate to be represented in structured form and coded using the SGML document standard. The use of SGML requires that the structure of the document is defined in advance by a Document Type Definition (DTD) and that the document follows it. This paper presents a method which derives an SGML DTD starting from a description of the usage of the patient record in medical care and nursing.
Materials Genome Initiative Element
NASA Technical Reports Server (NTRS)
Vickers, John
2015-01-01
NASA is committed to developing new materials and manufacturing methods that can enable new missions with ever increasing mission demands. Typically, the development and certification of new materials and manufacturing methods in the aerospace industry has required more than 20 years of development time with a costly testing and certification program. To reduce the cost and time to mature these emerging technologies, NASA is developing computational materials tools to improve understanding of the material and guide the certification process.
Developing Technological Initiatives for Youth Participation and Local Community Engagement
ERIC Educational Resources Information Center
Burd, Leo
2010-01-01
Recent advances in technology are transforming our lives, but in many cases they are also limiting the way children are exposed to local communities and physical spaces. Technology can help young people actively connect with their neighborhoods, but doing that requires different methods and tools from the ones typically available in schools,…
High-resolution solution-state NMR of unfractionated plant cell walls
John Ralph; Fachuang Lu; Hoon Kim; Dino Ress; Daniel J. Yelle; Kenneth E. Hammel; Sally A. Ralph; Bernadette Nanayakkara; Armin Wagner; Takuya Akiyama; Paul F. Schatz; Shawn D. Mansfield; Noritsugu Terashima; Wout Boerjan; Bjorn Sundberg; Mattias Hedenstrom
2009-01-01
Detailed structural studies on the plant cell wall have traditionally been difficult. NMR is one of the preeminent structural tools, but obtaining high-resolution solution-state spectra has typically required fractionation and isolation of components of interest. With recent methods for dissolution of, admittedly, finely divided plant cell wall material, the wall can...
Teaching Research and Practice Evaluation Skills to Graduate Social Work Students
ERIC Educational Resources Information Center
Wong, Stephen E.; Vakharia, Sheila P.
2012-01-01
Objective: The authors examined outcomes of a graduate course on evaluating social work practice that required students to use published research, quantitative measures, and single-system designs in a simulated practice evaluation project. Method: Practice evaluation projects from a typical class were analyzed for the number of research references…
Inkjet-Printed Porous Silver Thin Film as a Cathode for a Low-Temperature Solid Oxide Fuel Cell.
Yu, Chen-Chiang; Baek, Jong Dae; Su, Chun-Hao; Fan, Liangdong; Wei, Jun; Liao, Ying-Chih; Su, Pei-Chen
2016-04-27
In this work we report a porous silver thin film cathode that was fabricated by a simple inkjet printing process for low-temperature solid oxide fuel cell applications. The electrochemical performance of the inkjet-printed silver cathode was studied at 300-450 °C and was compared with that of silver cathodes that were fabricated by the typical sputtering method. Inkjet-printed silver cathodes showed lower electrochemical impedance due to their porous structure, which facilitated oxygen gaseous diffusion and oxygen surface adsorption-dissociation reactions. A typical sputtered nanoporous silver cathode became essentially dense after the operation and showed high impedance due to a lack of oxygen supply. The results of long-term fuel cell operation show that the cell with an inkjet-printed cathode had a more stable current output for more than 45 h at 400 °C. A porous silver cathode is required for high fuel cell performance, and the simple inkjet printing technique offers an alternative method of fabrication for such a desirable porous structure with the required thermal-morphological stability.
Comparison of reversible methods for data compression
NASA Astrophysics Data System (ADS)
Heer, Volker K.; Reinfelder, Hans-Erich
1990-07-01
Widely differing methods for data compression described in the ACR-NEMA draft are used in medical imaging. In our contribution we briefly review various methods and discuss their relevant advantages and disadvantages. In detail, we evaluate first-order DPCM, pyramid transformation, and S transformation. As coding algorithms, we compare fixed and adaptive Huffman coding and Lempel-Ziv coding. Our comparison is performed on typical medical images from CT, MR, DSA, and DLR (Digital Luminescence Radiography). Apart from the achieved compression factors, we take into account the CPU time and main memory required both for compression and for decompression. For a realistic comparison we have implemented the mentioned algorithms in the C programming language on a MicroVAX II and a SPARCstation 1.
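First-order DPCM, one of the decorrelation transforms compared above, is simple enough to illustrate directly: each sample is predicted by its predecessor, only the (typically small) prediction residual is kept, and the original data are recovered exactly by a cumulative sum, which is what makes the scheme reversible. The sketch below is a generic illustration, not the paper's implementation, and omits the entropy-coding stage (Huffman or Lempel-Ziv) that would follow.

```python
import numpy as np

# Generic sketch of reversible first-order DPCM on one image row (not the
# paper's implementation): predict each sample by its predecessor, keep the
# residuals, and reconstruct losslessly with a cumulative sum. The entropy
# coder (fixed/adaptive Huffman or Lempel-Ziv) that would follow is omitted.

row = np.array([100, 102, 103, 103, 101, 98, 97, 97, 99, 104], dtype=np.int32)

residuals = np.empty_like(row)
residuals[0] = row[0]                 # first sample stored as-is
residuals[1:] = np.diff(row)          # e_i = x_i - x_{i-1}

reconstructed = np.cumsum(residuals)  # exact (reversible) reconstruction
assert np.array_equal(reconstructed, row)

print("original range:", row.min(), "-", row.max(),
      " residual range:", residuals[1:].min(), "-", residuals[1:].max())
```

The residuals cluster near zero, which is why the subsequent entropy coder achieves the compression.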
Force analysis of magnetic bearings with power-saving controls
NASA Technical Reports Server (NTRS)
Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.
1992-01-01
Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. For most operating conditions, the existence of the bias current requires more power than alternative methods that do not use conventional bias. Two such methods are examined which diminish or eliminate bias current. In the typical bias control scheme it is found that for a harmonic control force command into a voltage limited transconductance amplifier, the desired force output is obtained only up to certain combinations of force amplitude and frequency. Above these values, the force amplitude is reduced and a phase lag occurs. The power saving alternative control schemes typically exhibit such deficiencies at even lower command frequencies and amplitudes. To assess the severity of these effects, a time history analysis of the force output is performed for the bias method and the alternative methods. Results of the analysis show that the alternative approaches may be viable. The various control methods examined were mathematically modeled using nondimensionalized variables to facilitate comparison of the various methods.
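The linearizing role of the bias current described above can be made concrete with a short numerical sketch. It is an illustration only, not the authors' model: it assumes an opposed pair of electromagnets whose attractive force scales as (current/gap)^2, a common textbook idealization, and shows that with a bias current the net force at the centered position is linear in the control current, at the cost of continuous bias power in both coils.

```python
import numpy as np

# Illustrative sketch (not the authors' model): opposed electromagnet pair.
# Each magnet's attractive force is assumed to scale as k*(i/g)^2. With a
# bias current i0 and control current ic, the net force at the centered
# position is
#   F = k*((i0+ic)/g0)^2 - k*((i0-ic)/g0)^2 = 4*k*i0*ic/g0^2,
# i.e. linear in ic -- which is why the bias is used, and why it costs power.

k = 1.0e-5    # force constant (assumed units)
g0 = 1.0e-3   # nominal air gap, m (assumed)
i0 = 2.0      # bias current, A (assumed)

def net_force(ic, i0=i0, g0=g0, k=k):
    """Net force of the opposed pair for control current ic at x = 0."""
    return k * ((i0 + ic) / g0) ** 2 - k * ((i0 - ic) / g0) ** 2

ic = np.linspace(-1.0, 1.0, 5)
linear_model = 4.0 * k * i0 * ic / g0 ** 2      # slope a linear controller would assume
for c, f, fl in zip(ic, net_force(ic), linear_model):
    print(f"ic = {c:+.2f} A   F = {f:+.3e} N   linear model = {fl:+.3e} N")
```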
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
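As a generic illustration of the third interval type above (MCMC Bayesian credible intervals), the sketch below runs a plain Metropolis sampler on a one-parameter toy model and reads a 95% credible interval off the posterior sample. It is not the DREAM algorithm or UCODE_2014; the toy model, prior, and settings are assumptions for illustration only.

```python
import numpy as np

# Toy illustration only -- not DREAM or UCODE_2014. One parameter theta with a
# Gaussian prior and Gaussian observation errors; a plain Metropolis sampler
# draws from the posterior, and a 95% Bayesian credible interval is taken as
# percentiles of the retained draws.

rng = np.random.default_rng(0)
obs = rng.normal(loc=3.0, scale=0.5, size=20)      # synthetic observations

def log_post(theta, prior_mean=0.0, prior_sd=10.0, obs_sd=0.5):
    log_prior = -0.5 * ((theta - prior_mean) / prior_sd) ** 2
    log_like = -0.5 * np.sum(((obs - theta) / obs_sd) ** 2)
    return log_prior + log_like

n_steps, step = 50_000, 0.2
chain = np.empty(n_steps)
theta, lp = 0.0, log_post(0.0)
for i in range(n_steps):
    prop = theta + step * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

burned = chain[n_steps // 5:]                        # discard burn-in
lo, hi = np.percentile(burned, [2.5, 97.5])
print(f"posterior mean = {burned.mean():.3f}, 95% credible interval = [{lo:.3f}, {hi:.3f}]")
```

The many partially parallelizable model runs cited above correspond to the likelihood evaluations inside such a sampler, one per proposed parameter set.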
Shuttle Tethered Aerothermodynamics Research Facility (STARFAC) Instrumentation Requirements
NASA Technical Reports Server (NTRS)
Wood, George M.; Siemers, Paul M.; Carlomagno, Giovanni M.; Hoffman, John
1986-01-01
The instrumentation requirements for the Shuttle Tethered Aerothermodynamic Research Facility (STARFAC) are presented. The typical physical properties of the terrestrial atmosphere are given, along with representative atmospheric daytime ion concentrations and a comparison of equilibrium and nonequilibrium gas properties at a point away from a wall. STARFAC science and engineering measurements are given, as is the TSS free-stream gas analysis. Potential nonintrusive measurement techniques for hypersonic boundary layer research are outlined, along with quantitative physical measurement methods for aerothermodynamic studies.
Neutron/Gamma-ray discrimination through measures of fit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek
2015-07-01
Statistical tests and their underlying measures of fit can be utilized to separate neutron/gamma-ray pulses in a mixed radiation field. In this article, the application of a sample statistical test is first explained. Fit measurement-based methods require true pulse shapes to be used as references for discrimination. This requirement makes practical implementation of these methods difficult; typically, another discrimination approach must first be employed to capture samples of neutron and gamma-ray pulses before running the fit-based technique. In this article, we also propose a technique to eliminate this requirement. These approaches are applied to several sets of mixed neutron and gamma-ray pulses obtained through different digitizers using a stilbene scintillator in order to analyze them and measure their discrimination quality. (authors)
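A minimal sketch of the fit-measurement idea is given below: each captured pulse is compared against normalized neutron and gamma-ray reference shapes with a chi-square-like measure of fit, and the smaller value decides the class. The pulse shapes, time constants, and noise level are invented for illustration; in practice the reference templates come from measured data, which is exactly the requirement the article proposes to eliminate.

```python
import numpy as np

# Illustration of fit-based discrimination (not the authors' exact measure).
# Reference templates here are synthetic two-exponential pulses; in practice
# they would be measured neutron and gamma-ray shapes from the detector.

t = np.linspace(0.0, 200.0, 400)                    # ns (assumed)

def pulse(t, fast_frac, tau_fast=5.0, tau_slow=60.0):
    p = fast_frac * np.exp(-t / tau_fast) + (1 - fast_frac) * np.exp(-t / tau_slow)
    return p / p.max()

ref_gamma = pulse(t, fast_frac=0.95)    # gammas: mostly fast component (assumed)
ref_neutron = pulse(t, fast_frac=0.75)  # neutrons: larger slow component (assumed)

def chi2(measured, reference, sigma=0.02):
    """Simple measure of fit between a normalized pulse and a reference shape."""
    return np.sum(((measured - reference) / sigma) ** 2)

def classify(measured):
    return "neutron" if chi2(measured, ref_neutron) < chi2(measured, ref_gamma) else "gamma"

rng = np.random.default_rng(1)
test = pulse(t, fast_frac=0.78) + 0.02 * rng.standard_normal(t.size)
test /= test.max()
print(classify(test))
```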
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheib, J.; Pless, S.; Torcellini, P.
NREL experienced a significant increase in employees and facilities on our 327-acre main campus in Golden, Colorado over the past five years. To support this growth, researchers developed and demonstrated a new building acquisition method that successfully integrates energy efficiency requirements into the design-build requests for proposals and contracts. We piloted this energy performance based design-build process with our first new construction project in 2008. We have since replicated and evolved the process for large office buildings, a smart grid research laboratory, a supercomputer, a parking structure, and a cafeteria. Each project incorporated aggressive efficiency strategies using contractual energy use requirements in the design-build contracts, all on typical construction budgets. We have found that when energy efficiency is a core project requirement as defined at the beginning of a project, innovative design-build teams can integrate the most cost effective and high performance efficiency strategies on typical construction budgets. When the design-build contract includes measurable energy requirements and is set up to incentivize design-build teams to focus on achieving high performance in actual operations, owners can now expect their facilities to perform. As NREL completed the new construction in 2013, we have documented our best practices in training materials and a how-to guide so that other owners and owner's representatives can replicate our successes and learn from our experiences in attaining market viable, world-class energy performance in the built environment.
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...
2018-04-30
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
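The benefit of a good initial source guess can be illustrated with a deterministic toy problem. The sketch below is an assumption-laden illustration, not Shift, Denovo, or the Sourcerer implementation: it runs power iteration for the dominant eigenvector of a small matrix (standing in for the fission source iteration) from a flat guess and from an informed guess, and counts the iterations each needs to converge; the analog of the Sourcerer method is that the better starting distribution needs fewer inactive iterations.

```python
import numpy as np

# Toy illustration (not Shift/Denovo/Sourcerer): power iteration for the
# dominant eigenvector of a symmetric matrix. A dominance ratio near 1, as in
# large reactor problems, makes convergence slow, and a starting vector close
# to the true source needs fewer "inactive" iterations than a flat guess.

rng = np.random.default_rng(0)
n = 200
eigvals = np.linspace(0.5, 0.98, n - 1).tolist() + [1.0]   # dominance ratio 0.98 (assumed)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigvals) @ Q.T
true_source = Q[:, -1]                                     # eigenvector for eigenvalue 1.0

def iterations_to_converge(x0, tol=1e-6, max_iter=10_000):
    x = x0 / np.linalg.norm(x0)
    for k in range(1, max_iter + 1):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)
        if min(np.linalg.norm(x_new - x), np.linalg.norm(x_new + x)) < tol:
            return k
        x = x_new
    return max_iter

flat_guess = np.ones(n)
informed_guess = true_source + 0.02 * rng.standard_normal(n)   # "deterministic" estimate

print("flat initial source:    ", iterations_to_converge(flat_guess), "iterations")
print("informed initial source:", iterations_to_converge(informed_guess), "iterations")
```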
Estimating times of extinction in the fossil record
Marshall, Charles R.
2016-01-01
Because the fossil record is incomplete, the last fossil of a taxon is a biased estimate of its true time of extinction. Numerous methods have been developed in the palaeontology literature for estimating the true time of extinction using ages of fossil specimens. These methods, which typically give a confidence interval for estimating the true time of extinction, differ in the assumptions they make and the nature and amount of data they require. We review the literature on such methods and make some recommendations for future directions. PMID:27122005
Estimating times of extinction in the fossil record.
Wang, Steve C; Marshall, Charles R
2016-04-01
Because the fossil record is incomplete, the last fossil of a taxon is a biased estimate of its true time of extinction. Numerous methods have been developed in the palaeontology literature for estimating the true time of extinction using ages of fossil specimens. These methods, which typically give a confidence interval for estimating the true time of extinction, differ in the assumptions they make and the nature and amount of data they require. We review the literature on such methods and make some recommendations for future directions. © 2016 The Author(s).
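One of the simplest methods in this family, given here purely as a worked illustration, is the classical uniform-recovery confidence interval usually attributed to Strauss and Sadler (1989): with n fossil horizons spanning an observed range R, a one-sided confidence bound at level C extends past the youngest find by R[(1 - C)^(-1/(n-1)) - 1]. The sketch below applies that formula to invented ages; its strong assumptions (uniform, independent fossil recovery) are exactly the kind of assumption the review notes differ between methods.

```python
# Worked illustration of one classical method (Strauss & Sadler 1989): a
# one-sided confidence bound on the true time of extinction, assuming fossil
# horizons are recovered uniformly and independently within the true range.
# The ages below are invented example data (millions of years ago, Ma).

def extinction_confidence_bound(ages, confidence=0.95):
    """Youngest age (Ma) at which extinction can be bounded at the given level."""
    ages = sorted(ages, reverse=True)            # larger Ma = older
    n = len(ages)
    observed_range = ages[0] - ages[-1]
    extension = observed_range * ((1.0 - confidence) ** (-1.0 / (n - 1)) - 1.0)
    return ages[-1] - extension                  # extend past the youngest find

fossil_ages = [66.8, 67.4, 68.1, 69.0, 70.2, 71.5]   # hypothetical horizons, Ma
bound = extinction_confidence_bound(fossil_ages, confidence=0.95)
print(f"95% confidence bound on extinction: {bound:.1f} Ma (youngest fossil at 66.8 Ma)")
```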
Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure
NASA Technical Reports Server (NTRS)
Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark
2009-01-01
High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
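A minimal sketch of the decay-based estimate is given below; it illustrates the general IRDM idea rather than the paper's automated procedure. It assumes a band-limited impulse response with an exponentially decaying envelope, fits the slope of the log-envelope (obtained via the Hilbert transform), and converts the decay rate DR (in dB/s) to a loss factor using the standard relation eta = DR / (27.3 f), where f is the band center frequency.

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of the impulse response decay method (IRDM) idea, not the paper's
# automated procedure. A synthetic band-limited impulse response with a known
# loss factor is generated, the slope of its log-envelope is fit by least
# squares, and the loss factor is recovered from the decay rate DR (dB/s)
# via eta = DR / (27.3 * f). All parameters are assumed for illustration.

fs = 10_000.0          # sample rate, Hz (assumed)
f_band = 1_000.0       # band center frequency, Hz (assumed)
eta_true = 0.02        # loss factor used to synthesize the response

t = np.arange(0.0, 1.0, 1.0 / fs)
impulse_response = np.exp(-np.pi * f_band * eta_true * t) * np.cos(2 * np.pi * f_band * t)

envelope_db = 20.0 * np.log10(np.abs(hilbert(impulse_response)) + 1e-12)

mask = t < 0.2                                   # fit the early, linear part of the decay
slope, _ = np.polyfit(t[mask], envelope_db[mask], 1)
decay_rate = -slope                              # dB per second
eta_est = decay_rate / (27.3 * f_band)

print(f"true loss factor = {eta_true:.4f}, IRDM estimate = {eta_est:.4f}")
```

In measured data the fit window and the influence of connected subsystems matter, which is the sensitivity the paper examines.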
Bayesian Factor Analysis When Only a Sample Covariance Matrix Is Available
ERIC Educational Resources Information Center
Hayashi, Kentaro; Arav, Marina
2006-01-01
In traditional factor analysis, the variance-covariance matrix or the correlation matrix has often been a form of inputting data. In contrast, in Bayesian factor analysis, the entire data set is typically required to compute the posterior estimates, such as Bayes factor loadings and Bayes unique variances. We propose a simple method for computing…
ERIC Educational Resources Information Center
van den Broek, Ellen G. C.; Janssen, C. G. C.; van Ramshorst, T.; Deen, L.
2006-01-01
Background: The prevalence of visual impairments in people with severe and profound multiple disabilities (SPMD) is the subject of considerable debate and is difficult to assess. Methods: In a typical Dutch care organization, all clients with SPMD (n = 76) participated in the study and specific instruments adapted to these clients (requiring a…
ERIC Educational Resources Information Center
Jurbergs, Nichole; Palcic, Jennette L.; Kelley, Mary L.
2010-01-01
Daily Behavior Report Cards (DBRC), which typically require teachers to evaluate students' daily behavior and parents to provide contingent consequences, are an effective and acceptable method for improving children's classroom behavior. The current study evaluated whether parent involvement is an essential treatment component or whether teacher…
ERIC Educational Resources Information Center
Kristian, Kathleen E.; Friedbauer, Scott; Kabashi, Donika; Ferencz, Kristen M.; Barajas, Jennifer C.; O'Brien, Kelly
2015-01-01
Analysis of mercury in fish is an interesting problem with the potential to motivate students in chemistry laboratory courses. The recommended method for mercury analysis in fish is cold vapor atomic absorption spectroscopy (CVAAS), which requires homogeneous analyte solutions, typically prepared by acid digestion. Previously published digestion…
ERIC Educational Resources Information Center
Iarocci, Grace; Yager, Jodi; Elfers, Theo
2007-01-01
Social competence is a complex human behaviour that is likely to involve a system of genes that interacts with a myriad of environmental risk and protective factors. The search for its genetic and environmental origins and influences is equally complex and will require a multidimensional conceptualization and multiple methods and levels of…
ERIC Educational Resources Information Center
Searight, H. Russell; Ratwik, Susan; Smith, Todd
2010-01-01
Many undergraduate programs require students to complete an independent research project in their major field prior to graduation. These projects are typically described as opportunities for integration of coursework and a direct application of the methods of inquiry specific to a particular discipline. Evaluations of curricular projects have…
Effectiveness of a College-Level Self-Management Course on Successful Behavior Change
ERIC Educational Resources Information Center
Choi, Jean H.; Chung, Kyong-Mee
2012-01-01
Studies have shown that college-level self-management (SM) courses, which typically require students to complete an individual project as part of the course, can be an effective method for promoting successful self-change (i.e., targeted behavioral change). However, only a handful of studies have focused on and investigated the intensity of the SM…
The issues of weed infestation with environmentally hazardous plants and methods of their control
NASA Astrophysics Data System (ADS)
Bogdanov, V. L.; Posternak, T. S.; Pasko, O. A.; Kovyazin, V. F.
2016-09-01
The authors analyze the expansion of segetal and ruderal vegetation on agricultural lands in Leningrad and Tomsk oblasts, typical of the European and Asian parts of Russia, respectively. The spreading conditions, composition of species, biological features and ecological requirements of the most aggressive species are identified. Some effective ways of weed control are suggested.
Pickl, Karin E; Adamek, Viktor; Gorges, Roland; Sinner, Frank M
2011-07-15
Due to increased regulatory requirements, the interaction of active pharmaceutical ingredients with various surfaces and solutions during production and storage is gaining interest in the pharmaceutical research field, in particular with respect to the development of new formulations, new packaging materials and the evaluation of cleaning processes. Experimental adsorption/absorption studies as well as the study of cleaning processes require sophisticated analytical methods with high sensitivity for the drug of interest. In the case of 2,6-diisopropylphenol - a small lipophilic drug which is typically formulated as a lipid emulsion for intravenous injection - a highly sensitive method in the μg/l concentration range, suitable to be applied to a variety of different sample matrices including lipid emulsions, is needed. We hereby present a headspace solid-phase microextraction (HS-SPME) approach as a simple cleanup procedure for sensitive 2,6-diisopropylphenol quantification from diverse matrices, choosing a lipid emulsion as the most challenging matrix with regard to complexity. By combining the simple and straightforward HS-SPME sample pretreatment with an optimized GC-MS quantification method, a robust and sensitive method for 2,6-diisopropylphenol was developed. This method shows excellent sensitivity in the low μg/l concentration range (5-200 μg/l), good accuracy (94.8-98.8%) and precision (intra-day precision 0.1-9.2%, inter-day precision 2.0-7.7%). The method can be easily adapted to other, less complex, matrices such as water or swab extracts. Hence, the presented method holds the potential to serve as a single and simple analytical procedure for 2,6-diisopropylphenol analysis in various types of samples, such as is required in, e.g., adsorption/absorption studies, which typically deal with a variety of different surfaces (steel, plastic, glass, etc.) and solutions/matrices including lipid emulsions. Copyright © 2011 Elsevier B.V. All rights reserved.
Aeroelastic Calculations Using CFD for a Typical Business Jet Model
NASA Technical Reports Server (NTRS)
Gibbons, Michael D.
1996-01-01
Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M∞ = 0.628 to M∞ = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects, while the TSD and Euler methods used here provide good results at the lower Mach numbers.
Quantifying induced effects of subsurface renewable energy storage
NASA Astrophysics Data System (ADS)
Bauer, Sebastian; Beyer, Christof; Pfeiffer, Tilmann; Boockmeyer, Anke; Popp, Steffi; Delfs, Jens-Olaf; Wang, Bo; Li, Dedong; Dethlefsen, Frank; Dahmke, Andreas
2015-04-01
New methods and technologies for energy storage are required for the transition to renewable energy sources. Subsurface energy storage systems such as salt caverns or porous formations offer the possibility of hosting large amounts of energy or substances. When employing these systems, an adequate system and process understanding is required in order to assess the feasibility of the individual storage option at the respective site and to predict the complex and interacting effects induced. This understanding is the basis for assessing the potential as well as the risks connected with a sustainable usage of these storage options, especially when considering possible mutual influences. To achieve this aim, in this work synthetic scenarios for the use of the geological underground as an energy storage system are developed and parameterized. The scenarios are designed to represent typical conditions in North Germany. The types of subsurface use investigated here include gas storage and heat storage in porous formations. The scenarios are numerically simulated and interpreted with regard to risk analysis and effect forecasting. For this, the numerical simulators Eclipse and OpenGeoSys are used. The latter is enhanced to include the required coupled hydraulic, thermal, geomechanical and geochemical processes. Using the simulated and interpreted scenarios, the induced effects are quantified individually and monitoring concepts for observing these effects are derived. This presentation will detail the general investigation concept used and analyze the parameter availability for this type of model application. Then the process implementation and numerical methods required and applied for simulating the induced effects of subsurface storage are detailed and explained. Application examples show the developed methods and quantify induced effects and storage sizes for the typical settings parameterized. This work is part of the ANGUS+ project, funded by the German Ministry of Education and Research (BMBF).
Tests of cosmic ray radiography for power industry applications
NASA Astrophysics Data System (ADS)
Durham, J. M.; Guardincerri, E.; Morris, C. L.; Bacon, J.; Fabritius, J.; Fellows, S.; Poulson, D.; Plaud-Ramos, K.; Renshaw, J.
2015-06-01
In this report, we assess muon multiple scattering tomography as a non-destructive inspection technique in several typical areas of interest to the nuclear power industry, including monitoring concrete degradation, gate valve conditions, and pipe wall thickness. This work is motivated by the need for imaging methods that do not require the licensing, training, and safety controls of x-rays, and by the need to be able to penetrate considerable overburden to examine internal details of components that are otherwise inaccessible, with minimum impact on industrial operations. In some scenarios, we find that muon tomography may be an attractive alternative to more typical measurements.
Tests of cosmic ray radiography for power industry applications
Durham, J. M.; Guardincerri, E.; Morris, C. L.; ...
2015-06-30
In this report, we assess muon multiple scattering tomography as a non-destructive inspection technique in several typical areas of interest to the nuclear power industry, including monitoring concrete degradation, gate valve conditions, and pipe wall thickness. This work is motivated by the need for imaging methods that do not require the licensing, training, and safety controls of x-rays, and by the need to be able to penetrate considerable overburden to examine internal details of components that are otherwise inaccessible, with minimum impact on industrial operations. In some instances, we find that muon tomography may be an attractive alternative to more typical measurements.
Non-steady state modelling of wheel-rail contact problem
NASA Astrophysics Data System (ADS)
Guiral, A.; Alonso, A.; Baeza, L.; Giménez, J. G.
2013-01-01
Among all the algorithms for solving the wheel-rail contact problem, Kalker's FastSim has become the most useful computational tool, since it combines a low computational cost with sufficient precision for most typical railway dynamics problems. However, some types of dynamic problems require the use of a non-steady state analysis. Alonso and Giménez developed a non-stationary method based on FastSim, which provides both sufficiently accurate results and a low computational cost. However, it presents some limitations: the method is developed for one time-dependent creepage, and its accuracy for varying normal forces has not been checked. This article presents the changes required to deal with both problems and compares the results with those given by Kalker's Variational Method for rolling contact.
Zonal methods for the parallel execution of range-limited N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Kevin J.; Dror, Ron O.; Shaw, David E.
2007-01-20
Particle simulations in fields ranging from biochemistry to astrophysics require the evaluation of interactions between all pairs of particles separated by less than some fixed interaction radius. The applicability of such simulations is often limited by the time required for calculation, but the use of massive parallelism to accelerate these computations is typically limited by inter-processor communication requirements. Recently, Snir [M. Snir, A note on N-body computations with cutoffs, Theor. Comput. Syst. 37 (2004) 295-318] and Shaw [D.E. Shaw, A fast, scalable method for the parallel evaluation of distance-limited pairwise particle interactions, J. Comput. Chem. 26 (2005) 1318-1328] independently introduced two distinct methods that offer asymptotic reductions in the amount of data transferred between processors. In the present paper, we show that these schemes represent special cases of a more general class of methods, and introduce several new algorithms in this class that offer practical advantages over all previously described methods for a wide range of problem parameters. We also show that several of these algorithms approach an approximate lower bound on inter-processor data transfer.
A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.
Pagoulatos, N; Haynor, D R; Kim, Y
2001-09-01
We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.
Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection
NASA Astrophysics Data System (ADS)
Brasche, L. J. H.; Lopez, R.; Eisenmann, D.
2006-03-01
Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d), usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and in some cases formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.
Gaussian Processes for Data-Efficient Learning in Robotics and Control.
Deisenroth, Marc Peter; Fox, Dieter; Rasmussen, Carl Edward
2015-02-01
Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
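The core ingredient described above is a Gaussian process (GP) transition model learned from few interactions. The sketch below is plain GP regression with a squared-exponential kernel on a one-dimensional toy transition function; it is a generic illustration, not the authors' full model-based policy search, and the kernel hyperparameters are fixed by hand rather than optimized.

```python
import numpy as np

# Generic GP regression sketch (squared-exponential kernel, fixed
# hyperparameters) illustrating a probabilistic one-step transition model
# learned from few interactions. Not the authors' full policy search method;
# the toy dynamics below are invented.

def true_dynamics(x):
    return 0.8 * x + 0.5 * np.sin(x)          # unknown one-step transition (toy)

def sq_exp_kernel(a, b, length=1.0, signal_var=1.0):
    d = a[:, None] - b[None, :]
    return signal_var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=12)               # few interactions with the system
y = true_dynamics(X) + 0.05 * rng.standard_normal(X.size)

noise_var = 0.05 ** 2
K = sq_exp_kernel(X, X) + noise_var * np.eye(X.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

X_test = np.linspace(-3, 3, 7)
K_star = sq_exp_kernel(X_test, X)
mean = K_star @ alpha                                                   # predictive mean
v = np.linalg.solve(L, K_star.T)
var = np.diag(sq_exp_kernel(X_test, X_test)) - np.sum(v ** 2, axis=0)   # predictive variance

for xt, m, s2 in zip(X_test, mean, var):
    print(f"x = {xt:+.2f}  predicted next state = {m:+.3f} +/- {np.sqrt(max(s2, 0)):.3f}")
```

The predictive variance is what allows model uncertainty to be carried through long-term planning, the feature the abstract identifies as key to data efficiency.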
Potential for yield improvement in combined rip-first and crosscut-first rough mill processing
Ed Thomas; Urs Buehlmann
2016-01-01
Traditionally, lumber cutting systems in rough mills have either first ripped lumber into wide strips and then crosscut the resulting strips into component lengths (rip-first), or first crosscut the lumber into component lengths, then ripped the segments to the required widths (crosscut-first). Each method has its advantages and disadvantages. Crosscut-first typically...
ERIC Educational Resources Information Center
Pereira, Valerie J.; Sell, Debbie; Tuomainen, Jyrki
2013-01-01
Background: Abnormal facial growth is a well-known sequelae of cleft lip and palate (CLP) resulting in maxillary retrusion and a class III malocclusion. In 10-50% of cases, surgical correction involving advancement of the maxilla typically by osteotomy methods is required and normally undertaken in adolescence when facial growth is complete.…
24 CFR 972.127 - Standards for determining whether a property is viable in the long term.
Code of Federal Regulations, 2010 CFR
2010-04-01
... must not exceed the Section 8 cost under the method contained in the Appendix to this part, even if the... housing in the community (typically family). (c) A greater income mix can be achieved. (1) Measures generally will be required to broaden the range of resident incomes over time to include a significant mix...
LPTA Versus Tradeoff: How Procurement Methods Can Impact Contract Performance
2015-06-01
…Blanket Purchase Agreements (BPAs), which utilize streamlined contracting in the form of orders to award requirements faster. Under IDIQs, GSA vehicles…and BPA agreements the rates are typically pre-negotiated with the set of vendors, leaving little necessity for negotiation and tradeoff tactics
Soil Stabilization for Roadways and Airfields
1987-07-01
…the possibility of accidents and minimize health hazards. All Occupational Safety and Health Act requirements shall be observed.
2008-03-01
…(BRB, 1991, p. 1-2) Additionally, public sector organizations typically have a larger inventory of facilities to maintain, making asset management…questions were answered. 1. What are the long-term causes and effects of under-funding the maintenance of facilities? 2. What methods currently…
Visual and tactile interfaces for bi-directional human robot communication
NASA Astrophysics Data System (ADS)
Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin
2013-05-01
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of arm and hand gestures for communication with a robot without the requirement of line-of-sight needed by computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that the robots have the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalent visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.
NASA Astrophysics Data System (ADS)
Vicuña, Cristián Molina; Höweler, Christoph
2017-12-01
The use of AE in machine failure diagnosis has increased in recent years. Most AE-based failure diagnosis strategies use digital signal processing and thus require the sampling of AE signals. High sampling rates are required for this purpose (e.g. 2 MHz or higher), leading to streams of large amounts of data. This situation is aggravated if fine resolution and/or multiple sensors are required. These facts combine to produce bulky data, typically in the range of GBytes, for which sufficient storage space and efficient signal processing algorithms are required. This situation probably explains why, in practice, AE-based methods consist mostly of the calculation of scalar quantities such as RMS and kurtosis, and the analysis of their evolution in time. While the scalar-based approach offers the advantage of maximum data reduction, it has the disadvantage that most of the information contained in the raw AE signal is lost unrecoverably. This work presents a method offering large data reduction while keeping the most important information conveyed by the raw AE signal, useful for failure detection and diagnosis. The proposed method consists of the construction of a synthetic, unevenly sampled signal which envelops the AE bursts present in the raw AE signal in a triangular shape. The constructed signal - which we call TriSignal - also permits the estimation of most scalar quantities typically used for failure detection. More importantly, it contains the information on the time of occurrence of the bursts, which is key for failure diagnosis. The Lomb-Scargle normalized periodogram is used to construct the TriSignal spectrum, which reveals the frequency content of the TriSignal and provides the same information as the classic AE envelope. The paper includes application examples for a planetary gearbox and a low-speed rolling element bearing.
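The data-reduction idea can be sketched generically: detect the AE bursts, keep only a few envelope points per burst (here just the burst peak, a simplified stand-in for the triangular vertices, whose exact construction rules are not given in the abstract), and analyze the resulting unevenly sampled sequence with the Lomb-Scargle periodogram. The burst rate, carrier, and amplitude modulation below are invented so that the reduced signal has a detectable diagnostic frequency.

```python
import numpy as np
from scipy.signal import hilbert, lombscargle

# Sketch of the general idea only (not the authors' exact TriSignal rules):
# reduce each detected AE burst to its peak point, producing a short, unevenly
# sampled signal in place of the bulky raw stream, then analyze it with the
# Lomb-Scargle periodogram, which accepts uneven sampling. All parameters are
# invented for illustration.

fs = 1.0e6                      # raw AE sampling rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)

burst_rate = 120.0              # bursts per second (assumed)
f_mod = 13.0                    # burst-amplitude modulation, e.g. a fault frequency
signal = 0.005 * rng.standard_normal(t.size)
burst_times = np.arange(0.002, 0.99, 1.0 / burst_rate)
burst_times += rng.uniform(-1e-3, 1e-3, burst_times.size)     # uneven occurrence
for t0 in burst_times:
    amp = 1.0 + 0.5 * np.sin(2 * np.pi * f_mod * t0)
    idx = (t >= t0) & (t < t0 + 5e-4)
    signal[idx] += amp * np.exp(-(t[idx] - t0) / 1e-4) * np.sin(2 * np.pi * 2e5 * (t[idx] - t0))

# Detect bursts on the Hilbert envelope and keep only (peak time, peak value).
env = np.abs(hilbert(signal))
above = env > 0.1
edges = np.flatnonzero(np.diff(above.astype(int)))
starts, ends = edges[0::2] + 1, edges[1::2] + 1
peak_idx = np.array([s + np.argmax(env[s:e]) for s, e in zip(starts, ends)])
peak_t, peak_a = t[peak_idx], env[peak_idx]

# Periodogram of the strongly compressed, unevenly sampled burst sequence.
freqs = np.linspace(2.0, 60.0, 1000)
pgram = lombscargle(peak_t, peak_a - peak_a.mean(), 2 * np.pi * freqs)
print(f"raw samples: {t.size}, kept points: {peak_t.size}")
print(f"periodogram peak at {freqs[np.argmax(pgram)]:.1f} Hz (modulation set to {f_mod} Hz)")
```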
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
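The Pareto-optimal selection at the heart of the PMOGO approach can be illustrated with a short, generic sketch: given candidate models scored on two objectives (say, the misfits to two geophysical data sets), the non-dominated subset is the suite of solutions the inversion would report. The sketch below is a bare dominance filter, not the global optimizer itself, and the objective values are random stand-ins.

```python
import numpy as np

# Generic Pareto (non-dominated) filter -- the selection rule behind PMOGO
# methods, not the global optimizer itself. Each row of `objectives` holds the
# objective values (e.g. misfit to data set 1, misfit to data set 2) for one
# candidate model; all objectives are to be minimized.

def pareto_front(objectives):
    """Return indices of candidates not dominated by any other candidate."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        better_or_equal = np.all(objectives <= objectives[i], axis=1)
        strictly_better = np.any(objectives < objectives[i], axis=1)
        if np.any(better_or_equal & strictly_better):   # someone dominates candidate i
            keep[i] = False
    return np.flatnonzero(keep)

rng = np.random.default_rng(0)
objectives = rng.random((200, 2))          # stand-in misfit pairs for 200 candidate models
front = pareto_front(objectives)
print(f"{front.size} of {objectives.shape[0]} candidate models are Pareto-optimal")
```

Reporting the whole non-dominated suite, rather than a single weighted-sum minimizer, is what avoids the difficult choice of objective weights noted above.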
Age-Related Brain Activation Changes during Rule Repetition in Word-Matching.
Methqal, Ikram; Pinsard, Basile; Amiri, Mahnoush; Wilson, Maximiliano A; Monchi, Oury; Provost, Jean-Sebastien; Joanette, Yves
2017-01-01
Objective: The purpose of this study was to explore the age-related brain activation changes during a word-matching semantic-category-based task, which required either repeating or changing a semantic rule to be applied. In order to do so, a word-semantic rule-based task was adapted from the Wisconsin Sorting Card Test, involving the repeated feedback-driven selection of given pairs of words based on semantic category-based criteria. Method: Forty healthy adults (20 younger and 20 older) performed a word-matching task while undergoing a fMRI scan in which they were required to pair a target word with another word from a group of three words. The required pairing is based on three word-pair semantic rules which correspond to different levels of semantic control demands: functional relatedness, moderately typical-relatedness (which were considered as low control demands), and atypical-relatedness (high control demands). The sorting period consisted of a continuous execution of the same sorting rule and an inferred trial-by-trial feedback was given. Results: Behavioral performance revealed increases in response times and decreases of correct responses according to the level of semantic control demands (functional vs. typical vs. atypical) for both age groups (younger and older) reflecting graded differences in the repetition of the application of a given semantic rule. Neuroimaging findings of significant brain activation showed two main results: (1) Greater task-related activation changes for the repetition of the application of atypical rules relative to typical and functional rules, and (2) Changes (older > younger) in the inferior prefrontal regions for functional rules and more extensive and bilateral activations for typical and atypical rules. Regarding the inter-semantic rules comparison, only task-related activation differences were observed for functional > typical (e.g., inferior parietal and temporal regions bilaterally) and atypical > typical (e.g., prefrontal, inferior parietal, posterior temporal, and subcortical regions). Conclusion: These results suggest that healthy cognitive aging relies on the adaptive changes of inferior prefrontal resources involved in the repetitive execution of semantic rules, thus reflecting graded differences in support of task demands.
Avionics System Architecture for the NASA Orion Vehicle
NASA Technical Reports Server (NTRS)
Baggerman, Clint; McCabe, Mary; Verma, Dinesh
2009-01-01
It has been 30 years since the National Aeronautics and Space Administration (NASA) last developed a crewed spacecraft capable of launch, on-orbit operations, and landing. During that time, aerospace avionics technologies have greatly advanced in capability, and these technologies have enabled integrated avionics architectures for aerospace applications. The inception of NASA's Orion Crew Exploration Vehicle (CEV) spacecraft offers the opportunity to leverage the latest integrated avionics technologies into crewed space vehicle architecture. The outstanding question is to what extent to implement these advances in avionics while still meeting the unique crewed spaceflight requirements for safety, reliability and maintainability. Historically, aircraft and spacecraft have very similar avionics requirements. Both aircraft and spacecraft must have high reliability. They also must have as much computing power as possible and provide low latency between user control and effector response while minimizing weight, volume, and power. However, there are several key differences between aircraft and spacecraft avionics. Typically, the overall spacecraft operational time is much shorter than aircraft operation time, but the typical mission time (and hence, the time between preventive maintenance) is longer for a spacecraft than an aircraft. Also, the radiation environment is typically more severe for spacecraft than aircraft. A "loss of mission" scenario (i.e., the mission is not a success, but there are no casualties) arguably has a greater impact on a multi-million dollar spaceflight mission than a typical commercial flight. Such differences need to be weighed when determining if an aircraft-like integrated modular avionics (IMA) system is suitable for a crewed spacecraft. This paper will explore the preliminary design process of the Orion vehicle avionics system by first identifying the Orion driving requirements and the differences between Orion requirements and those of previous crewed spacecraft avionics systems. Common systems engineering methods will be used to evaluate the value propositions, or the factors that weigh most heavily in design consideration, of Orion and other aerospace systems. Then, the current Orion avionics architecture will be presented and evaluated.
Practical Use of Computationally Frugal Model Analysis Methods
Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...
2015-03-21
Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
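One concrete example of a frugal, local method is finite-difference sensitivity analysis: one base run plus one perturbed run per parameter, i.e. on the order of 10s of parallelizable model runs for typical parameter counts. The sketch below is a generic illustration, not code from UCODE or the paper; the toy forward model stands in for an environmental simulator.

```python
import numpy as np

# Generic illustration of a computationally frugal, local sensitivity analysis
# (not UCODE): one base run plus one perturbed run per parameter gives
# finite-difference sensitivities of every simulated value to every parameter.
# The "model" below is an invented stand-in for the environmental simulator.

def model(params):
    """Toy forward model: simulated values for a parameter vector."""
    k, s, r = params
    x = np.linspace(0.0, 1.0, 6)
    return k * np.exp(-s * x) + r * x

base_params = np.array([2.0, 1.5, 0.3])
base_out = model(base_params)                 # 1 base run

rel_step = 0.01                               # 1% perturbation per parameter
sens = np.zeros((base_out.size, base_params.size))
for j, p in enumerate(base_params):           # + 1 run per parameter
    perturbed = base_params.copy()
    perturbed[j] = p * (1.0 + rel_step)
    sens[:, j] = (model(perturbed) - base_out) / (p * rel_step)

# Scale to "percent output change per percent parameter change" so parameters
# with different units can be compared.
scaled = sens * base_params[None, :] / base_out[:, None]
print("total model runs:", 1 + base_params.size)
print("most influential parameter per observation:", np.argmax(np.abs(scaled), axis=1))
```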
Alternative Methods for Assessing Contaminant Transport from the Vadose Zone to Indoor Air
NASA Astrophysics Data System (ADS)
Baylor, K. J.; Lee, A.; Reddy, P.; Plate, M.
2010-12-01
Vapor intrusion, which is the transport of contaminant vapors from groundwater and the vadose zone to indoor air, has emerged as a significant human health risk near hazardous waste sites. Volatile organic compounds (VOCs) such as trichloroethylene (TCE) and tetrachloroethylene (PCE) can volatilize from groundwater and from residual sources in the vadose zone and enter homes and commercial buildings through cracks in the slab, plumbing conduits, or other preferential pathways. Assessment of the vapor intrusion pathway typically requires collection of groundwater, soil gas, and indoor air samples, a process which can be expensive and time-consuming. We evaluated three alternative vapor intrusion assessment methods, including 1) use of radon as a surrogate for vapor intrusion, 2) use of pressure differential measurements between indoor/outdoor and indoor/subslab to assess the potential for vapor intrusion, and 3) use of passive, longer-duration sorbent methods to measure indoor air VOC concentrations. The primary test site, located approximately 30 miles south of San Francisco, was selected due to the presence of TCE (10 - 300 ug/L) in shallow groundwater (5 to 10 feet bgs). At this test site, we found that radon was not a suitable surrogate to assess vapor intrusion and that pressure differential measurements are challenging to implement and equipment-intensive. More significantly, we found that the passive, longer-duration sorbent methods are easy to deploy and compared well quantitatively with standard indoor air sampling methods. The sorbent technique is less than half the cost of typical indoor air methods, and also provides a longer duration sample, typically 3 to 14 days rather than 8 to 24 hours for standard methods. The passive sorbent methods can be a reliable, cost-effective, and easy way to sample for TCE, PCE and other VOCs as part of a vapor intrusion investigation.
Four-body trajectory optimization
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1973-01-01
A collection of typical three-body trajectories from the L1 libration point on the sun-earth line to the earth is presented. These trajectories in the sun-earth system are grouped into four distinct families which differ in transfer time and delta V requirements. Curves showing the variations of delta V with respect to transfer time, and typical two- and three-impulse primer vector histories, are included. The development of a four-body trajectory optimization program to compute fuel-optimal trajectories between the earth and a point in the sun-earth-moon system is also discussed. Methods for generating fuel-optimal two-impulse trajectories which originate at the earth or a point in space, and fuel-optimal three-impulse trajectories between two points in space, are presented. A brief qualitative comparison of these methods is given. An example of a four-body two-impulse transfer from the L1 libration point to the earth is included.
Structural Code Considerations for Solar Rooftop Installations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dwyer, Stephen F.; Dwyer, Brian P.; Sanchez, Alfred
2014-12-01
Residential rooftop solar panel installations are limited in part by the high cost of structural related code requirements for field installation. Permitting solar installations is difficult because there is a belief among residential permitting authorities that typical residential rooftops may be structurally inadequate to support the additional load associated with a photovoltaic (PV) solar installation. Typical engineering methods utilized to calculate stresses on a roof structure involve simplifying assumptions that render a complex non-linear structure to a basic determinate beam. This method of analysis neglects the composite action of the entire roof structure, yielding a conservative analysis based on a rafter or top chord of a truss. Consequently, the analysis can result in an overly conservative structural analysis. A literature review was conducted to gain a better understanding of the conservative nature of the regulations and codes governing residential construction and the associated structural system calculations.
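The "determinate beam" simplification mentioned above can be made concrete with a short sketch: a single rafter treated as a simply supported beam under uniform load, checked with the textbook relations M = wL^2/8 and sigma = Mc/I. All numbers below are invented for illustration and are not design values; the point of the report is precisely that this member-by-member check ignores the composite action of the full roof system.

```python
# Illustrative check of a single rafter treated as a simply supported beam
# under uniform load (the conventional, conservative simplification described
# above). Textbook relations: M_max = w*L^2/8, sigma = M*c/I. All numbers are
# invented for illustration only, not design values.

span_ft = 14.0                      # rafter span (assumed)
spacing_ft = 2.0                    # rafter spacing (assumed)
dead_plus_live_psf = 20.0           # roof load without PV, psf (assumed)
pv_psf = 3.0                        # added photovoltaic system load, psf (assumed)

b_in, d_in = 1.5, 5.5               # nominal 2x6 rafter section (assumed)
I_in4 = b_in * d_in ** 3 / 12.0     # second moment of area, in^4
c_in = d_in / 2.0                   # distance to extreme fiber, in

def bending_stress_psi(area_load_psf):
    w_plf = area_load_psf * spacing_ft                 # line load on one rafter, lb/ft
    m_max_lbin = w_plf * span_ft ** 2 / 8.0 * 12.0     # max moment, lb-in
    return m_max_lbin * c_in / I_in4

before = bending_stress_psi(dead_plus_live_psf)
after = bending_stress_psi(dead_plus_live_psf + pv_psf)
print(f"bending stress without PV: {before:.0f} psi, with PV: {after:.0f} psi "
      f"({100 * (after / before - 1):.0f}% increase)")
```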
A novel method for creating custom shaped ballistic gelatin trainers using plaster molds.
Doctor, Michael; Katz, Anne; McNamara, Shannon O; Leifer, Jessica H; Bambrick-Santoyo, Gabriela; Saul, Turandot; Rose, Keith M
2018-03-01
Simulation-based procedural training is an effective and frequently used method for teaching vascular access techniques, which often requires commercial trainers. These can be prohibitively expensive, which makes homemade trainers made of gelatin a more cost-effective and attractive option. Previously described trainers are often rectangular with a flat surface that is dissimilar to human anatomy. We describe a novel method to create a more anatomically realistic trainer using ballistic gelatin, household items, and supplies commonly found in an emergency department, such as the plaster wrap typically used to make splints.
Robertson, Scott
2014-11-01
Analog gravity experiments make feasible the realization of black hole space-times in a laboratory setting and the observational verification of Hawking radiation. Since such analog systems are typically dominated by dispersion, efficient techniques for calculating the predicted Hawking spectrum in the presence of strong dispersion are required. In the preceding paper, an integral method in Fourier space is proposed for stationary 1+1-dimensional backgrounds which are asymptotically symmetric. Here, this method is generalized to backgrounds which are different in the asymptotic regions to the left and right of the scattering region.
High strength air-dried aerogels
Coronado, Paul R.; Satcher, Jr., Joe H.
2012-11-06
A method for the preparation of high-strength air-dried organic aerogels. The method involves the sol-gel polymerization of organic gel precursors, such as resorcinol with formaldehyde (RF), in aqueous solvents with R/C ratios greater than about 1000 and R/F ratios less than about 1:2.1. Using a procedure analogous to the preparation of resorcinol-formaldehyde (RF) aerogels, this approach generates wet gels that can be air dried at ambient temperatures and pressures. The method significantly reduces the time and/or energy required to produce a dried aerogel compared to conventional methods using supercritical solvent extraction. The air-dried gel typically exhibits less than 5% shrinkage.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most commonly encountered in practice have also been published for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the product method is recommended for use in practice because of its lower computational load compared to the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
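The Sobel statistic referred to above has a closed form, z = ab / sqrt(a^2 SE_b^2 + b^2 SE_a^2), where a and b are the two mediation path estimates. The sketch below illustrates, for a simple single-level mediation model rather than the article's multilevel longitudinal model, how power for the Sobel test at a given sample size can be estimated by simulation; the effect sizes and sample sizes are invented, and the direct effect is assumed to be zero so that Y is regressed on M alone for brevity.

```python
import numpy as np

# Simulation-based power estimate for the Sobel test in a simple single-level
# mediation model X -> M -> Y (not the article's multilevel longitudinal
# model). Effect sizes and sample sizes below are invented illustrations; the
# direct effect of X on Y is assumed zero, so Y is regressed on M alone.

rng = np.random.default_rng(0)

def ols_slope(x, y):
    """Slope and its standard error for simple regression of y on x."""
    xc = x - x.mean()
    slope = np.dot(xc, y) / np.dot(xc, xc)
    resid = y - y.mean() - slope * xc
    se = np.sqrt(np.sum(resid ** 2) / (len(x) - 2) / np.dot(xc, xc))
    return slope, se

def sobel_power(n, a=0.3, b=0.3, n_sim=2000):
    z_crit = 1.96                                # two-sided alpha = 0.05
    hits = 0
    for _ in range(n_sim):
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)
        y = b * m + rng.standard_normal(n)
        a_hat, se_a = ols_slope(x, m)
        b_hat, se_b = ols_slope(m, y)
        z = a_hat * b_hat / np.sqrt(a_hat ** 2 * se_b ** 2 + b_hat ** 2 * se_a ** 2)
        hits += abs(z) > z_crit
    return hits / n_sim

for n in (50, 100, 200):
    print(f"n = {n:4d}  estimated power of Sobel test ~ {sobel_power(n):.2f}")
```

Repeating such a simulation over a grid of sample sizes and picking the smallest n reaching 80% power is the basic logic behind the published tables.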
Rapid Analysis of Copper Ore in Pre-Smelter Head Flow Slurry by Portable X-ray Fluorescence.
Burnett, Brandon J; Lawrence, Neil J; Abourahma, Jehad N; Walker, Edward B
2016-05-01
Copper-laden ore is often concentrated using flotation. Before the head flow slurry can be smelted, it is important to know the concentration of copper and contaminants. The concentrations of copper and other elements fluctuate significantly in the head flow, often requiring modification of the concentrations in the slurry prior to smelting. A rapid, real-time analytical method is needed to support on-site optimization of the smelter feedstock. A portable, handheld X-ray fluorescence spectrometer was utilized to determine the copper concentration in a head flow suspension at the slurry origin. The method requires only seconds and is reliable for copper concentrations of 2.0-25%, the range typically encountered in such slurries. © The Author(s) 2016.
Improvements in surface singularity analysis and design methods. [applicable to airfoils
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1979-01-01
The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
Recycling microcavity optical biosensors.
Hunt, Heather K; Armani, Andrea M
2011-04-01
Optical biosensors have tremendous potential for commercial applications in medical diagnostics, environmental monitoring, and food safety evaluation. In these applications, sensor reuse is desirable to reduce costs. To achieve this, harsh, wet chemistry treatments are required to remove surface chemistry from the sensor, typically resulting in reduced sensor performance and increased noise due to recognition moiety and optical transducer degradation. In the present work, we suggest an alternative, dry-chemistry method, based on O2 plasma treatment. This approach is compatible with typical fabrication of substrate-based optical transducers. This treatment completely removes the recognition moiety, allowing the transducer surface to be refreshed with new recognition elements and thus enabling the sensor to be recycled.
Schulze, H Georg; Turner, Robin F B
2014-01-01
Charge-coupled device detectors are vulnerable to cosmic rays that can contaminate Raman spectra with positive-going spikes. Because spikes can adversely affect spectral processing and data analyses, they must be removed. Although both hardware-based and software-based spike removal methods exist, they typically require parameter and threshold specification dependent on well-considered user input. Here, we present a fully automated spike removal algorithm that proceeds without requiring user input. It is minimally dependent on sample attributes, and those that are required (e.g., standard deviation of spectral noise) can be determined with other fully automated procedures. At the core of the method is the identification and location of spikes with coincident second derivatives along both the spectral and spatiotemporal dimensions of two-dimensional datasets. The method can be applied to spectra that are relatively inhomogeneous because it provides fairly effective and selective targeting of spikes, resulting in minimal distortion of spectra. Relatively effective spike removal obtained with full automation could provide substantial benefits to users where large numbers of spectra must be processed.
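The core idea, flagging points whose second derivatives are simultaneously large along the wavenumber axis and along the acquisition (spatiotemporal) axis, can be sketched as follows; the threshold, noise estimate, and local-median replacement below are illustrative choices, not the published algorithm.

```python
# Sketch of coincident second-derivative spike flagging for a 2-D Raman dataset
# (rows = successive spectra, columns = wavenumber channels). Illustrative only.
import numpy as np

def remove_spikes(spectra, k=6.0):
    d2_spec = np.zeros_like(spectra)
    d2_time = np.zeros_like(spectra)
    d2_spec[:, 1:-1] = spectra[:, :-2] - 2 * spectra[:, 1:-1] + spectra[:, 2:]
    d2_time[1:-1, :] = spectra[:-2, :] - 2 * spectra[1:-1, :] + spectra[2:, :]
    # Robust noise scale from the median absolute deviation of the second differences.
    sigma = 1.4826 * np.median(np.abs(d2_spec - np.median(d2_spec)))
    spikes = (np.abs(d2_spec) > k * sigma) & (np.abs(d2_time) > k * sigma)
    out = spectra.copy()
    for r, c in zip(*np.where(spikes)):
        lo, hi = max(c - 3, 0), min(c + 4, spectra.shape[1])
        neighbours = np.r_[spectra[r, lo:c], spectra[r, c + 1:hi]]
        out[r, c] = np.median(neighbours)        # replace the spike by a local median
    return out
```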
Minimal residual method provides optimal regularization parameter for diffuse optical tomography
NASA Astrophysics Data System (ADS)
Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.
2012-10-01
The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
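For readers unfamiliar with automated parameter selection, the baseline against which MRM is compared, generalized cross-validation, can be sketched for a generic linear ill-posed problem y = J x + noise as below; this is a generic textbook illustration, not the authors' MRM implementation.

```python
# Generic sketch of automated Tikhonov-parameter selection by generalized
# cross-validation (GCV); J, y and the lambda grid are placeholders.
import numpy as np

def gcv_lambda(J, y, lambdas):
    """Return (lambda, GCV score, solution) minimizing the GCV criterion."""
    n = J.shape[0]
    best = None
    for lam in lambdas:
        A = J.T @ J + lam * np.eye(J.shape[1])
        x = np.linalg.solve(A, J.T @ y)
        H = J @ np.linalg.solve(A, J.T)          # influence matrix J (J'J + lam I)^-1 J'
        resid = y - J @ x
        score = (np.linalg.norm(resid) ** 2 / n) / ((np.trace(np.eye(n) - H) / n) ** 2)
        if best is None or score < best[1]:
            best = (lam, score, x)
    return best
```

In diffuse optical tomography, J would be the sensitivity (Jacobian) matrix linearized about the current estimate; the paper's contribution is to replace this GCV score with a regularized minimal-residual criterion.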
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.
1995-01-01
This guide describes the input data required for using ECAP2D (Euler Cascade Aeroelastic Program-Two Dimensional). ECAP2D can be used for steady or unsteady aerodynamic and aeroelastic analysis of two dimensional cascades. Euler equations are used to obtain aerodynamic forces. The structural dynamic equations are written for a rigid typical section undergoing pitching (torsion) and plunging (bending) motion. The solution methods include harmonic oscillation method, influence coefficient method, pulse response method, and time integration method. For harmonic oscillation method, example inputs and outputs are provided for pitching motion and plunging motion. For the rest of the methods, input and output for pitching motion only are given.
NASA Astrophysics Data System (ADS)
Izmaylov, R.; Lebedev, A.
2015-08-01
Centrifugal compressors are complex energy equipment. Automatic control and protection systems should meet requirements of operational reliability and durability. In turbocompressors there are at least two dangerous operating regimes: surge and rotating stall. Anti-surge protection systems usually use parametric or feature methods. As a rule, industrial systems are parametric. The main disadvantage of parametric anti-surge systems is the difficulty of mass flow measurement in natural gas pipeline compressors. The principal idea of the feature method is based on the experimental fact that, as a rule, just before the onset of surge, rotating stall or a stall precursor is established in the compressor. In this case the problem consists of detecting the characteristic signals in unsteady pressure or velocity fluctuations. Wavelet analysis is the best method for detecting the onset of rotating stall in spite of a high level of spurious signals (rotating wakes, turbulence, etc.). This method is compatible with state-of-the-art DSP systems for industrial control. Examples of applying wavelet analysis to detect the onset of rotating stall in typical centrifugal compressor stages are presented. The experimental investigations included unsteady pressure measurements and a sophisticated data acquisition system. The wavelet transforms used biorthogonal wavelets implemented in MATLAB.
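A rough sketch of the feature-based detection idea, a rise in the wavelet detail-coefficient energy of an unsteady pressure signal flagging a stall precursor, is given below; the wavelet family, decomposition level, windowing, and threshold are illustrative assumptions rather than the settings of the cited study (which used biorthogonal wavelets in MATLAB).

```python
# Illustrative sketch: monitor windowed wavelet detail energy of an unsteady
# pressure signal and flag a sudden rise as a rotating-stall precursor.
import numpy as np
import pywt

def stall_precursor_indicator(pressure, wavelet="bior3.5", level=4, win=256):
    """Return per-window detail energies and a boolean flag for suspect windows."""
    energies = []
    for start in range(0, len(pressure) - win + 1, win):
        seg = pressure[start:start + win]
        coeffs = pywt.wavedec(seg, wavelet, level=level)
        detail_energy = sum(np.sum(c ** 2) for c in coeffs[1:])  # skip approximation
        energies.append(detail_energy)
    energies = np.array(energies)
    baseline = np.median(energies[: max(3, len(energies) // 4)])  # early, stall-free windows
    return energies, energies > 5.0 * baseline                    # flagged windows
```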
Nogal, Paweł; Lewiński, Andrzej
2008-01-01
Anorexia nervosa is an eating disorder characterized by conscious restriction of food intake, which causes numerous metabolic and hormonal disorders. Knowledge of these changes is important due to the growing morbidity and mortality associated with anorexia nervosa. Treatment is difficult and requires the cooperation of a group of specialists, including an endocrinologist. The authors present the clinical picture, a view of the etiopathogenesis, and the typical disorders found in patients with this illness. Treatment methods are also discussed.
Michelle A. Jusino; Daniel Lindner; John K. Cianchetti; Adam T. Grisé; Nicholas J. Brazee; Jeffrey R. Walters
2014-01-01
Relationships among cavity-nesting birds, trees, and wood decay fungi pose interesting management challenges and research questions in many systems. Ornithologists need to understand the relationships between cavity-nesting birds and fungi in order to understand the habitat requirements of these birds. Typically, researchers rely on fruiting body surveys to identify...
ERIC Educational Resources Information Center
Moses, Tim
2006-01-01
Population invariance is an important requirement of test equating. An equating function is said to be population invariant when the choice of (sub)population used to compute the equating function does not matter. In recent studies, the extent to which equating functions are population invariant is typically addressed in terms of practical…
Dietary guidelines in the Czech Republic. II.: Nutritional profiles of food groups.
Brázdová, Z; Fiala, J; Bauerová, J; Mullerová, D
2000-11-01
Modern dietary guidelines set in terms of food groups are easy for target populations to use and understand, but rather complicated from the point of view of quantification, i.e. correctly setting the number of recommended servings for different population groups according to age, sex, physical activity and physiological status on the basis of the required intake of energy and individual nutrients. Because the guidelines use abstract, comprehensive food groups, a simple database of food tables giving the nutrient content of individual foods, rather than of their groups, cannot be used directly. Using groups requires that their nutritional profiles be established, i.e. that an average content of nutrients and energy be calculated for each group. To calculate nutritional profiles for the Czech dietary guidelines, the authors used three different methods: (1) Simple profiles, with all commodities with significant representation in the Czech food basket represented in equal amounts. (2) Profiles based on typical servings, with the same commodities as in (1) but in characteristic intake quantities (typical servings). (3) Food basket-based profiles, with the commodities constituting the Czech food basket in quantities identical to that basket. The results showed significant differences between profiles calculated by the different methods. Calculated nutrient intakes were particularly influenced by the size of typical servings, and it is therefore essential that realistic serving sizes be used in the calculations. The consistent use of recommended food items throughout all food groups and subgroups is very important. Specifying the number of servings from the five food groups is not enough if suitable food items are not chosen within the individual groups. On the basis of their findings, the authors fully recommend the use of nutritional profiles based on typical servings, which give a realistic idea of the probable energy and nutrient content of the recommended daily intake. In view of regional cultural differences, national nutritional profiles are of vital importance. Population studies investigating the size of typical servings and the most frequently occurring commodities in the food basket should be carried out every three years. Nutritional profiles designed in this way constitute an important starting point for setting national dietary guidelines, and for their implementation and revision.
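As a small illustration of what computing a nutritional profile involves, the sketch below averages nutrient contents over the commodities of a food group, weighted by typical serving sizes; the commodities and nutrient values are hypothetical, not the Czech food-basket data.

```python
# Illustrative sketch of a "typical servings" group profile: per-commodity
# nutrient contents (per 100 g) are scaled by typical serving weights and
# averaged over the group. Example values are hypothetical.
def group_profile(items):
    """items: list of (serving_size_g, {nutrient: amount per 100 g})."""
    profile = {}
    for size, nutrients in items:
        for name, per100 in nutrients.items():
            profile[name] = profile.get(name, 0.0) + per100 * size / 100.0
    n = len(items)
    avg_serving_g = sum(size for size, _ in items) / n
    return {k: v / n for k, v in profile.items()}, avg_serving_g

fruit_group = [
    (150, {"energy_kcal": 52, "vitamin_c_mg": 5}),   # apple, hypothetical values
    (120, {"energy_kcal": 89, "vitamin_c_mg": 9}),   # banana, hypothetical values
]
profile_per_serving, avg_serving = group_profile(fruit_group)
```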
NASA Astrophysics Data System (ADS)
Biondi, Gabriele; Mauro, Stefano; Pastorelli, Stefano; Sorli, Massimo
2018-05-01
One of the key functionalities required by an Active Debris Removal mission is the assessment of the target kinematics and inertial properties. Passive sensors, such as stereo cameras, are often included in the onboard instrumentation of a chaser spacecraft for capturing sequential photographs and for tracking features of the target surface. Many methods, based on Kalman filtering, are available for the estimation of the target's state from feature positions; however, to guarantee filter convergence, they typically require continuity of measurements and the capability of tracking a fixed set of pre-defined features of the object. These requirements clash with the actual tracking conditions: failures in feature detection often occur, and the assumption of having some a-priori knowledge about the shape of the target can be restrictive in certain cases. The aim of the presented work is to propose a fault-tolerant alternative method for estimating the angular velocity and the relative magnitudes of the principal moments of inertia of the target. Raw data regarding the positions of the tracked features are processed to evaluate corrupted values of a three-dimensional parameter which entirely describes the finite screw motion of the debris and which is largely invariant to the particular set of features considered. Missing values of the parameter are completely restored by exploiting the typical periodicity of the rotational motion of an uncontrolled satellite: compressed sensing techniques, typically adopted for recovering images or for prognostic applications, are used here in a completely original fashion for retrieving a kinematic signal that is sparse in the frequency domain. Due to this invariance with respect to the features, no assumptions are needed about the target's shape or the continuity of tracking. The obtained signal is useful for the indirect evaluation of an attitude signal that feeds an unscented Kalman filter for the estimation of the global rotational state of the target. Computer simulations showed good robustness of the method and its potential applicability for general motion conditions of the target.
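The recovery step can be illustrated with a generic compressed-sensing example: a signal that is sparse in a frequency (here DCT) basis is reconstructed from an incomplete, irregular subset of samples by L1-regularized regression. The basis, solver, and toy signal below are illustrative stand-ins for the paper's screw-motion parameter.

```python
# Illustrative compressed-sensing recovery of a frequency-sparse signal from
# gappy samples; not the paper's specific algorithm or signal.
import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import Lasso

n = 512
t = np.arange(n)
signal = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.cos(2 * np.pi * 12 * t / n)

rng = np.random.default_rng(0)
keep = np.sort(rng.choice(n, size=n // 4, replace=False))   # only 25% of frames tracked

# Sensing matrix: rows of the inverse-DCT basis at the observed instants.
A = idct(np.eye(n), norm="ortho", axis=0)
coefs = Lasso(alpha=1e-3, max_iter=50000).fit(A[keep, :], signal[keep]).coef_
recovered = A @ coefs                                        # full-length reconstruction
```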
Applications of hybrid genetic algorithms in seismic tomography
NASA Astrophysics Data System (ADS)
Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos
2011-11-01
Almost all earth sciences inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model, hence it is prone to solution-entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data-fit. A typical example is the class of methods known as genetic algorithms (GA), which achieves the aforementioned approximation through model representation and manipulations, and has attracted the attention of the earth sciences community during the last decade, with several applications already presented for various geophysical problems. In this paper, we examine the efficiency of combining typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively. The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that employ the same experimental geometry, wavelength and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used for testing the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
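A schematic of the hybrid loop, global exploration by a genetic algorithm with a local least-squares refinement of the best model each generation, is sketched below; the forward model, misfit, and GA settings are toy placeholders, not the paper's tomography implementation.

```python
# Schematic hybrid global/local optimization: a simple GA over model vectors,
# with the fittest individual of each generation refined by a local
# least-squares step. forward(model) stands in for a travel-time solver.
import numpy as np
from scipy.optimize import least_squares

def misfit(model, forward, t_obs):
    return np.sum((forward(model) - t_obs) ** 2)

def hybrid_ga(forward, t_obs, n_par, pop=40, gens=30, bounds=(0.1, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, n_par))
    for _ in range(gens):
        fit = np.array([misfit(m, forward, t_obs) for m in P])
        parents = P[np.argsort(fit)][: pop // 2]
        # Crossover: average random parent pairs; mutation: small Gaussian kick.
        idx = rng.integers(0, len(parents), size=(pop, 2))
        P = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])
        P += rng.normal(0, 0.02 * (hi - lo), size=P.shape)
        P = np.clip(P, lo, hi)
        # Local (gradient-based) refinement of the current best model.
        best = P[np.argmin([misfit(m, forward, t_obs) for m in P])]
        P[0] = least_squares(lambda m: forward(m) - t_obs, best, bounds=(lo, hi)).x
    return P[0]
```

Here `forward` would map a slowness model to predicted travel times; the elitist slot P[0] keeps the locally refined model in the population so the local and global searches interact.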
Prediction of Sublimation Pressures of Low Volatility Solids
NASA Astrophysics Data System (ADS)
Drake, Bruce Douglas
Sublimation pressures are required for solid-vapor phase equilibrium models in design of processes such as supercritical fluid extraction, sublimation purification and vapor epitaxy. The objective of this work is to identify and compare alternative methods for predicting sublimation pressures. A bibliography of recent sublimation data is included. Corresponding states methods based on the triple point (rather than critical point) are examined. A modified Trouton's rule is the preferred method for estimating triple point pressure in the absence of any sublimation data. Only boiling and melting temperatures are required. Typical error in log10(P_triple) is 0.3. For lower temperature estimates, the slope of the sublimation curve is predicted by a correlation based on molar volume. Typical error is 10% of slope. Molecular dynamics methods for surface modeling are tested as estimators of vapor pressure. The time constants of the vapor and solid phases are too different to allow the vapor to come to thermal equilibrium with the solid. The method shows no advantages in prediction of sublimation pressure but provides insight into appropriate models and experimental methods for sublimation. Density-dependent augmented van der Waals equations of state based on hard-sphere distribution functions are examined. The perturbation term is almost linear and is well fit by a simple quadratic. Use of the equation provides reasonable fitting of sublimation pressures from one data point. Order-of-magnitude estimation is possible from melting temperature and solid molar volume. The inverse-12 fluid is used to develop an additional equation of state. Sublimation pressure results, including quality of pressure predictions, are similar to the hard-sphere results. Three-body (Axilrod-Teller) interactions are used to improve results.
Measurement of incident molecular temperature in the formation of organic thin films
NASA Astrophysics Data System (ADS)
Abe, Takahiro; Matsubara, Ryosuke; Hayakawa, Munetaka; Shimoyama, Akifumi; Tanaka, Takaaki; Tsuji, Akira; Takahashi, Yoshikazu; Kubono, Atsushi
2018-03-01
To investigate the effects of incident molecular temperature on organic-thin-film growth by vacuum evaporation, quantitative analysis of molecular temperature is required. In this study, we propose a method of determining molecular temperature based on the heat exchange between a platinum filament and molecular vapor. Molecular temperature is estimated from filament temperature, which remains unchanged even under molecular vapor supply. The results indicate that our method has sufficient sensitivity to evaluate the molecular temperature under the typical growth rate used for fabrication of functional organic thin films.
Genomic Data Quality Impacts Automated Detection of Lateral Gene Transfer in Fungi
Dupont, Pierre-Yves; Cox, Murray P.
2017-01-01
Lateral gene transfer (LGT, also known as horizontal gene transfer), an atypical mechanism of transferring genes between species, has almost become the default explanation for genes that display an unexpected composition or phylogeny. Numerous methods of detecting LGT events all rely on two fundamental strategies: primary structure composition or gene tree/species tree comparisons. Discouragingly, the results of these different approaches rarely coincide. With the wealth of genome data now available, detection of laterally transferred genes is increasingly being attempted in large uncurated eukaryotic datasets. However, detection methods depend greatly on the quality of the underlying genomic data, which are typically complex for eukaryotes. Furthermore, given the automated nature of genomic data collection, it is typically impractical to manually verify all protein or gene models, orthology predictions, and multiple sequence alignments, requiring researchers to accept a substantial margin of error in their datasets. Using a test case comprising plant-associated genomes across the fungal kingdom, this study reveals that composition- and phylogeny-based methods have little statistical power to detect laterally transferred genes. In particular, phylogenetic methods reveal extreme levels of topological variation in fungal gene trees, the vast majority of which show departures from the canonical species tree. Therefore, it is inherently challenging to detect LGT events in typical eukaryotic genomes. This finding is in striking contrast to the large number of claims for laterally transferred genes in eukaryotic species that routinely appear in the literature, and questions how many of these proposed examples are statistically well supported. PMID:28235827
Rights of Conscience Protections for Armed Forces Service Members and Their Chaplains
2015-07-22
established five categories of religious accommodation requests: dietary, grooming, medical, uniform, and worship practices. • Dietary: typically, these... • Medical: typically, these are requests for a waiver of mandatory immunizations. • Uniform: typically, these are requests to wear religious jewelry or... service members in their units. Requirements: A chaplain applicant is required to meet DoD medical and physical standards for commissioning as an
Pulsed Electric Propulsion Thrust Stand Calibration Method
NASA Technical Reports Server (NTRS)
Wong, Andrea R.; Polzin, Kurt A.; Pearson, J. Boise
2011-01-01
The evaluation of the performance of any propulsion device requires the accurate measurement of thrust. While chemical rocket thrust is typically measured using a load cell, the low thrust levels associated with electric propulsion (EP) systems necessitate the use of much more sensitive measurement techniques. The design and development of electric propulsion thrust stands that employ a conventional hanging pendulum arm connected to a balance mechanism consisting of a secondary arm and variable linkage have been reported in recent publications by Polzin et al. These works focused on performing steady-state thrust measurements and employed a static analysis of the thrust stand response. In the present work, we present a calibration method and data that will permit pulsed thrust measurements using the Variable Amplitude Hanging Pendulum with Extended Range (VAHPER) thrust stand. Pulsed thrust measurements are challenging in general because the pulsed thrust (impulse bit) occurs over a short timescale (typically 1 microsecond to 1 millisecond) and cannot be resolved directly. Consequently, the imparted impulse bit must be inferred through observation of the change in thrust stand motion effected by the pulse. Pulsed thrust measurements have typically only consisted of single-shot operation. In the present work, we discuss repetition-rate pulsed thruster operation and describe a method to perform these measurements. The thrust stand response can be modeled as a spring-mass-damper system with a repetitive delta forcing function to represent the impulsive action of the thruster.
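The inference principle can be illustrated with a toy spring-mass-damper model: because the microsecond-to-millisecond pulse is far shorter than the stand's period, it acts as an instantaneous velocity change, and the impulse bit is inferred from the resulting peak deflection. All parameters below are illustrative, not VAHPER values.

```python
# Illustrative single-shot impulse-bit inference on a spring-mass-damper stand.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 5.0, 0.4, 120.0            # effective mass [kg], damping, stiffness (illustrative)
I_bit_true = 2e-4                    # impulse bit [N*s], delivered at t = 0

def rhs(t, y):
    x, v = y
    return [v, (-c * v - k * x) / m]

# The pulse is modeled as an instantaneous velocity change dv = I/m.
sol = solve_ivp(rhs, (0.0, 2.0), [0.0, I_bit_true / m], max_step=1e-3)
x = sol.y[0]

# For a lightly damped oscillator started at rest, the first displacement peak
# is proportional to the imparted impulse; that ratio is the calibration constant.
alpha = c / (2 * m)
omega_d = np.sqrt(k / m - alpha ** 2)
peak_per_impulse = np.exp(-alpha * (np.pi / 2) / omega_d) / (m * omega_d)
I_bit_est = np.max(x) / peak_per_impulse
print(I_bit_true, I_bit_est)         # the estimate should closely match the true value
```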
Interpreting Space-Mission LET Requirements for SEGR in Power MOSFETs
NASA Technical Reports Server (NTRS)
Lauenstein, J. M.; Ladbury, R. L.; Batchelor, D. A.; Goldsman, N.; Kim, H. S.; Phan, A. M.
2010-01-01
A Technology Computer Aided Design (TCAD) simulation-based method is developed to evaluate whether derating of high-energy heavy-ion accelerator test data bounds the risk for single-event gate rupture (SEGR) from much higher energy on-orbit ions for a mission linear energy transfer (LET) requirement. It is shown that a typical derating factor of 0.75 applied to a single-event effect (SEE) response curve defined by high-energy accelerator SEGR test data provides reasonable on-orbit hardness assurance, although in a high-voltage power MOSFET, it did not bound the risk of failure.
Next-generation genotype imputation service and methods.
Das, Sayantan; Forer, Lukas; Schönherr, Sebastian; Sidore, Carlo; Locke, Adam E; Kwong, Alan; Vrieze, Scott I; Chew, Emily Y; Levy, Shawn; McGue, Matt; Schlessinger, David; Stambolian, Dwight; Loh, Po-Ru; Iacono, William G; Swaroop, Anand; Scott, Laura J; Cucca, Francesco; Kronenberg, Florian; Boehnke, Michael; Abecasis, Gonçalo R; Fuchsberger, Christian
2016-10-01
Genotype imputation is a key component of genetic association studies, where it increases power, facilitates meta-analysis, and aids interpretation of signals. Genotype imputation is computationally demanding and, with current tools, typically requires access to a high-performance computing cluster and to a reference panel of sequenced genomes. Here we describe improvements to imputation machinery that reduce computational requirements by more than an order of magnitude with no loss of accuracy in comparison to standard imputation tools. We also describe a new web-based service for imputation that facilitates access to new reference panels and greatly improves user experience and productivity.
Reducing the computational footprint for real-time BCPNN learning
Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian
2015-01-01
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More important, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
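The event-driven idea described above can be sketched as follows: between spikes each trace decays analytically, so state is updated only at spike times, with the exponential factor read from a precomputed look-up table. The time constant and table resolution below are illustrative, not the published constants.

```python
# Illustrative event-driven update of a single low-pass trace with a
# look-up table for the exponential decay factor.
import numpy as np

TAU = 0.05                                        # trace time constant [s] (illustrative)
LUT_DT = 1e-4                                     # look-up table resolution [s]
LUT = np.exp(-np.arange(0.0, 1.0, LUT_DT) / TAU)  # decay factors for delays up to 1 s

def decay(dt):
    """Return exp(-dt/tau) from the table (clamped for very long gaps)."""
    return LUT[min(int(dt / LUT_DT), len(LUT) - 1)]

class Trace:
    def __init__(self):
        self.z = 0.0
        self.t_last = 0.0
    def on_spike(self, t, increment=1.0):
        # Analytic solution of dz/dt = -z/tau between events, then add the spike.
        self.z = self.z * decay(t - self.t_last) + increment
        self.t_last = t
        return self.z

pre = Trace()
for t_spike in [0.010, 0.013, 0.080, 0.081]:
    pre.on_spike(t_spike)
```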
A method for the measurement and analysis of ride vibrations of transportation systems
NASA Technical Reports Server (NTRS)
Catherines, J. J.; Clevenson, S. A.; Scholl, H. F.
1972-01-01
The measurement and recording of ride vibrations which affect passenger comfort in transportation systems and the subsequent data-reduction methods necessary for interpreting the data present exceptional instrumentation requirements and necessitate the use of computers for specialized analysis techniques. A method is presented for both measuring and analyzing ride vibrations of the type encountered in ground and air transportation systems. A portable system for measuring and recording low-frequency, low-amplitude accelerations and specialized data-reduction procedures are described. Sample vibration measurements in the form of statistical parameters representative of typical transportation systems are also presented to demonstrate the utility of the techniques.
Chlorotrimethylsilane activation of acylcyanamides for the synthesis of mono-N-acylguanidines
Haussener, Travis J.; Mack, James B. C.
2011-01-01
A simple and efficient one-pot method for the synthesis of mono-protected guanidines is presented. Treatment of an acylcyanamide with chlorotrimethylsilane generates a reactive N-silylcarbodiimide capable of guanylating a variety of amines. Typically the reaction is complete in 15 min for primary and secondary aliphatic amines at rt. Hindered amines and anilines are also competent nucleophiles but require extended reaction times. PMID:21732649
A Nanolayer Copper Coating for Prevention Nosocomial Multi-Drug Resistant Infections
2016-10-01
done using a standard antimicrobial assay defined in ASTM method E2149-01 for determining antibacterial activity of immobilized agents under... are optimized for bacterial growth and do not represent typical conditions. Therefore, most studies reporting on the antimicrobial activity of a... given substance are done in physiological buffer (with the exception of many antibiotics, as these require active metabolism for efficacy). This can be
Multibody Parachute Flight Simulations for Planetary Entry Trajectories Using "Equilibrium Points"
NASA Technical Reports Server (NTRS)
Raiszadeh, Ben
2003-01-01
A method has been developed to reduce numerical stiffness and computer CPU requirements of high fidelity multibody flight simulations involving parachutes for planetary entry trajectories. Typical parachute entry configurations consist of entry bodies suspended from a parachute, connected by flexible lines. To accurately calculate line forces and moments, the simulations need to keep track of the point where the flexible lines meet (the confluence point). In previous multibody parachute flight simulations, the confluence point has been modeled as a point mass. Using a point mass for the confluence point tends to make the simulation numerically stiff, because its mass is typically much less than the main rigid body masses. One solution for stiff differential equations is to use a very small integration time step. However, this results in large computer CPU requirements. In the method described in this paper, the need for using a mass as the confluence point has been eliminated. Instead, the confluence point is modeled using an "equilibrium point". This point is calculated at every integration step as the point at which the sum of all line forces is zero (static equilibrium). The use of this "equilibrium point" has the advantage of both reducing the numerical stiffness of the simulations and eliminating the dynamical equations associated with vibration of a lumped mass on a high-tension string.
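A minimal sketch of the "equilibrium point" computation: at each time step, solve for the confluence location at which the elastic suspension-line forces balance, instead of integrating the dynamics of a small lumped mass. The line model and attachment geometry below are illustrative.

```python
# Illustrative static-equilibrium solve for a confluence point connected by
# elastic lines to fixed attachment points (slack lines carry no force).
import numpy as np
from scipy.optimize import fsolve

attach = np.array([[ 2.0,  0.000, 10.0],    # three canopy attachment points [m]
                   [-1.0,  1.732, 10.0],
                   [-1.0, -1.732, 10.0],
                   [ 0.0,  0.000,  0.0]])   # payload bridle attachment [m]
k_line = np.array([5e4, 5e4, 5e4, 5e4])     # line stiffnesses [N/m]
l0     = np.array([5.0, 5.0, 5.0, 3.0])     # unstretched line lengths [m]

def net_force(p):
    """Sum of line tension forces acting on the confluence point p."""
    f = np.zeros(3)
    for a, k, L0 in zip(attach, k_line, l0):
        d = a - p
        L = np.linalg.norm(d)
        tension = max(k * (L - L0), 0.0)     # no force when the line is slack
        f += tension * d / L
    return f

confluence = fsolve(net_force, x0=np.array([0.0, 0.0, 4.0]))
```

In a full simulation this solve would be repeated at every integration step, with the resulting point used to evaluate line forces on the rigid bodies.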
Evaluation of coated metallic bipolar plates for polymer electrolyte membrane fuel cells
NASA Astrophysics Data System (ADS)
Yoon, Wonseok; Huang, Xinyu; Fazzino, Paul; Reifsnider, Kenneth L.; Akkaoui, Michael A.
Metallic bipolar plates for polymer electrolyte membrane (PEM) fuel cells typically require coatings for corrosion protection. Other requirements for the corrosion protective coatings include low electrical contact resistance, good mechanical robustness, low material and fabrication cost. The authors have evaluated a number of protective coatings deposited on stainless steel substrates by electroplating and physical vapor deposition (PVD) methods. The coatings are screened with an electrochemical polarization test for corrosion resistance; then the contact resistance test was performed on selected coatings. The coating investigated include Gold with various thicknesses (2 nm, 10 nm, and 1 μm), Titanium, Zirconium, Zirconium Nitride (ZrN), Zirconium Niobium (ZrNb), and Zirconium Nitride with a Gold top layer (ZrNAu). The substrates include three types of stainless steel: 304, 310, and 316. The results show that Zr-coated samples satisfy the DOE target for corrosion resistance at both anode and cathode sides in typical PEM fuel cell environments in the short-term, but they do not meet the DOE contact resistance goal. Very thin gold coating (2 nm) can significantly decrease the electrical contact resistance, however a relatively thick gold coating (>10 nm) with our deposition method is necessary for adequate corrosion resistance, particularly for the cathode side of the bipolar plate.
A method to improve the range resolution in stepped frequency continuous wave radar
NASA Astrophysics Data System (ADS)
Kaczmarek, Paweł
2018-04-01
In this paper, one of the high range resolution methods - Aperture Sampling (AS) - is analysed. Unlike MUSIC-based techniques, it proved to be very efficient at achieving an unambiguous synthetic range profile for ultra-wideband stepped frequency continuous wave radar. Assuming that the minimal distance required to separate two targets in range corresponds to the -3 dB width of the received echo, AS provided a 30.8% improvement in range resolution in the analysed scenario when compared to the results of applying the IFFT. The output data are far superior, in terms of both improved range resolution and reduced side lobe level, to the Inverse Fourier Transform typically used in this area. Furthermore, the method does not require prior knowledge or an estimate of the number of targets to be detected in a given scan.
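For context, the baseline synthetic range profile that AS is compared against can be generated from stepped-frequency samples with a windowed, zero-padded IFFT as sketched below; the radar parameters and target ranges are illustrative, and the Aperture Sampling processing itself is not reproduced here.

```python
# Illustrative IFFT synthetic range profile for a stepped-frequency
# continuous wave (SFCW) radar return from two point targets.
import numpy as np

c = 3e8
n_steps, df = 128, 4e6                        # frequency steps and step size [Hz]
f = 10e9 + np.arange(n_steps) * df            # stepped carrier frequencies
ranges = [12.0, 12.6]                         # two closely spaced targets [m]

# Received complex samples: sum of round-trip phase terms for each target.
s = sum(np.exp(-1j * 4 * np.pi * f * R / c) for R in ranges)

profile = np.abs(np.fft.ifft(s * np.hanning(n_steps), n=1024))
range_axis = np.arange(1024) * c / (2 * df * 1024)   # unambiguous range = c / (2*df)
print(range_axis[np.argmax(profile)])                # strongest peak near one target
```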
Comments on settling chamber design for quiet, blowdown wind tunnels
NASA Technical Reports Server (NTRS)
Beckwith, I. E.
1981-01-01
Transfer of an existing continuous-circuit supersonic wind tunnel to Langley and its operation there as a blowdown tunnel is planned. Flow disturbance requirements in the supply section and methods for reducing the high-level broadband acoustic disturbances present in typical blowdown tunnels are reviewed. Based on recent data and the analysis of two blowdown facilities at Langley, methods for reducing the total turbulence levels in the settling chamber, including both acoustic and vorticity modes, to less than one percent are recommended. The pertinent design details of the damping screens and honeycomb and the recommended minimum pressure drop across the porous components providing the required two orders of magnitude attenuation of acoustic noise levels are given. A suggestion for the support structure of these high pressure drop porous components is offered.
Goddard, Amanda F; Staudinger, Benjamin J; Dowd, Scot E; Joshi-Datar, Amruta; Wolcott, Randall D; Aitken, Moira L; Fligner, Corinne L; Singh, Pradeep K
2012-08-21
Recent work using culture-independent methods suggests that the lungs of cystic fibrosis (CF) patients harbor a vast array of bacteria not conventionally implicated in CF lung disease. However, sampling lung secretions in living subjects requires that expectorated specimens or collection devices pass through the oropharynx. Thus, contamination could confound results. Here, we compared culture-independent analyses of throat and sputum specimens to samples directly obtained from the lungs at the time of transplantation. We found that CF lungs with advanced disease contained relatively homogenous populations of typical CF pathogens. In contrast, upper-airway specimens from the same subjects contained higher levels of microbial diversity and organisms not typically considered CF pathogens. Furthermore, sputum exhibited day-to-day variation in the abundance of nontypical organisms, even in the absence of clinical changes. These findings suggest that oropharyngeal contamination could limit the accuracy of DNA-based measurements on upper-airway specimens. This work highlights the importance of sampling procedures for microbiome studies and suggests that methods that account for contamination are needed when DNA-based methods are used on clinical specimens.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewellen, J. W.; Noonan, J.; Accelerator Systems Division
2005-01-01
Conventional π-mode rf photoinjectors typically use magnetic solenoids for emittance compensation. This provides independent focusing strength but can complicate rf power feed placement, introduce asymmetries (due to coil crossovers), and greatly increase the cost of the photoinjector. Cathode-region focusing can also provide for a form of emittance compensation. Typically this method strongly couples focusing strength to the field gradient on the cathode, however, and usually requires altering the longitudinal position of the cathode to change the focusing. We propose a new method for achieving cathode-region variable-strength focusing for emittance compensation. The new method reduces the coupling to the gradient on the cathode and does not require a change in the longitudinal position of the cathode. Expected performance for an S-band system is similar to conventional solenoid-based designs. This paper presents the results of rf cavity and beam dynamics simulations of the new design. We have proposed a method for performing emittance compensation using a cathode-region focusing scheme. This technique allows the focusing strength to be adjusted somewhat independently of the on-axis field strength. Beam dynamics calculations indicate performance should be comparable to presently in-use emittance compensation schemes, with a simpler configuration and fewer possibilities for emittance degradation due to the focusing optics. There are several potential difficulties with this approach, including cathode material selection, cathode heating, and peak fields in the gun. We hope to begin experimenting with a cathode of this type in the near future, and several possibilities exist for reducing the peak gradients to more acceptable levels.
Design of Phase II Non-inferiority Trials.
Jung, Sin-Ho
2017-09-01
With the development of inexpensive treatment regimens and less invasive surgical procedures, we are confronted with non-inferiority study objectives. A non-inferiority phase III trial requires a roughly four times larger sample size than that of a similar standard superiority trial. Because of the large required sample size, we often face feasibility issues in opening a non-inferiority trial. Furthermore, due to the lack of phase II non-inferiority trial design methods, we do not have an opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial, and a large number of non-inferiority clinical questions still remain unanswered. In this paper, we develop designs for non-inferiority randomized phase II trials with feasible sample sizes. First, we review a design method for non-inferiority phase III trials. Subsequently, we propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples. Each of the proposed design methods is shown to require a reasonable sample size for non-inferiority phase II trials. The three different non-inferiority phase II trial designs are used under different settings, but require similar sample sizes that are typical for phase II trials.
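The feasibility problem can be seen from the textbook normal-approximation sample-size formula for a non-inferiority comparison of two proportions, sketched below; this is generic background, not the specific phase II designs proposed in the paper.

```python
# Illustrative normal-approximation sample size for a non-inferiority
# comparison of two response rates (binary endpoint).
from scipy.stats import norm

def n_per_arm_noninferiority(p_std, p_exp, margin, alpha=0.025, power=0.80):
    """Patients per arm to rule out the experimental arm being worse by more than `margin`."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    var = p_std * (1 - p_std) + p_exp * (1 - p_exp)
    delta = p_exp - p_std + margin            # distance from the NI boundary under H1
    return (z_a + z_b) ** 2 * var / delta ** 2

# Example: 60% response rate in both arms, 10% non-inferiority margin.
print(round(n_per_arm_noninferiority(0.60, 0.60, 0.10)))   # about 377 patients per arm
```

With the same rates and a superiority alternative of a 20-point improvement, the corresponding sample size is several times smaller, which is the feasibility gap the paper addresses.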
A new concept for airship mooring and ground handling
NASA Technical Reports Server (NTRS)
Vaughan, J. C.
1975-01-01
Calculations were made to determine the feasibility of applying the negative air cushion (NAC) principle to the mooring of airships. Pressures required for the inflation of the flexible trunks are not excessive and the maintenance of sufficient hold down force is possible in winds up to 50 knots. Fabric strength requirements for a typical NAC sized for a 10-million cubic foot airship were found to be approximately 200 lbs./in. Corresponding power requirements range between 66-HP and 5600-HP. No consideration was given to the internal airship loads caused by the use of a NAC and further analysis in much greater detail is required before this method could be applied to an actual design, however, the basic concept appears to be sound and no problem areas of a fundamental nature are apparent.
Group Contribution Methods for Phase Equilibrium Calculations.
Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian
2015-01-01
The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of or the whole chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reaction, various pure component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, a reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or g(E)-models using only binary parameters. But unfortunately, only a very small part of the experimental data for fitting the required binary model parameters is available, so very often these models cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database. That is why for the development of powerful group contribution methods almost all published pure component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.
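Once a group contribution model supplies activity coefficients, they enter a phase equilibrium calculation such as the modified-Raoult's-law bubble-pressure computation sketched below; the one-parameter Margules expression is only a stand-in for an actual group contribution method such as modified UNIFAC, and the numbers are illustrative.

```python
# Illustrative bubble-pressure calculation for a binary mixture using
# activity coefficients from a placeholder model (one-parameter Margules).
import numpy as np

def gamma_margules(x1, A=1.2):
    """Activity coefficients (stand-in for a group contribution prediction)."""
    x2 = 1.0 - x1
    return np.exp(A * x2 ** 2), np.exp(A * x1 ** 2)

def bubble_pressure(x1, p_sat1, p_sat2):
    g1, g2 = gamma_margules(x1)
    p1 = x1 * g1 * p_sat1                  # partial pressure of component 1
    p2 = (1.0 - x1) * g2 * p_sat2          # partial pressure of component 2
    P = p1 + p2
    return P, p1 / P                       # total pressure and vapor mole fraction y1

P, y1 = bubble_pressure(x1=0.3, p_sat1=80.0, p_sat2=40.0)   # kPa, illustrative values
```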
Abundance and diversity of microbial inhabitants in European spacecraft-associated clean rooms.
Stieglmeier, Michaela; Rettberg, Petra; Barczyk, Simon; Bohmeier, Maria; Pukall, Rüdiger; Wirth, Reinhard; Moissl-Eichinger, Christine
2012-06-01
The determination of the microbial load of a spacecraft en route to interesting extraterrestrial environments is mandatory and currently based on the culturable, heat-shock-surviving portion of microbial contaminants. Our study compared these classical bioburden measurements as required by NASA's and ESA's guidelines for the microbial examination of flight hardware, with molecular analysis methods (16S rRNA gene cloning and quantitative PCR) to further develop our understanding of the diversity and abundance of the microbial communities of spacecraft-associated clean rooms. Three samplings of the Herschel Space Observatory and its surrounding clean rooms were performed in two different European facilities. Molecular analyses detected a broad diversity of microbes typically found in the human microbiome with three bacterial genera (Staphylococcus, Propionibacterium, and Brevundimonas) common to all three locations. Bioburden measurements revealed a low, but heterogeneous, abundance of spore-forming and other heat-resistant microorganisms. Total cell numbers estimated by quantitative real-time PCR were typically 3 orders of magnitude greater than those determined by viable counts, which indicates a tendency for traditional methods to underestimate the extent of clean room bioburden. Furthermore, the molecular methods allowed the detection of a much broader diversity than traditional culture-based methods.
NASA Technical Reports Server (NTRS)
Albright, A. E.
1984-01-01
A glycol-exuding porous leading edge ice protection system was tested in the NASA Icing Research Tunnel. Stainless steel mesh, laser drilled titanium, and composite panels were tested on two general aviation wing sections. Two different glycol-water solutions were evaluated. Minimum glycol flow rates required for anti-icing were obtained as a function of angle of attack, liquid water content, volume median drop diameter, temperature, and velocity. Ice accretions formed after five minutes of icing were shed in three minutes or less using a glycol fluid flow equal to the anti-ice flow rate. Two methods of predicting anti-ice flow rates are presented and compared with a large experimental data base of anti-ice flow rates over a wide range of icing conditions. The first method presented in the ADS-4 document typically predicts flow rates lower than the experimental flow rates. The second method, originally published in 1983, typically predicts flow rates up to 25 percent higher than the experimental flow rates. This method proved to be more consistent between wing-panel configurations. Significant correlation coefficients between the predicted flow rates and the experimental flow rates ranged from .867 to .947.
Spacelab mission dependent training parametric resource requirements study
NASA Technical Reports Server (NTRS)
Ogden, D. H.; Watters, H.; Steadman, J.; Conrad, L.
1976-01-01
Training flows were developed for typical missions, resource relationships analyzed, and scheduling optimization algorithms defined. Parametric analyses were performed to study the effect of potential changes in mission model, mission complexity and training time required on the resource quantities required to support training of payload or mission specialists. Typical results of these analyses are presented both in graphic and tabular form.
Iterative wave-front reconstruction in the Fourier domain.
Bond, Charlotte Z; Correia, Carlos M; Sauvage, Jean-François; Neichel, Benoit; Fusco, Thierry
2017-05-15
The use of Fourier methods in wave-front reconstruction can significantly reduce the computation time for large telescopes with a high number of degrees of freedom. However, Fourier algorithms for discrete data require a rectangular data set that conforms to specific boundary requirements, whereas wave-front sensor data are typically defined over a circular domain (the telescope pupil). Here we present an iterative Gerchberg routine modified for the purposes of discrete wave-front reconstruction which adapts the measurement data (wave-front sensor slopes) for Fourier analysis, fulfilling the requirements of the fast Fourier transform (FFT) and providing accurate reconstruction. The routine is used in the adaptation step only and can be coupled to any other Wiener-like or least-squares method. We compare simulations using this method with previous Fourier methods and show an increase in performance in terms of Strehl ratio and a reduction in noise propagation for a 40×40 SPHERE-like adaptive optics system. For closed-loop operation with minimal iterations, the Gerchberg method provides an improvement in Strehl, from 95.4% to 96.9% in K-band. This corresponds to ~40 nm improvement in rms, and avoids the high spatial frequency errors present in other methods, providing an increase in contrast towards the edge of the correctable band.
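A schematic of the approach: standard FFT-based least-squares reconstruction from x/y slopes, wrapped in a Gerchberg-style loop that iteratively extrapolates slope data outside the circular pupil so the data better satisfy the FFT's periodic-boundary assumptions. The grid handling, filter, and iteration count below are simplified assumptions, not the published routine.

```python
# Simplified sketch of Fourier slope reconstruction with Gerchberg-style
# extrapolation of the slopes outside the pupil.
import numpy as np

def fft_reconstruct(sx, sy, d=1.0):
    """Least-squares wavefront from x/y slope maps via the FFT filter."""
    n = sx.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=d)
    kx, ky = np.meshgrid(k, k)                     # kx varies along axis 1, ky along axis 0
    denom = kx ** 2 + ky ** 2
    denom[0, 0] = 1.0                              # avoid dividing by zero (piston term)
    W_hat = (-1j * kx * np.fft.fft2(sx) - 1j * ky * np.fft.fft2(sy)) / denom
    W_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(W_hat))

def gerchberg_reconstruct(sx_meas, sy_meas, pupil, n_iter=10, d=1.0):
    sx, sy = sx_meas * pupil, sy_meas * pupil      # start with zeros outside the pupil
    for _ in range(n_iter):
        W = fft_reconstruct(sx, sy, d)
        gy, gx = np.gradient(W, d)                 # slopes of the current estimate
        sx = np.where(pupil, sx_meas, gx)          # keep measurements inside the pupil,
        sy = np.where(pupil, sy_meas, gy)          # extrapolated slopes outside
    return fft_reconstruct(sx, sy, d) * pupil
```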
Beam Steering Devices Reduce Payload Weight
NASA Technical Reports Server (NTRS)
2012-01-01
Scientists have long been able to shift the direction of a laser beam, steering it toward a target, but often the strength and focus of the light are altered. For precision applications, where the quality of the beam cannot be compromised, scientists have typically turned to mechanical steering methods, redirecting the source of the beam by swinging the entire laser apparatus toward the target. Just as the mechanical methods used for turning cars have evolved into simpler, lighter power-steering methods, so has the means by which researchers can direct lasers. Some of the typical contraptions used to redirect lasers are large and bulky, relying on steering gimbals (pivoted, rotating supports) to shift the device toward its intended target. These devices, some as large and awkward as a piece of heavy luggage, are subject to the same issues confronted by mechanical parts: Components rub, wear out, and get stuck. The poor reliability and bulk, not to mention the power requirements to run one of these machines, have made mechanical beam steering components less than ideal for use in applications where weight, bulk, and maneuverability are prime concerns, such as on an unmanned aerial vehicle (UAV) or a microscope. The solution to developing reliable, lighter weight, nonmechanical steering methods to replace the hefty steering boxes was to think outside the box, and a NASA research partner did just that by developing a new beam steering method that bends and redirects the beam, as opposed to shifting the entire apparatus. The benefits include lower power requirements, a smaller footprint, reduced weight, and better control and flexibility in steering capabilities. Such benefits are realized without sacrificing aperture size, efficiency, or scanning range, and can be applied to myriad uses: propulsion systems, structures, radiation protection systems, and landing systems.
NASA Astrophysics Data System (ADS)
Adcock Smith, Echo D.
ZnO nanomaterials are being incorporated into next-generation solar cell designs including dye-sensitized solar cells, multijunction solar cells, and quantum dot sensitized solar cells. ZnO nanorod (NR) arrays and nanoparticles (NP) used in these devices are typically fabricated using chemical vapor deposition and/or high-temperature reaction conditions. These methods are costly and require high energy, pressure, or excessive time, but produce repeatable, defined growth that can easily incorporate metal dopants. Less expensive methods of fabrication such as chemical bath deposition (CBD) eliminate the costly steps but can suffer from undefined growth and excessive waste, and have difficulty incorporating dopants into ZnO materials without additives or increased pH. This dissertation presents a novel method of growing cobalt- and vanadium-doped ZnO nanomaterials through microwave synthesis. The cobalt growth was compared to standard CBD and found to be faster, less wasteful, reproducible, and better at incorporating cobalt ions into the ZnO lattice than the typical oven CBD method. The vanadium-doped ZnO microwave synthesis procedure was found to produce nanorods, nanorod arrays, and nanoparticles simultaneously. Neither the cobalt nor the vanadium growth required pH changes, catalysts, or additives to assist in doping, and both therefore use less material than traditional CBD. This research is important because it offers a simple, quick way to grow ZnO nanostructures and is the first to report growing both cobalt- and vanadium-doped zinc oxide nanorod arrays using microwave synthesis. The synthesis method presented is a viable candidate for replacing conventional growth syntheses, which would lower the cost and time of production of photovoltaics while helping drive forward the development of next-generation solar cells.
Rapid determination of 226Ra in environmental samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, Sherrod L.; Culligan, Brian K.
A new rapid method for the determination of 228Ra in natural water samples has been developed at the SRNL/EBL (Savannah River National Laboratory/Environmental Bioassay Laboratory) that can be used for emergency response or routine samples. While gamma spectrometry can be employed with sufficient detection limits to determine 228Ra in solid samples (via 228Ac), radiochemical methods that employ gas flow proportional counting techniques typically provide lower MDA (Minimal Detectable Activity) levels for the determination of 228Ra in water samples. Most radiochemical methods for 228Ra collect and purify 228Ra and allow for 228Ac daughter ingrowth for ~36 hours. In this new SRNL/EBL approach, 228Ac is collected and purified from the water sample without waiting, to eliminate this delay. The sample preparation requires only about 4 hours so that 228Ra assay results on water samples can be achieved in < 6 hours. The method uses a rapid calcium carbonate precipitation enhanced with a small amount of phosphate added to enhance chemical yields (typically >90%), followed by rapid cation exchange removal of calcium. Lead, bismuth, uranium, thorium and protactinium isotopes are also removed by the cation exchange separation. 228Ac is eluted from the cation resin directly onto a DGA Resin cartridge attached to the bottom of the cation column to purify 228Ac. DGA Resin also removes lead and bismuth isotopes, along with Sr isotopes and 90Y. La is used to determine 228Ac chemical yield via ICP-MS, but 133Ba can also be used instead if ICP-MS assay is not available. Unlike some older methods, no lead or strontium holdback carriers or continual readjustment of sample pH is required.
The ReaxFF reactive force-field: Development, applications, and future directions
Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...
2016-03-04
The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.
NASA Astrophysics Data System (ADS)
Haupt, Sue Ellen; Beyer-Lout, Anke; Long, Kerrie J.; Young, George S.
Assimilating concentration data into an atmospheric transport and dispersion model can provide information to improve downwind concentration forecasts. The forecast model is typically a one-way coupled set of equations: the meteorological equations impact the concentration, but the concentration does not generally affect the meteorological field. Thus, indirect methods of using concentration data to influence the meteorological variables are required. The problem studied here involves a simple wind field forcing Gaussian dispersion. Two methods of assimilating concentration data to infer the wind direction are demonstrated. The first method is Lagrangian in nature and treats the puff as an entity using feature extraction coupled with nudging. The second method is an Eulerian field approach akin to traditional variational approaches, but minimizes the error by using a genetic algorithm (GA) to directly optimize the match between observations and predictions. Both methods show success at inferring the wind field. The GA-variational method, however, is more accurate but requires more computational time. Dynamic assimilation of a continuous release modeled by a Gaussian plume is also demonstrated using the genetic algorithm approach.
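The GA-variational idea can be sketched as follows: candidate wind directions are scored by the mismatch between observed concentrations and a Gaussian plume prediction, and a genetic algorithm searches the direction space. The plume coefficients, source term, sensor layout, and GA settings below are illustrative assumptions, not those of the study.

```python
# Illustrative GA search for wind direction by matching Gaussian plume
# predictions to sensor concentrations.
import numpy as np

Q, U, H = 1.0, 5.0, 10.0                       # source rate, wind speed, stack height
sensors = np.array([[300.0, 40.0], [500.0, -30.0], [800.0, 10.0]])   # sensor (x, y) [m]

def plume(theta, xy):
    """Ground-level Gaussian plume concentration for wind direction theta [rad]."""
    c, s = np.cos(theta), np.sin(theta)
    xd = xy[:, 0] * c + xy[:, 1] * s            # downwind distance
    yd = -xy[:, 0] * s + xy[:, 1] * c           # crosswind distance
    sy, sz = 0.08 * np.maximum(xd, 1.0), 0.06 * np.maximum(xd, 1.0)   # toy dispersion
    conc = Q / (np.pi * U * sy * sz) * np.exp(-0.5 * (yd / sy) ** 2) * np.exp(-0.5 * (H / sz) ** 2)
    return np.where(xd > 0, conc, 0.0)

obs = plume(np.deg2rad(12.0), sensors)          # synthetic "observations"

def ga_wind_direction(obs, pop=30, gens=40, seed=0):
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(-np.pi, np.pi, pop)
    cost = lambda t: np.sum((plume(t, sensors) - obs) ** 2)
    for _ in range(gens):
        parents = thetas[np.argsort([cost(t) for t in thetas])][: pop // 2]
        children = rng.choice(parents, pop) + rng.normal(0, 0.05, pop)   # mutate
        children[0] = parents[0]                 # elitism: keep the current best
        thetas = children
    return min(thetas, key=cost)

print(np.rad2deg(ga_wind_direction(obs)))        # should recover a direction near 12 degrees
```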
Arsen'eva, T E; Lebedeva, S A; Trukhachev, A L; Vasil'eva, E A; Ivanova, V S; Bozhko, N V
2010-01-01
To characterize the species specificity of officially recommended tests for differentiation of Yersinia pestis and Yersinia pseudotuberculosis and to propose additional tests allowing more accurate identification. Natural, laboratory and typical strains of two Yersinia species were studied using microbiological, molecular and biochemical methods. For PCR, species-specific primers complementary to certain fragments of the chromosomal DNA of each species, as well as to several plasmid genes of Y. pestis, were used. It was shown that such attributes of Y. pestis as colony form, fermentation of rhamnose, melibiose and urea, susceptibility to diagnostic phages, and nutritional requirements can be lost in Y. pestis strains or detected in Y. pseudotuberculosis strains. Attributes such as motility, as well as a positive CoA-reaction result for the fraction V antigen, are more reliable. Guaranteed differentiation of strains that are typical or altered with respect to the differential tests is provided only by PCR analysis with the primers vlml2for/ISrev216 and JS, respectively, which are homologous to certain chromosome fragments of one of the two Yersinia species.
Fast two-layer two-photon imaging of neuronal cell populations using an electrically tunable lens
Grewe, Benjamin F.; Voigt, Fabian F.; van ’t Hoff, Marcel; Helmchen, Fritjof
2011-01-01
Functional two-photon Ca2+-imaging is a versatile tool to study the dynamics of neuronal populations in brain slices and living animals. However, population imaging is typically restricted to a single two-dimensional image plane. By introducing an electrically tunable lens into the excitation path of a two-photon microscope we were able to realize fast axial focus shifts within 15 ms. The maximum axial scan range was 0.7 mm employing a 40x NA0.8 water immersion objective, plenty for typically required ranges of 0.2–0.3 mm. By combining the axial scanning method with 2D acousto-optic frame scanning and random-access scanning, we measured neuronal population activity of about 40 neurons across two imaging planes separated by 40 μm and achieved scan rates up to 20–30 Hz. The method presented is easily applicable and allows upgrading of existing two-photon microscopes for fast 3D scanning. PMID:21750778
Easi-CRISPR for creating knock-in and conditional knockout mouse models using long ssDNA donors.
Miura, Hiromi; Quadros, Rolen M; Gurumurthy, Channabasavaiah B; Ohtsuka, Masato
2018-01-01
CRISPR/Cas9-based genome editing can easily generate knockout mouse models by disrupting the gene sequence, but its efficiency for creating models that require either insertion of exogenous DNA (knock-in) or replacement of genomic segments is very poor. The majority of mouse models used in research involve knock-in (reporters or recombinases) or gene replacement (e.g., conditional knockout alleles containing exons flanked by LoxP sites). A few methods for creating such models have been reported that use double-stranded DNA as donors, but their efficiency is typically 1-10% and therefore not suitable for routine use. We recently demonstrated that long single-stranded DNAs (ssDNAs) serve as very efficient donors, both for insertion and for gene replacement. We call this method efficient additions with ssDNA inserts-CRISPR (Easi-CRISPR) because it is a highly efficient technology (efficiency is typically 30-60% and reaches as high as 100% in some cases). The protocol takes ∼2 months to generate the founder mice.
Wang, Feng; Kaplan, Jess L; Gold, Benjamin D; Bhasin, Manoj K; Ward, Naomi L; Kellermayer, Richard; Kirschner, Barbara S; Heyman, Melvin B; Dowd, Scot E; Cox, Stephen B; Dogan, Haluk; Steven, Blaire; Ferry, George D; Cohen, Stanley A; Baldassano, Robert N; Moran, Christopher J; Garnett, Elizabeth A; Drake, Lauren; Otu, Hasan H; Mirny, Leonid A; Libermann, Towia A; Winter, Harland S; Korolev, Kirill S
2016-02-02
The relationship between the host and its microbiota is challenging to understand because both microbial communities and their environments are highly variable. We have developed a set of techniques based on population dynamics and information theory to address this challenge. These methods identify additional bacterial taxa associated with pediatric Crohn disease and can detect significant changes in microbial communities with fewer samples than previous statistical approaches required. We have also substantially improved the accuracy of the diagnosis based on the microbiota from stool samples, and we found that the ecological niche of a microbe predicts its role in Crohn disease. Bacteria typically residing in the lumen of healthy individuals decrease in disease, whereas bacteria typically residing on the mucosa of healthy individuals increase in disease. Our results also show that the associations with Crohn disease are evolutionarily conserved and provide a mutual information-based method to depict dysbiosis. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Young, Alexandria; Stillman, Richard; Smith, Martin J; Korstjens, Amanda H
2016-03-01
Forensic investigations involving animal scavenging of human remains require a physical search of the scene and surrounding areas. However, there is currently no standard procedure in the U.K. for physical searches of scavenged human remains. The Winthrop and grid search methods used by police specialist searchers for scavenged remains were examined through the use of mock red fox (Vulpes vulpes) scatter scenes. Forty-two police specialist searchers from two different regions within the U.K. were divided between those briefed and not briefed with fox-typical scavenging information. Briefing searchers with scavenging information significantly affected the recovery of scattered bones (χ² = 11.45, df = 1, p = 0.001). Searchers briefed with scavenging information were 2.05 times more likely to recover bones. Adaptations to search methods used by searchers were evident on a regional level, such that searchers more accustomed to a peri-urban to rural region recovered a higher percentage of scattered bones (58.33%, n = 84). © 2015 American Academy of Forensic Sciences.
Nano- and micro-materials in the treatment of internal bleeding and uncontrolled hemorrhage.
Gaston, Elizabeth; Fraser, John F; Xu, Zhi Ping; Ta, Hang T
2018-02-01
Internal bleeding is defined as the loss of blood that occurs inside of a body cavity. After a traumatic injury, hemorrhage accounts for over 35% of pre-hospital deaths and 40% of deaths within the first 24 hours. Coagulopathy, a disorder in which the blood is not able to properly form clots, typically develops after traumatic injury and results in a higher rate of mortality. The current methods to treat internal bleeding and coagulopathy are inadequate due to the requirement of extensive medical equipment that is typically not available at the site of injury. To discover a potential route for future research, several current and novel treatment methods have been reviewed and analyzed. The aim of investigating different potential treatment options is to expand available knowledge, while also calling attention to the importance of research in the field of treatment for internal bleeding and hemorrhage due to trauma. Copyright © 2017 Elsevier Inc. All rights reserved.
An incremental strategy for calculating consistent discrete CFD sensitivity derivatives
NASA Technical Reports Server (NTRS)
Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.
1992-01-01
In this preliminary study involving advanced computational fluid dynamic (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
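For orientation, the classical DerSimonian and Laird moment estimator that the review challenges can be written in a few lines. The sketch below is illustrative only; the effect sizes and within-study variances are invented, not data from the review.

```python
# Illustrative sketch of the DerSimonian-Laird moment estimator of the
# between-study variance (tau^2); the effect sizes and variances are made up.
import numpy as np

def dersimonian_laird_tau2(y, v):
    """y: study effect estimates, v: their within-study variances."""
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)                 # fixed-effect pooled mean
    q = np.sum(w * (y - y_fixed) ** 2)                  # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)             # truncated at zero

y = np.array([0.35, 0.10, 0.62, 0.20, 0.45])            # hypothetical log odds ratios
v = np.array([0.04, 0.09, 0.05, 0.12, 0.06])            # hypothetical within-study variances
print(f"tau^2 = {dersimonian_laird_tau2(y, v):.3f}")
```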
Battery Calendar Life Estimator Manual Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2012-10-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Battery Life Estimator Manual Linear Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2009-08-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Toussaint, Karen A; Tiger, Jeffrey H
2012-01-01
Covert self-injurious behavior (i.e., behavior that occurs in the absence of other people) can be difficult to treat. Traditional treatments typically have involved sophisticated methods of observation and often have employed positive punishment procedures. The current study evaluated the effectiveness of a variable momentary differential reinforcement contingency in the treatment of covert self-injury. Neither positive punishment nor extinction was required to produce decreased skin picking.
A new method of radio frequency links by coplanar coils for implantable medical devices.
Xue, L; Hao, H W; Li, L; Ma, B Z
2005-01-01
A new method based on coplanar coils for the design of radio frequency links has been developed to realize communication between the programming wand and implantable medical devices with a shielding container simply and reliably. With the analysis of electric and magnetic field theory, the communication model has been established and simulated, and the circuit has been designed and tested. The experimental results agree fairly well with the simulation. The voltage transfer ratio of the typical circuit with the present parameters can reach as high as 0.02, which can fulfill the requirements of communication.
Research Spotlight: New method to assess coral reef health
NASA Astrophysics Data System (ADS)
Tretkoff, Ernie
2011-03-01
Coral reefs around the world are becoming stressed due to rising temperatures, ocean acidification, overfishing, and other factors. Measuring community level rates of photosynthesis, respiration, and biogenic calcification is essential to assessing the health of coral reef ecosystems because the balance between these processes determines the potential for reef growth and the export of carbon. Measurements of biological productivity have typically been made by tracing changes in dissolved oxygen in seawater as it passes over a reef. However, this is a labor-intensive and difficult method, requiring repeated measurements. (Geophysical Research Letters, doi:10.1029/2010GL046179, 2011)
Space Operations Center orbit altitude selection strategy
NASA Technical Reports Server (NTRS)
Indrikis, J.; Myers, H. L.
1982-01-01
The strategy for operational altitude selection has to respond to the Space Operations Center's (SOC) maintenance requirements and the logistics demands of the missions to be supported by the SOC. Three orbit strategies are developed: two constant-altitude and one variable-altitude. In order to minimize the effect of atmospheric uncertainty, the dynamic altitude method is recommended. In this approach the SOC will operate at the optimum altitude for the prevailing atmospheric conditions and logistics model, provided that mission safety constraints are not violated. Over a typical solar activity cycle this method produces significant savings in the overall logistics cost.
SEM evaluation of metallization on semiconductors. [Scanning Electron Microscope
NASA Technical Reports Server (NTRS)
Fresh, D. L.; Adolphsen, J. W.
1974-01-01
A test method for the evaluation of metallization on semiconductors is presented and discussed. The method has been prepared in MIL-STD format for submittal as a proposed addition to MIL-STD-883. It is applicable to discrete devices and to integrated circuits and specifically addresses batch-process oriented defects. Quantitative accept/reject criteria are given for contact windows, other oxide steps, and general interconnecting metallization. Figures are provided that illustrate typical types of defects. Apparatus specifications, sampling plans, and specimen preparation and examination requirements are described. Procedures for glassivated devices and for multi-metal interconnection systems are included.
Gildea, Richard J; Winter, Graeme
2018-05-01
Combining X-ray diffraction data from multiple samples requires determination of the symmetry and resolution of any indexing ambiguity. For the partial data sets typical of in situ room-temperature experiments, determination of the correct symmetry is often not straightforward. The potential for indexing ambiguity in polar space groups is also an issue, although methods to resolve this are available if the true symmetry is known. Here, a method is presented to simultaneously determine the Patterson symmetry and resolve the indexing ambiguity for partial data sets.
MULTIGRAIN: a smoothed particle hydrodynamic algorithm for multiple small dust grains and gas
NASA Astrophysics Data System (ADS)
Hutchison, Mark; Price, Daniel J.; Laibe, Guillaume
2018-05-01
We present a new algorithm, MULTIGRAIN, for modelling the dynamics of an entire population of small dust grains immersed in gas, typical of conditions that are found in molecular clouds and protoplanetary discs. The MULTIGRAIN method is more accurate than single-phase simulations because the gas experiences a backreaction from each dust phase and communicates this change to the other phases, thereby indirectly coupling the dust phases together. The MULTIGRAIN method is fast, explicit and low storage, requiring only an array of dust fractions and their derivatives defined for each resolution element.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Javaid, Zarrar; Unsworth, Charles P., E-mail: c.unsworth@auckland.ac.nz; Boocock, Mark G.
2016-03-15
Purpose: The aim of this work is to demonstrate a new image processing technique that can provide a “near real-time” 3D reconstruction of the articular cartilage of the human knee from MR images which is user friendly. This would serve as a point-of-care 3D visualization tool which would benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as “contour interpolation (CI)” with radial basis functions (RBFs) which they describe as “CI-RBFs.” The authors also present a spline boundary correction which further enhances volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively, thus significantly improving the speed of reconstruction (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr, providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated how the CI-RBF method matches the volume estimation of a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Furthermore, the performance of the segmentation method used for the extraction of the femoral, tibial, and patellar cartilages is assessed with a Dice similarity coefficient, sensitivity, and specificity measure providing high agreement to manual segmentation. Conclusions: The CI-RBF method provides a fast, accurate, and robust 3D model reconstruction that matches Carr’s RBF method, 3D DOCTOR, and a manual benchmark method in accuracy and significantly improves upon Carr’s RBF method in data requirement and computational speed. In addition, the visualization tool has been designed to quickly segment MR images requiring only four mouse clicks per MR image slice.
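For readers unfamiliar with the underlying technique, the Carr-style RBF implicit-surface fit that CI-RBF accelerates can be sketched briefly: on-surface points are assigned the value zero, points offset along the normals are assigned signed values, and an RBF interpolant through these values has the surface as its zero level set. The sketch below is a rough, generic illustration (biharmonic kernel, no polynomial term, toy sphere data), not the authors' code.

```python
# Rough sketch of a Carr-style RBF implicit-surface fit; points, normals, offset and
# kernel choice here are assumptions for demonstration, not the paper's cartilage data.
import numpy as np

def fit_rbf_implicit(points, normals, offset=0.5):
    """Return (centres, weights) of a biharmonic (phi(r) = r) RBF implicit function."""
    centres = np.vstack([points,
                         points + offset * normals,      # outside points: f = +offset
                         points - offset * normals])     # inside points:  f = -offset
    values = np.concatenate([np.zeros(len(points)),
                             np.full(len(points), offset),
                             np.full(len(points), -offset)])
    d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    weights = np.linalg.solve(d + 1e-9 * np.eye(len(d)), values)   # small regularisation
    return centres, weights

def evaluate(x, centres, weights):
    return np.linalg.norm(x[None, :] - centres, axis=-1) @ weights

# Toy example: points sampled on a unit sphere, with outward normals equal to the points.
rng = np.random.default_rng(1)
p = rng.normal(size=(80, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
centres, w = fit_rbf_implicit(p, p)
print(evaluate(np.array([0.0, 0.0, 1.0]), centres, w))   # should be near 0 (on the surface)
print(evaluate(np.array([0.0, 0.0, 1.5]), centres, w))   # should be positive (outside)
```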
Collaborative voxel-based surgical virtual environments.
Acosta, Eric; Muniz, Gilbert; Armonda, Rocco; Bowyer, Mark; Liu, Alan
2008-01-01
Virtual Reality-based surgical simulators can utilize Collaborative Virtual Environments (C-VEs) to provide team-based training. To support real-time interactions, C-VEs are typically replicated on each user's local computer and a synchronization method helps keep all local copies consistent. This approach does not work well for voxel-based C-VEs since large and frequent volumetric updates make synchronization difficult. This paper describes a method that allows multiple users to interact within a voxel-based C-VE for a craniotomy simulator being developed. Our C-VE method requires smaller update sizes and provides faster synchronization update rates than volumetric-based methods. Additionally, we address network bandwidth/latency issues to simulate networked haptic and bone drilling tool interactions with a voxel-based skull C-VE.
A Novel Approach to Rotorcraft Damage Tolerance
NASA Technical Reports Server (NTRS)
Forth, Scott C.; Everett, Richard A.; Newman, John A.
2002-01-01
Damage-tolerance methodology is positioned to replace safe-life methodologies for designing rotorcraft structures. The argument for implementing a damage-tolerance method comes from the fundamental fact that rotorcraft structures typically fail by fatigue cracking. Therefore, if technology permits prediction of fatigue-crack growth in structures, a damage-tolerance method should deliver the most accurate prediction of component life. Implementing damage-tolerance (DT) into high-cycle-fatigue (HCF) components will require a shift from traditional DT methods that rely on detecting an initial flaw with nondestructive inspection (NDI) methods. The rapid accumulation of cycles in a HCF component will result in a design based on a traditional DT method that is either impractical because of frequent inspections, or because the design will be too heavy to operate efficiently. Furthermore, once a HCF component develops a detectable propagating crack, the remaining fatigue life is short, sometimes less than one flight hour, which does not leave sufficient time for inspection. Therefore, designing a HCF component will require basing the life analysis on an initial flaw that is undetectable with current NDI technology.
NASA Astrophysics Data System (ADS)
Hritz, Andrew D.; Raymond, Timothy M.; Dutcher, Dabrina D.
2016-08-01
Accurate estimates of particle surface tension are required for models concerning atmospheric aerosol nucleation and activation. However, it is difficult to collect the volumes of atmospheric aerosol required by typical instruments that measure surface tension, such as goniometers or Wilhelmy plates. In this work, a method that measures, ex situ, the surface tension of collected liquid nanoparticles using atomic force microscopy is presented. A film of particles is collected via impaction and is probed using nanoneedle tips with the atomic force microscope. This micro-Wilhelmy method allows for direct measurements of the surface tension of small amounts of sample. This method was verified using liquids, whose surface tensions were known. Particles of ozone oxidized α-pinene, a well-characterized system, were then produced, collected, and analyzed using this method to demonstrate its applicability for liquid aerosol samples. It was determined that oxidized α-pinene particles formed in dry conditions have a surface tension similar to that of pure α-pinene, and oxidized α-pinene particles formed in more humid conditions have a surface tension that is significantly higher.
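As background, the micro-Wilhelmy readout reduces to the classical Wilhelmy force balance: surface tension equals the meniscus force on the nanoneedle divided by the wetted perimeter times the cosine of the contact angle. The sketch below applies that relation with invented cantilever and needle parameters; none of the numbers are measurements from the study.

```python
# Back-of-the-envelope sketch of the Wilhelmy relation behind the micro-Wilhelmy AFM
# measurement: gamma = force / (wetted perimeter * cos(theta)). All values are illustrative.
import math

def wilhelmy_surface_tension(force_N, needle_diameter_m, contact_angle_deg=0.0):
    perimeter = math.pi * needle_diameter_m            # circumference of a cylindrical nanoneedle
    return force_N / (perimeter * math.cos(math.radians(contact_angle_deg)))

# AFM force from cantilever deflection via Hooke's law: F = k * deflection.
k = 0.5                     # N/m, assumed cantilever spring constant
deflection = 15e-9          # m, assumed deflection at the meniscus
force = k * deflection      # 7.5 nN
gamma = wilhelmy_surface_tension(force, needle_diameter_m=100e-9)
print(f"surface tension ~ {gamma * 1e3:.1f} mN/m")      # ~24 mN/m for these assumed numbers
```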
Economic method for helical gear flank surface characterisation
NASA Astrophysics Data System (ADS)
Koulin, G.; Reavie, T.; Frazer, R. C.; Shaw, B. A.
2018-03-01
Typically the quality of a gear pair is assessed based on simplified geometric tolerances which do not always correlate with functional performance. In order to identify and quantify functional performance based parameters, further development of the gear measurement approach is required. A methodology for interpolation of the full active helical gear flank surface, from sparse line measurements, is presented. The method seeks to identify the minimum number of line measurements required to sufficiently characterise an active gear flank. In the form-ground gear example presented, a single helix and three profile line measurements were considered to be acceptable. The resulting surfaces can be used to simulate the meshing engagement of a gear pair and therefore provide insight into functional performance based parameters. Therefore the assessment of quality can be based on the predicted performance in the context of an application.
Managing Complex IT Security Processes with Value Based Measures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali
2009-01-01
Current trends indicate that IT security measures will need to greatly expand to counter the ever increasingly sophisticated, well-funded and/or economically motivated threat space. Traditional risk management approaches provide an effective method for guiding courses of action for assessment and mitigation investments. However, such approaches, no matter how popular, demand very detailed knowledge about the IT security domain and the enterprise/cyber architectural context. Typically, the critical nature and/or high stakes require careful consideration and adaptation of a balanced approach that provides reliable and consistent methods for rating vulnerabilities. As reported in earlier works, the Cyberspace Security Econometrics System provides a comprehensive measure of reliability, security and safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders' interests in that requirement. This paper advocates a dependability measure that acknowledges the aggregate structure of complex system specifications, and accounts for variations by stakeholder, by specification components, and by verification and validation impact.
ZyFISH: A Simple, Rapid and Reliable Zygosity Assay for Transgenic Mice
McHugh, Donal; O’Connor, Tracy; Bremer, Juliane; Aguzzi, Adriano
2012-01-01
Microinjection of DNA constructs into fertilized mouse oocytes typically results in random transgene integration at a single genomic locus. The resulting transgenic founders can be used to establish hemizygous transgenic mouse lines. However, practical and experimental reasons often require that such lines be bred to homozygosity. Transgene zygosity can be determined by progeny testing assays which are expensive and time-consuming, by quantitative Southern blotting which is labor-intensive, or by quantitative PCR (qPCR) which requires transgene-specific design. Here, we describe a zygosity assessment procedure based on fluorescent in situ hybridization (zyFISH). The zyFISH protocol entails the detection of transgenic loci by FISH and the concomitant assignment of homozygosity using a concise and unbiased scoring system. The method requires small volumes of blood, is scalable to at least 40 determinations per assay, and produces results entirely consistent with the progeny testing assay. This combination of reliability, simplicity and cost-effectiveness makes zyFISH a method of choice for transgenic mouse zygosity determinations. PMID:22666404
Planetary Protection Considerations For Exomars Meteorological Instrumentation.
NASA Astrophysics Data System (ADS)
Camilletti, Adam
2007-10-01
Planetary protection requirements for Oxford University's contribution to the upcoming ESA ExoMars mission are discussed, and the current methods being used to fulfil these requirements are detailed and reviewed. Oxford University is supplying temperature and wind sensors to the mission, and since these will be exposed to the Martian environment there is a requirement that they are sterilised to the stringent COSPAR standards adhered to by ESA. Typically, dry heat microbial reduction (DHMR) is used to reduce spacecraft bioburden, but the high temperatures involved are not compatible with some hardware elements. Alternative, low-temperature sterilisation methods are reviewed and their applicability to spacecraft hardware discussed. The use of a commercially available, bench-top endotoxin tester in planetary protection is also discussed and data from preliminary tests performed at Oxford are presented. These devices, which utilise the immune response of horseshoe crabs to the presence of endotoxin, have the potential to reduce the time taken to determine bioburden by removing the need for conventional assaying, a lengthy and sometimes expensive process.
Closed Loop System Identification with Genetic Algorithms
NASA Technical Reports Server (NTRS)
Whorton, Mark S.
2004-01-01
High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. Closed loop system identification is often required to obtain a multivariable open loop plant model based on closed-loop response data. In order to provide an accurate initial plant model to guarantee convergence for standard local optimization methods, this paper presents a global parameter optimization method using genetic algorithms. A minimal representation of the state space dynamics is employed to mitigate the non-uniqueness and over-parameterization of general state space realizations. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance.
Optimization Based Efficiencies in First Order Reliability Analysis
NASA Technical Reports Server (NTRS)
Peck, Jeffrey A.; Mahadevan, Sankaran
2003-01-01
This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's Quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results compared to standard FORM results. It is found that BFORM typically requires fewer functional evaluations than FORM to converge to the same answer.
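A minimal sketch of the idea is given below: a standard HL-RF FORM iteration in standard-normal space in which the limit-state gradient is obtained once by finite differences and thereafter refreshed with Broyden's rank-one secant update. The toy limit-state function, tolerances, and starting point are assumptions for illustration, not the paper's example problems.

```python
# Sketch of a FORM (HL-RF) iteration with the limit-state gradient kept up to date by
# Broyden's rank-one secant update instead of repeated finite differencing. Illustrative only.
import numpy as np

def g(u):                                    # toy limit-state function, g <= 0 means failure
    return 3.0 - u[0] - u[1] - 0.2 * u[0] * u[1]

def fd_gradient(f, u, h=1e-6):               # finite-difference gradient (the expensive call)
    return np.array([(f(u + h * e) - f(u)) / h for e in np.eye(len(u))])

def bform(f, n, tol=1e-6, max_iter=50):
    u = np.zeros(n)
    grad = fd_gradient(f, u)                  # evaluated once, then refreshed by Broyden
    for _ in range(max_iter):
        gu = f(u)
        u_new = (grad @ u - gu) / (grad @ grad) * grad        # HL-RF step
        if np.linalg.norm(u_new - u) < tol:
            return u_new, np.linalg.norm(u_new)               # design point and beta
        du, dg = u_new - u, f(u_new) - gu
        grad = grad + (dg - grad @ du) / (du @ du) * du       # Broyden rank-one secant update
        u = u_new
    return u, np.linalg.norm(u)

u_star, beta = bform(g, n=2)
print(f"design point {u_star}, reliability index beta ~ {beta:.3f}")   # beta ~ 1.87 here
```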
Donovan, Carl; Harwood, John; King, Stephanie; Booth, Cormac; Caneco, Bruno; Walker, Cameron
2016-01-01
There are many developments for offshore renewable energy around the United Kingdom whose installation typically produces large amounts of far-reaching noise, potentially disturbing many marine mammals. The potential to affect the favorable conservation status of many species means extensive environmental impact assessment requirements for the licensing of such installation activities. Quantification of such complex risk problems is difficult and much of the key information is not readily available. Expert elicitation methods can be employed in such pressing cases. We describe the methodology used in an expert elicitation study conducted in the United Kingdom for combining expert opinions based on statistical distributions and copula-like methods.
Valente, Bruno D.; Morota, Gota; Peñagaricano, Francisco; Gianola, Daniel; Weigel, Kent; Rosa, Guilherme J. M.
2015-01-01
The term “effect” in additive genetic effect suggests a causal meaning. However, inferences of such quantities for selection purposes are typically viewed and conducted as a prediction task. Predictive ability as tested by cross-validation is currently the most acceptable criterion for comparing models and evaluating new methodologies. Nevertheless, it does not directly indicate if predictors reflect causal effects. Such evaluations would require causal inference methods that are not typical in genomic prediction for selection. This suggests that the usual approach to infer genetic effects contradicts the label of the quantity inferred. Here we investigate if genomic predictors for selection should be treated as standard predictors or if they must reflect a causal effect to be useful, requiring causal inference methods. Conducting the analysis as a prediction or as a causal inference task affects, for example, how covariates of the regression model are chosen, which may heavily affect the magnitude of genomic predictors and therefore selection decisions. We demonstrate that selection requires learning causal genetic effects. However, genomic predictors from some models might capture noncausal signal, providing good predictive ability but poorly representing true genetic effects. Simulated examples are used to show that aiming for predictive ability may lead to poor modeling decisions, while causal inference approaches may guide the construction of regression models that better infer the target genetic effect even when they underperform in cross-validation tests. In conclusion, genomic selection models should be constructed to aim primarily for identifiability of causal genetic effects, not for predictive ability. PMID:25908318
Bron, Alain M; Viswanathan, Ananth C; Thelen, Ulrich; de Natale, Renato; Ferreras, Antonio; Gundgaard, Jens; Schwartz, Gail; Buchholz, Patricia
2010-01-01
Objective Low vision that causes forfeiture of driver’s licenses and collection of disability pension benefits can lead to negative psychosocial and economic consequences. The purpose of this study was to review the requirements for holding a driver’s license and rules for obtaining a disability pension due to low vision. Results highlight the possibility of using a milestone approach to describe progressive eye disease. Methods Government and research reports, websites, and journal articles were evaluated to review rules and requirements in Germany, Spain, Italy, France, the UK, and the US. Results Visual acuity limits are present in all driver’s license regulations. In most countries, the visual acuity limit is 0.5. Visual field limits are included in some driver’s license regulations. In Europe, binocular visual field requirements typically follow the European Union standard of ≥120°. In the US, the visual field requirements are typically between 110° and 140°. Some countries distinguish between being partially sighted and blind in the definition of legal blindness, and in others there is only one limit. Conclusions Loss of driving privileges could be used as a milestone to monitor progressive eye disease. Forfeiture could be standardized as a best-corrected visual acuity of <0.5 or visual field of <120°, which is consistent in most countries. However, requirements to receive disability pensions were too variable to standardize as milestones in progressive eye disease. Implementation of the World Health Organization criteria for low vision and blindness would help to establish better comparability between countries. PMID:21179219
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R W; Pember, R B; Elliott, N S
2001-10-22
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.
NASA Technical Reports Server (NTRS)
Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.
2017-01-01
The Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP) and the Behavioral Health and Performance (BHP) Element are conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within the volume. NASA needs methods to unobtrusively collect NHV data without impacting crew time. Required data include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments methods exist, yet many are obtrusive and require significant post-processing. Examples used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multi-camera methods. Due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. However, multiple technologies have not yet been applied to space operations for these purposes. Two of these are (1) 3D Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and (2) depth imaging systems which allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect or Light Detection and Ranging/Light-Radar systems, referred to as LIDAR).
Mechanical excitation of rodlike particles by a vibrating plate.
Trittel, Torsten; Harth, Kirsten; Stannarius, Ralf
2017-06-01
The experimental realization and investigation of granular gases usually require an initial or permanent excitation of ensembles of particles, either mechanically or electromagnetically. One typical method is the energy supply by a vibrating plate or container wall. We study the efficiency of such an excitation of cylindrical particles by a sinusoidally oscillating wall and characterize the distribution of kinetic energies of excited particles over their degrees of freedom. The influences of excitation frequency and amplitude are analyzed.
2016-01-01
Family Policy’s SECO program, which reviewed existing SECO metrics and data sources, as well as analytic methods of previous research, to determine ...process that requires an iterative cycle of assessment of collected data (typically, but not solely, quantitative data) to determine whether SECO...RAND suggests five steps to develop and implement the SECO internal monitoring system: Step 1. Describe the logic or theory of how activities are
Copper-catalyzed decarboxylative trifluoromethylation of allylic bromodifluoroacetates.
Ambler, Brett R; Altman, Ryan A
2013-11-01
The development of new synthetic fluorination reactions has important implications in medicinal, agricultural, and materials chemistries. Given the prevalence and accessibility of alcohols, methods to convert alcohols to trifluoromethanes are desirable. However, this transformation typically requires four-step processes, specialty chemicals, and/or stoichiometric metals to access the trifluoromethyl-containing product. A two-step copper-catalyzed decarboxylative protocol for converting allylic alcohols to trifluoromethanes is reported. Preliminary mechanistic studies distinguish this reaction from previously reported Cu-mediated reactions.
Parallel processing implementations of a contextual classifier for multispectral remote sensing data
NASA Technical Reports Server (NTRS)
Siegel, H. J.; Swain, P. H.; Smith, B. W.
1980-01-01
Contextual classifiers are being developed as a method to exploit the spatial/spectral context of a pixel to achieve accurate classification. Classification algorithms such as the contextual classifier typically require large amounts of computation time. One way to reduce the execution time of these tasks is through the use of parallelism. The applicability of the CDC flexible processor system and of a proposed multimicroprocessor system (PASM) for implementing contextual classifiers is examined.
Modified electrokinetic sample injection method in chromatography and electrophoresis analysis
Davidson, J. Courtney; Balch, Joseph W.
2001-01-01
A sample injection method for horizontally configured multiple chromatography or electrophoresis units, each containing a number of separation/analysis channels, that enables efficient introduction of analyte samples. This loading method, when used in conjunction with horizontal microchannels, allows much reduced sample volumes and provides a means of sample stacking to greatly reduce the concentration of the sample. This reduction in the amount of sample can lead to great cost savings in sample preparation, particularly in massively parallel applications such as DNA sequencing. The essence of this method lies in the preparation of the input of the separation channel, the physical sample introduction, and the subsequent removal of excess material. By this method, sample volumes of 100 nanoliters to 2 microliters have been used successfully, compared to the typical 5 microliters of sample required by the prior separation/analysis method.
Automatic approach to deriving fuzzy slope positions
NASA Astrophysics Data System (ADS)
Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi
2018-03-01
Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership value (or similarity) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined totally by users in the prototype-based inference method, in the proposed approach the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, i.e., the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were speeded up by parallel computing. Two study cases were provided to demonstrate that this approach can properly, conveniently and quickly derive the fuzzy slope positions.
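To make the prototype-based inference concrete, a hedged sketch is given below: membership of a grid cell in a slope-position type is computed by comparing its topographic attributes with those of a typical (prototype) location using bell-shaped similarity functions, which are then combined with a minimum operator. The attribute names, prototype values, and spreads are invented placeholders, not values derived by the paper's rule set.

```python
# Illustrative sketch of prototype-based fuzzy membership to a slope-position type.
# Prototype values and spread parameters are assumptions, not the paper's derived values.
import numpy as np

def attribute_similarity(value, prototype, spread):
    """Gaussian-like similarity in [0, 1] between a cell attribute and the prototype."""
    return np.exp(-0.5 * ((value - prototype) / spread) ** 2)

def fuzzy_membership(cell, prototypes, spreads):
    """Combine per-attribute similarities with a limiting-factor (minimum) operator."""
    sims = [attribute_similarity(cell[k], prototypes[k], spreads[k]) for k in prototypes]
    return float(np.min(sims))

# Hypothetical prototype of a "ridge" slope position (attributes of a typical location).
ridge_prototype = {"slope_gradient": 3.0,      # degrees
                   "profile_curvature": 0.02,
                   "relative_position": 0.95}
ridge_spread = {"slope_gradient": 4.0,
                "profile_curvature": 0.05,
                "relative_position": 0.10}

cell = {"slope_gradient": 6.0, "profile_curvature": 0.01, "relative_position": 0.88}
print(f"membership to 'ridge': {fuzzy_membership(cell, ridge_prototype, ridge_spread):.2f}")
```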
A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine
NASA Astrophysics Data System (ADS)
Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong
2015-08-01
The extraction of fault features and diagnostic techniques for reciprocating compressors are currently active research topics in the field of reciprocating machinery fault diagnosis. A large number of feature extraction and classification methods have been widely applied in related research, but practical fault alarm and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarm and automatic diagnosis in practical engineering is therefore an urgent task. The typical mechanical faults of reciprocating compressors are presented in this paper, and data from an existing online monitoring system are used to extract 15 types of fault feature parameters in total; the sensitive connections between faults and feature parameters are clarified using the distance evaluation technique, and sensitive characteristic parameters for the different faults are obtained. On this basis, a method based on fault feature parameters and a support vector machine (SVM) is developed and applied to practical fault diagnosis. Improved early fault warning capability is demonstrated by experiments and practical fault cases, and automatic classification of the fault alarm data with the SVM achieves better diagnostic accuracy.
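A minimal sketch of the classification stage is shown below, using scikit-learn's SVM on synthetic feature vectors that stand in for the sensitive parameters selected by the distance evaluation technique; the fault classes, feature values, and SVM settings are placeholders, not the monitoring-system data.

```python
# Minimal sketch of the SVM classification stage on synthetic "sensitive" feature
# parameters. Class names, feature values and SVM settings are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
classes = ["normal", "valve_leak", "bearing_clearance"]

# Synthetic data set: 3 sensitive parameters per sample, clustered by condition.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 3)) for c in range(len(classes))])
y = np.repeat(classes, 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
print("predicted condition:", model.predict(X_test[:1])[0])
```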
An efficient strongly coupled immersed boundary method for deforming bodies
NASA Astrophysics Data System (ADS)
Goza, Andres; Colonius, Tim
2016-11-01
Immersed boundary methods treat the fluid and immersed solid with separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.
Recent research related to prediction of stall/spin characteristics of fighter aircraft
NASA Technical Reports Server (NTRS)
Nguyen, L. T.; Anglin, E. L.; Gilbert, W. P.
1976-01-01
The NASA Langley Research Center is currently engaged in a stall/spin research program to provide the fundamental information and design guidelines required to predict the stall/spin characteristics of fighter aircraft. The prediction methods under study include theoretical spin prediction techniques and piloted simulation studies. The paper discusses the overall status of theoretical techniques including: (1) input data requirements, (2) math model requirements, and (3) correlation between theoretical and experimental results. The Langley Differential Maneuvering Simulator (DMS) facility has been used to evaluate the spin susceptibility of several current fighters during typical air combat maneuvers and to develop and evaluate the effectiveness of automatic departure/spin prevention concepts. The evaluation procedure is described and some of the more significant results of the studies are presented.
Breast surface estimation for radar-based breast imaging systems.
Williams, Trevor C; Sill, Jeff M; Fear, Elise C
2008-06-01
Radar-based microwave breast-imaging techniques typically require the antennas to be placed at a certain distance from or on the breast surface. This requires prior knowledge of the breast location, shape, and size. The method proposed in this paper for obtaining this information is based on a modified tissue sensing adaptive radar algorithm. First, a breast surface detection scan is performed. Data from this scan are used to localize the breast by creating an estimate of the breast surface. If required, the antennas may then be placed at specified distances from the breast surface for a second tumor-sensing scan. This paper introduces the breast surface estimation and antenna placement algorithms. Surface estimation and antenna placement results are demonstrated on three-dimensional breast models derived from magnetic resonance images.
NASA Astrophysics Data System (ADS)
Mai, W.; Zhang, J.-F.; Zhao, X.-M.; Li, Z.; Xu, Z.-W.
2017-11-01
Wastewater from the dye industry is typically analyzed using a standard method for measurement of chemical oxygen demand (COD) or by a single-wavelength spectroscopic method. To overcome the disadvantages of these methods, ultraviolet-visible (UV-Vis) spectroscopy was combined with principal component regression (PCR) and partial least squares regression (PLSR) in this study. Unlike the standard method, this method does not require digestion of the samples for preparation. Experiments showed that the PLSR model offered high prediction performance for COD, with a mean relative error of about 5% for two dyes. This error is similar to that obtained with the standard method. In this study, the precision of the PLSR model decreased with the number of dye compounds present. It is likely that multiple models will be required in reality, and the complexity of a COD monitoring system would be greatly reduced if the PLSR model is used because it can include several dyes. UV-Vis spectroscopy with PLSR successfully enhanced the performance of COD prediction for dye wastewater and showed good potential for application in on-line water quality monitoring.
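The calibration step can be illustrated with a short scikit-learn sketch: absorbance spectra are regressed against reference COD values with partial least squares, and the model is then evaluated on held-out samples. The wavelength grid, synthetic spectra, and COD values below are simulated placeholders, not the study's measurements.

```python
# Sketch of a PLSR calibration for COD from UV-Vis spectra (scikit-learn), using
# simulated spectra and reference values rather than the paper's data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavelengths = np.linspace(200, 800, 301)                  # nm

def synthetic_spectrum(cod):
    """Toy dye absorbance: two Gaussian bands scaled by concentration plus noise."""
    band = lambda centre, width: np.exp(-0.5 * ((wavelengths - centre) / width) ** 2)
    return cod * (0.8 * band(290, 30) + 0.5 * band(520, 40)) / 100 + rng.normal(0, 0.01, wavelengths.size)

cod_values = rng.uniform(20, 400, 120)                    # mg/L, simulated reference COD
spectra = np.array([synthetic_spectrum(c) for c in cod_values])

X_train, X_test, y_train, y_test = train_test_split(spectra, cod_values, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_train, y_train)
pred = pls.predict(X_test).ravel()
print("mean relative error: %.1f%%" % (100 * np.mean(np.abs(pred - y_test) / y_test)))
```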
Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells
NASA Astrophysics Data System (ADS)
Zimmerman, A. H.
1987-09-01
The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.
Adaptive optimal training of animal behavior
NASA Astrophysics Data System (ADS)
Bak, Ji Hyun; Choi, Jung Yoon; Akrami, Athena; Witten, Ilana; Pillow, Jonathan
Neuroscience experiments often require training animals to perform tasks designed to elicit various sensory, cognitive, and motor behaviors. Training typically involves a series of gradual adjustments of stimulus conditions and rewards in order to bring about learning. However, training protocols are usually hand-designed, and often require weeks or months to achieve a desired level of task performance. Here we combine ideas from reinforcement learning and adaptive optimal experimental design to formulate methods for efficient training of animal behavior. Our work addresses two intriguing problems at once: first, it seeks to infer the learning rules underlying an animal's behavioral changes during training; second, it seeks to exploit these rules to select stimuli that will maximize the rate of learning toward a desired objective. We develop and test these methods using data collected from rats during training on a two-interval sensory discrimination task. We show that we can accurately infer the parameters of a learning algorithm that describes how the animal's internal model of the task evolves over the course of training. We also demonstrate by simulation that our method can provide a substantial speedup over standard training methods.
Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells
NASA Technical Reports Server (NTRS)
Zimmerman, A. H.
1987-01-01
The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.
Evaluation of a cost-effective loads approach. [shock spectra/impedance method for Viking Orbiter
NASA Technical Reports Server (NTRS)
Garba, J. A.; Wada, B. K.; Bamford, R.; Trubert, M. R.
1976-01-01
A shock spectra/impedance method for loads predictions is used to estimate member loads for the Viking Orbiter, a 7800-lb interplanetary spacecraft that has been designed using transient loads analysis techniques. The transient loads analysis approach leads to a lightweight structure but requires complex and costly analyses. To reduce complexity and cost, a shock spectra/impedance method is currently being used to design the Mariner Jupiter Saturn spacecraft. This method has the advantage of using low-cost in-house loads analysis techniques and typically results in more conservative structural loads. The method is evaluated by comparing the increase in Viking member loads to the loads obtained by the transient loads analysis approach. An estimate of the weight penalty incurred by using this method is presented. The paper also compares the calculated flight loads from the transient loads analyses and the shock spectra/impedance method to measured flight data.
Veenstra, Alexander; Liu, Haitao; Lee, Chieh Allen; Du, Yunpeng; Tang, Jie; Kern, Timothy S.
2015-01-01
Diabetic retinopathy is a major cause of visual impairment, which continues to increase in prevalence as more and more people develop diabetes. Despite the importance of vision, the retina is one of the smallest tissues in the body, and specialized techniques to study the retinopathy have been developed. This chapter will summarize several methods used to (i) induce diabetes, (ii) maintain the diabetic animals throughout the months required for the development of typical vascular histopathology, (iii) evaluate vascular histopathology of diabetic retinopathy, and (iv) quantitate abnormalities implicated in the development of the retinopathy. PMID:26331759
New high resolution Random Telegraph Noise (RTN) characterization method for resistive RAM
NASA Astrophysics Data System (ADS)
Maestro, M.; Diaz, J.; Crespo-Yepes, A.; Gonzalez, M. B.; Martin-Martinez, J.; Rodriguez, R.; Nafria, M.; Campabadal, F.; Aymerich, X.
2016-01-01
Random Telegraph Noise (RTN) is one of the main reliability problems of resistive switching-based memories. To understand the physics behind RTN, a complete and accurate RTN characterization is required. The standard equipment used to analyse RTN has a typical time resolution of ∼2 ms which prevents evaluating fast phenomena. In this work, a new RTN measurement procedure, which increases the measurement time resolution to 2 μs, is proposed. The experimental set-up, together with the recently proposed Weighted Time Lag (W-LT) method for the analysis of RTN signals, allows obtaining a more detailed and precise information about the RTN phenomenon.
AVIRIS calibration using the cloud-shadow method
NASA Technical Reports Server (NTRS)
Carder, K. L.; Reinersman, P.; Chen, R. F.
1993-01-01
More than 90 percent of the signal at an ocean-viewing satellite sensor is due to the atmosphere, so a 5 percent sensor-calibration error viewing a target that contributes but 10 percent of the signal received at the sensor may result in a target-reflectance error of more than 50 percent. Since prelaunch calibration accuracies of 5 percent are typical of space-sensor requirements, recalibration of the sensor using ground-based methods is required for low-signal targets. Known target reflectance or water-leaving radiance spectra and atmospheric correction parameters are required. In this article we describe an atmospheric-correction method that uses cloud-shadowed pixels in combination with pixels in a neighborhood region of similar optical properties to remove atmospheric effects from ocean scenes. These neighboring pixels can then be used as known reflectance targets for validation of the sensor calibration and atmospheric correction. The method uses the difference between water-leaving radiance values for these two regions. This allows nearly identical optical contributions to the two signals (e.g., path radiance and Fresnel-reflected skylight) to be removed, leaving mostly solar photons backscattered from beneath the sea to dominate the residual signal. Normalization by incident solar irradiance reaching the sea surface provides the remote-sensing reflectance of the ocean at the location of the neighbor region.
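The core differencing and normalization step can be written compactly; the function below is a schematic rendering of that step only (per-band arrays, arbitrary units), not the full shadow/neighbor pixel-selection procedure.

    import numpy as np

    def neighbor_reflectance(L_neighbor, L_shadow, E_d):
        """Remote-sensing reflectance of the neighbor region: subtracting the
        cloud-shadowed radiance removes the nearly identical path radiance and
        Fresnel-reflected skylight, and dividing by the solar irradiance reaching
        the sea surface normalizes the residual water-leaving signal."""
        return (np.asarray(L_neighbor, float) - np.asarray(L_shadow, float)) / np.asarray(E_d, float)

    # Illustrative per-band values only, not AVIRIS measurements.
    Rrs = neighbor_reflectance([12.1, 10.4, 8.9], [11.2, 9.8, 8.5], [150.0, 140.0, 120.0])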
Fernandes, Richard; Carey, Conn; Hynes, James; Papkovsky, Dmitri
2013-01-01
The importance of food safety has resulted in a demand for a more rapid, high-throughput method for total viable count (TVC). The industry standard for TVC determination (ISO 4833:2003) is widely used but presents users with some drawbacks. The method is materials- and labor-intensive, requiring multiple agar plates per sample. More importantly, the method is slow, with 72 h typically required for a definitive result. Luxcel Biosciences has developed the GreenLight Model 960, a microtiter plate-based assay providing a rapid high-throughput method of aerobic bacterial load assessment through analysis of microbial oxygen consumption. Results are generated in 1-12 h, depending on microbial load. The mix and measure procedure allows rapid detection of microbial oxygen consumption and equates oxygen consumption to microbial load (CFU/g), providing a simple, sensitive means of assessing the microbial contamination levels in foods (1). As bacteria in the test sample grow and respire, they deplete O2, which is detected as an increase in the GreenLight probe signal above the baseline level (2). The time required to reach this increase in signal can be used to calculate the CFU/g of the original sample, based on a predetermined calibration. The higher the initial microbial load, the earlier this threshold is reached (1).
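A minimal sketch of the threshold-time-to-load conversion described above, assuming a log-linear calibration; the slope and intercept are placeholders to be fit against plate-count standards, not GreenLight calibration constants.

    def cfu_per_gram(threshold_time_h, slope=-0.5, intercept=8.0):
        """Estimate microbial load from the time at which the oxygen-probe signal
        crosses its threshold, assuming log10(CFU/g) = intercept + slope * time.
        Earlier threshold crossing implies a higher initial load."""
        return 10 ** (intercept + slope * threshold_time_h)

    print(cfu_per_gram(2.0))    # heavier contamination, early signal rise
    print(cfu_per_gram(10.0))   # lighter contamination, late signal rise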
Easy Leaf Area: Automated digital image analysis for rapid and accurate measurement of leaf area.
Easlon, Hsien Ming; Bloom, Arnold J
2014-07-01
Measurement of leaf areas from digital photographs has traditionally required significant user input unless backgrounds are carefully masked. Easy Leaf Area was developed to batch process hundreds of Arabidopsis rosette images in minutes, removing background artifacts and saving results to a spreadsheet-ready CSV file. • Easy Leaf Area uses the color ratios of each pixel to distinguish leaves and calibration areas from their background and compares leaf pixel counts to a red calibration area to eliminate the need for camera distance calculations or manual ruler scale measurement that other software methods typically require. Leaf areas estimated by this software from images taken with a camera phone were more accurate than ImageJ estimates from flatbed scanner images. • Easy Leaf Area provides an easy-to-use method for rapid measurement of leaf area and nondestructive estimation of canopy area from digital images.
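A rough numpy sketch of the color-ratio idea described above, taking an H x W x 3 RGB array; the ratio thresholds and the 4 cm2 red calibration square are assumptions, not the published Easy Leaf Area defaults.

    import numpy as np

    def leaf_area_cm2(rgb, red_area_cm2=4.0, green_ratio=1.1, red_ratio=1.2):
        """Classify pixels as leaf when green dominates red and blue, and as the
        calibration marker when red dominates, then scale the leaf pixel count by
        the known area of the red calibration square."""
        rgb = np.asarray(rgb, dtype=float) + 1e-6
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        leaf = (g > green_ratio * r) & (g > green_ratio * b)
        calib = (r > red_ratio * g) & (r > red_ratio * b)
        if calib.sum() == 0:
            raise ValueError("no red calibration pixels found")
        return leaf.sum() / calib.sum() * red_area_cm2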
Online learning in optical tomography: a stochastic approach
NASA Astrophysics Data System (ADS)
Chen, Ke; Li, Qin; Liu, Jian-Guo
2018-07-01
We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method in this paper. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE using the incoming–outgoing pair of light intensity. We formulate it as a PDE-constrained optimization problem, in which the mismatch of computed and measured outgoing data is minimized with the same initial data and the RTE constraint. The memory and computation cost it requires, however, is typically prohibitive, especially in high-dimensional space. Smart iterative solvers that use only partial information in each step are therefore called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch. It requires minimal memory and computation and advances fast, and therefore serves the purpose perfectly. In this paper we formulate the problem, in both the nonlinear and the linearized setting, apply the SGD algorithm, and analyze the convergence performance.
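A generic sketch of the online-learning loop follows, with placeholder forward and gradient routines: the RTE solve and its adjoint-based gradient are problem-specific and are not reproduced here.

    import numpy as np

    def sgd_reconstruct(sigma0, data, forward, gradient, step=1e-2, epochs=20, seed=0):
        """At each step a single incoming/outgoing measurement pair is drawn at random
        and the optical parameters are updated with the gradient of that single-datum
        mismatch, so only a small part of the data is touched per iteration."""
        rng = np.random.default_rng(seed)
        sigma = np.array(sigma0, dtype=float)
        for _ in range(epochs):
            for k in rng.permutation(len(data)):
                source, observed = data[k]
                residual = forward(sigma, source) - observed   # placeholder RTE solve
                sigma -= step * gradient(sigma, source, residual)
        return sigma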
Stellinga, Daan; Pietrzyk, Monika E; Glackin, James M E; Wang, Yue; Bansal, Ashu K; Turnbull, Graham A; Dholakia, Kishan; Samuel, Ifor D W; Krauss, Thomas F
2018-03-27
Optical vortex beams are at the heart of a number of novel research directions, both as carriers of information and for the investigation of optical activity and chiral molecules. Optical vortex beams are beams of light with a helical wavefront and associated orbital angular momentum. They are typically generated using bulk optics methods or by a passive element such as a forked grating or a metasurface to imprint the required phase distribution onto an incident beam. Since many applications benefit from further miniaturization, a more integrated yet scalable method is highly desirable. Here, we demonstrate the generation of an azimuthally polarized vortex beam directly by an organic semiconductor laser that meets these requirements. The organic vortex laser uses a spiral grating as a feedback element that gives control over phase, handedness, and degree of helicity of the emitted beam. We demonstrate vortex beams up to an azimuthal index l = 3 that can be readily multiplexed into an array configuration.
Cryogenic temperature effects on sting-balance deflections in the National Transonic Facility
NASA Technical Reports Server (NTRS)
Popernack, Thomas G., Jr.; Adcock, Jerry B.
1990-01-01
An investigation was conducted at the National Transonic Facility (NTF) to document the change in sting-balance deflections from ambient to cryogenic temperatures. Space limitations in some NTF models do not allow the use of on-board angle of attack instrumentation. In order to obtain angle of attack data, pre-determined sting-balance bending data must be combined with arc sector angle measurements. Presently, obtaining pretest sting-balance data requires several cryogenic cycles and cold loadings over a period of several days. A method of reducing the calibration time required is to obtain only ambient temperature sting-balance bending data and correct for changes in material properties at cryogenic temperatures. To validate this method, two typical NTF sting-balance combinations were tested. The test results show excellent agreement with the predicted values and the repeatability of the data was 0.01 degree.
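The correction described above amounts to scaling ambient-temperature bending data by the change in elastic modulus; a one-line sketch is given below, with illustrative modulus values that are not NTF material data.

    def deflection_at_cryo(deflection_ambient_deg, e_ambient_gpa, e_cryo_gpa):
        """Scale an ambient-temperature sting-balance deflection by the modulus ratio,
        assuming deflection under a given load varies inversely with Young's modulus."""
        return deflection_ambient_deg * e_ambient_gpa / e_cryo_gpa

    # Modulus typically rises a few percent at cryogenic temperature, so the
    # predicted deflection drops accordingly (values illustrative).
    print(deflection_at_cryo(0.250, 190.0, 200.0))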
Noninvasive determination of optical lever sensitivity in atomic force microscopy
NASA Astrophysics Data System (ADS)
Higgins, M. J.; Proksch, R.; Sader, J. E.; Polcik, M.; Mc Endoo, S.; Cleveland, J. P.; Jarvis, S. P.
2006-01-01
Atomic force microscopes typically require knowledge of the cantilever spring constant and optical lever sensitivity in order to accurately determine the force from the cantilever deflection. In this study, we investigate a technique to calibrate the optical lever sensitivity of rectangular cantilevers that does not require contact to be made with a surface. This noncontact approach utilizes the method of Sader et al. [Rev. Sci. Instrum. 70, 3967 (1999)] to calibrate the spring constant of the cantilever in combination with the equipartition theorem [J. L. Hutter and J. Bechhoefer, Rev. Sci. Instrum. 64, 1868 (1993)] to determine the optical lever sensitivity. A comparison is presented between sensitivity values obtained from conventional static mode force curves and those derived using this noncontact approach for a range of different cantilevers in air and liquid. These measurements indicate that the method offers a quick, alternative approach for the calibration of the optical lever sensitivity.
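The equipartition step of the noncontact calibration can be sketched as below, assuming the spring constant has already been obtained (for example from the Sader hydrodynamic method) and neglecting mode-shape correction factors; the numerical values are illustrative.

    import math

    def invols_nm_per_v(spring_constant_n_per_m, thermal_v_rms, temperature_k=295.0):
        """Optical lever sensitivity from the thermal deflection noise: equipartition
        gives the RMS thermal deflection sqrt(kB*T/k), which is divided by the RMS
        photodiode voltage of the thermal peak."""
        k_b = 1.380649e-23                                     # Boltzmann constant, J/K
        x_rms = math.sqrt(k_b * temperature_k / spring_constant_n_per_m)
        return x_rms / thermal_v_rms * 1e9                     # nm per volt

    # 0.1 N/m lever with a 2 mV RMS thermal signal -> roughly 100 nm/V.
    print(invols_nm_per_v(0.1, 2.0e-3))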
Learning gestures for customizable human-computer interaction in the operating room.
Schwarz, Loren Arthur; Bigdelou, Ali; Navab, Nassir
2011-01-01
Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to personal requirements and interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable recognizing gestures and tracking particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless body-worn inertial sensors, we avoid issues of camera-based systems, such as sensitivity to illumination and occlusions. Using a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach is able to robustly recognize learned gestures and to distinguish these from other movements.
Neural network evaluation of reflectometry density profiles for control purposes
NASA Astrophysics Data System (ADS)
Santos, J.; Nunes, F.; Manso, M.; Nunes, I.
1999-01-01
Broadband reflectometry is a diagnostic that is able to measure the density profile with high spatial and temporal resolutions; it can therefore be used to improve the performance of advanced tokamak operation modes and to supplement or correct the magnetics for plasma position control. To perform these tasks, real-time processing is needed. Here we present a method that uses a neural network to make a fast evaluation of radial positions for selected density layers. Typical ASDEX Upgrade density profiles were used to generate the simulated network training and test sets. It is shown that the method has the potential to meet the tight timing requirements of control applications with the required accuracy. The network is also able to provide an accurate estimation of the position of density layers below the first density layer that is probed by an O-mode reflectometer, provided that it is trained with a realistic density profile model.
Ionization-Assisted Getter Pumping for Ultra-Stable Trapped Ion Frequency Standards
NASA Technical Reports Server (NTRS)
Tjoelker, Robert L.; Burt, Eric A.
2010-01-01
A method eliminates (or recovers from) residual methane buildup in getter-pumped atomic frequency standard systems by applying ionizing assistance. Ultra-high stability trapped ion frequency standards for applications requiring very high reliability, and/or low power and mass (both for ground-based and space-based platforms) benefit from using sealed vacuum systems. These systems require careful material selection and system processing (cleaning and high-temperature bake-out). Even under the most careful preparation, residual hydrogen outgassing from vacuum chamber walls typically limits the base pressure. Non-evaporable getter pumps (NEGs) provide a convenient pumping option for sealed systems because of low mass and volume, and no power once activated. An ion gauge in conjunction with a NEG can be used to provide a low mass, low-power method for avoiding the deleterious effects of methane buildup in high-performance frequency standard vacuum systems.
Si-strip photon counting detectors for contrast-enhanced spectral mammography
NASA Astrophysics Data System (ADS)
Chen, Buxin; Reiser, Ingrid; Wessel, Jan C.; Malakhov, Nail; Wawrzyniak, Gregor; Hartsough, Neal E.; Gandhi, Thulasi; Chen, Chin-Tu; Iwanczyk, Jan S.; Barber, William C.
2015-08-01
We report on the development of silicon strip detectors for energy-resolved clinical mammography. Typically, X-ray integrating detectors based on scintillating cesium iodide CsI(Tl) or amorphous selenium (a-Se) are used in most commercial systems. Recently, mammography instrumentation has been introduced based on photon-counting Si strip detectors. The required performance for mammography in terms of the output count rate, spatial resolution, and dynamic range must be obtained with sufficient field of view for the application, thus requiring the tiling of pixel arrays and particular scanning techniques. Room-temperature Si strip detectors, operating as direct-conversion X-ray sensors, can provide the required speed when connected to application-specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel, provided that the sensors are designed for rapid signal formation across the X-ray energy ranges of the application. We present our methods and results from the optimization of Si-strip detectors for contrast-enhanced spectral mammography. We describe the method being developed for quantifying iodine contrast using the energy-resolved detector with fixed thresholds. We demonstrate the feasibility of the method by scanning an iodine phantom with clinically relevant contrast levels.
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Micol, John R.
2011-01-01
The factors that determine data volume requirements in a typical wind tunnel test are identified. It is suggested that productivity in wind tunnel testing can be enhanced by managing the inference error risk associated with evaluating residuals in a response surface modeling experiment. The relationship between minimum data volume requirements and the factors upon which they depend is described and certain simplifications to this relationship are realized when specific model adequacy criteria are adopted. The question of response model residual evaluation is treated and certain practical aspects of response surface modeling are considered, including inference subspace truncation. A wind tunnel test plan developed by using the Modern Design of Experiments illustrates the advantages of an early estimate of data volume requirements. Comparisons are made with a representative One Factor At a Time (OFAT) wind tunnel test matrix developed to evaluate a surface to air missile.
AMPS data management concepts. [Atmospheric, Magnetospheric and Plasma in Space experiment
NASA Technical Reports Server (NTRS)
Metzelaar, P. N.
1975-01-01
Five typical AMPS experiments were formulated to allow simulation studies to verify data management concepts. Design studies were conducted to analyze these experiments in terms of the applicable procedures, data processing and displaying functions. Design concepts for AMPS data management system are presented which permit both automatic repetitive measurement sequences and experimenter-controlled step-by-step procedures. Extensive use is made of a cathode ray tube display, the experimenters' alphanumeric keyboard, and the computer. The types of computer software required by the system and the possible choices of control and display procedures available to the experimenter are described for several examples. An electromagnetic wave transmission experiment illustrates the methods used to analyze data processing requirements.
Reproducible analyses of microbial food for advanced life support systems
NASA Technical Reports Server (NTRS)
Petersen, Gene R.
1988-01-01
The use of yeasts in controlled ecological life support systems (CELSS) for microbial food regeneration in space required the accurate and reproducible analysis of intracellular carbohydrate and protein levels. The reproducible analysis of glycogen was a key element in estimating the overall content of edibles in candidate yeast strains. Typical analytical methods for estimating glycogen in Saccharomyces were not found to be entirely applicable to other candidate strains. Rigorous cell lysis coupled with acid/base fractionation followed by specific enzymatic glycogen analyses was required to obtain accurate results in two strains of Candida. A profile of edible fractions of these strains was then determined. The suitability of yeasts as food sources in CELSS food production processes is discussed.
Influence of Initial Inclined Surface Crack on Estimated Residual Fatigue Lifetime of Railway Axle
NASA Astrophysics Data System (ADS)
Náhlík, Luboš; Pokorný, Pavel; Ševčík, Martin; Hutař, Pavel
2016-11-01
Railway axles are subjected to cyclic loading which can lead to fatigue failure. For safe operation of railway axles a damage tolerance approach taking into account a possible defect on railway axle surface is often required. The contribution deals with an estimation of residual fatigue lifetime of railway axle with initial inclined surface crack. 3D numerical model of inclined semi-elliptical surface crack in railway axle was developed and its curved propagation through the axle was simulated by finite element method. Presence of press-fitted wheel in the vicinity of initial crack was taken into account. A typical loading spectrum of railway axle was considered and residual fatigue lifetime was estimated by NASGRO approach. Material properties of typical axle steel EA4T were considered in numerical calculations and lifetime estimation.
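The study itself uses the NASGRO relation with a 3D crack model, which cannot be condensed here; the sketch below instead integrates a simplified Paris law cycle by cycle over a repeating load block, with entirely illustrative constants, to show the general structure of a residual-lifetime estimate.

    import math

    def residual_life_cycles(a0_m, a_crit_m, block_mpa, C=1.0e-10, m=3.0, Y=0.65):
        """Cycle-by-cycle integration of da/dN = C*(dK)^m with dK = Y*dS*sqrt(pi*a),
        repeating the stress block until the crack reaches a critical depth.
        C, m, Y, and the block are placeholders, not EA4T axle data."""
        a, cycles = a0_m, 0
        while a < a_crit_m:
            for delta_s in block_mpa:
                delta_k = Y * delta_s * math.sqrt(math.pi * a)   # MPa*sqrt(m)
                a += C * delta_k ** m                            # growth this cycle
                cycles += 1
                if a >= a_crit_m:
                    break
        return cycles

    block = [120.0, 80.0, 60.0, 40.0] * 250            # one notional loading block
    print(residual_life_cycles(1.0e-3, 2.0e-2, block))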
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
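The patented algorithm itself is not reproduced in the abstract; as context, the minimum-norm baseline it is compared against can be sketched as a clipped pseudoinverse solution, which is exactly where optimality is lost near the effector limits. The effectiveness matrix and limits below are arbitrary.

    import numpy as np

    def pseudoinverse_allocation(B, demand, u_min, u_max):
        """Minimum-norm allocation: solve B u = d in the least-norm sense with the
        Moore-Penrose pseudoinverse, then clip each effector to its physical limits."""
        u = np.linalg.pinv(B) @ np.asarray(demand, dtype=float)
        return np.clip(u, u_min, u_max)

    B = np.array([[1.0, 0.5, -0.3, 0.2],    # effectiveness of four effectors on three axes
                  [0.0, 1.0,  0.4, -0.6],
                  [0.2, -0.1, 1.0,  0.8]])
    print(pseudoinverse_allocation(B, [0.8, -0.4, 0.3], -0.5, 0.5))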
Apollo experience report: Mission evaluation team postflight documentation
NASA Technical Reports Server (NTRS)
Dodson, J. W.; Cordiner, D. H.
1975-01-01
The various postflight reports prepared by the mission evaluation team, including the final mission evaluation report, report supplements, anomaly reports, and the 5-day mission report, are described. The procedures for preparing each report from the inputs of the various disciplines are explained, and the general method of reporting postflight results is discussed. Recommendations for postflight documentation in future space programs are included. The official requirements for postflight documentation and a typical example of an anomaly report are provided as appendixes.
Methods and benefits of experimental seismic evaluation of nuclear power plants. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-07-01
This study reviews experimental techniques, instrumentation requirements, safety considerations, and benefits of performing vibration tests on nuclear power plant containments and internal components. The emphasis is on testing to improve seismic structural models. Techniques for identification of resonant frequencies, damping, and mode shapes are discussed. The benefits of testing with regard to increased damping and more accurate computer models are outlined. A test plan, schedule, and budget are presented for a typical PWR nuclear power plant.
1985-03-18
Implications: Obtaining crack growth behavior for cracks 0.5 mm long (deep) requires tracking procedures other than typically used methods. ASTM stand... Kmax. When helpful, relevant fractography is included. Where reference is made to the long crack trend, the results presented in Figure 12(b) for the... A lesson learned is that continuum fracture mechanics must be applied with caution in dealing with coarse-grained high-strength materials. That is, rp/a, based on...
Pyrolytic graphite collector development program
NASA Technical Reports Server (NTRS)
Wilkins, W. J.
1982-01-01
Pyrolytic graphite promises to have significant advantages as a material for multistage depressed collector electrodes. Among these advantages are lighter weight, improved mechanical stiffness under shock and vibration, reduced secondary electron back-streaming for higher efficiency, and reduced outgassing at higher operating temperatures. The essential properties of pyrolytic graphite and the necessary design criteria are discussed. This includes the study of suitable electrode geometries and methods of attachment to other metal and ceramic collector components consistent with typical electrical, thermal, and mechanical requirements.
NASA Technical Reports Server (NTRS)
Sheibley, D. W.
1974-01-01
The technology and methods developed at the Plum Brook Reactor to analyze 1000 samples per year and report data on as many as 56 elements are described. The manpower for the complete analysis of 20 to 24 samples per week required only 3 to 3.5 hours per sample. The solutions to problems encountered in sample preparation, irradiation, and counting are discussed. The automation of data reduction is described. Typical data on various sample matrices are presented.
Numerical Calculation of Non-uniform Magnetization Using Experimental Magnetic Field Data
NASA Astrophysics Data System (ADS)
Jhun, Bukyoung; Jhun, Youngseok; Kim, Seung-wook; Han, JungHyun
2018-05-01
A relation is derived between the distance from the surface of a magnet and the number of cells required in a numerical calculation to keep the error below a certain threshold. We also developed a method to obtain the magnetization at each part of the magnet from the experimentally measured magnetic field. This method is applied to three magnets with distinct patterns on magnetic-field-viewing film. Each magnet showed a unique pattern of magnetization. We found that the magnet that shows symmetric magnetization on the magnetic-field-viewing film is not uniformly magnetized. This method can be useful for comparing the magnetization of magnets that yield a typical magnetic field with that of magnets that yield an atypical one.
NASA Technical Reports Server (NTRS)
Cannone, Jaime J.; Barnes, Cindy L.; Achari, Aniruddha; Kundrot, Craig E.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
The Sparse Matrix approach for obtaining lead crystallization conditions has proven to be very fruitful for the crystallization of proteins and nucleic acids. Here we report a Sparse Matrix developed specifically for the crystallization of protein-DNA complexes. This method is rapid and economical, typically requiring 2.5 mg of complex to test 48 conditions. The method was originally developed to crystallize basic fibroblast growth factor (bFGF) complexed with DNA sequences identified through in vitro selection, or SELEX, methods. Two DNA aptamers that bind with approximately nanomolar affinity and inhibit the angiogenic properties of bFGF were selected for co-crystallization. The Sparse Matrix produced lead crystallization conditions for both bFGF-DNA complexes.
Calcium phosphate-based coatings on titanium and its alloys.
Narayanan, R; Seshadri, S K; Kwon, T Y; Kim, K H
2008-04-01
Use of titanium as a biomaterial is possible because of its very favorable biocompatibility with living tissue. Titanium implants having calcium phosphate coatings on their surface show good fixation to the bone. This review briefly covers the requirements of typical biomaterials and focuses narrowly on work on titanium. Calcium phosphate ceramics for use in implants are introduced, and various methods of producing calcium phosphate coatings on titanium substrates are elaborated. Advantages and disadvantages of each type of coating from the viewpoint of process simplicity, cost-effectiveness, stability of the coatings, coating integration with the bone, cell behavior, and so forth are highlighted. Taking all these factors into account, the most efficient method(s) of producing these coatings are indicated.
A rapid method for quantification of 242Pu in urine using extraction chromatography and ICP-MS
Gallardo, Athena Marie; Than, Chit; Wong, Carolyn; ...
2017-01-01
Occupational exposure to plutonium is generally monitored through analysis of urine samples. Typically, plutonium is separated from the sample and other actinides, and the concentration is determined using alpha spectroscopy. Current methods for separations and analysis are lengthy and require long count times. A new method for monitoring occupational exposure levels of plutonium has been developed, which requires fewer steps and overall less time than the alpha spectroscopy method. In this method, the urine is acidified, and a 239Pu internal standard is added. The urine is digested in a microwave oven, and plutonium is separated using an Eichrom TRU Resin column. The plutonium is eluted, and the eluant is injected directly into the Inductively Coupled Plasma–Mass Spectrometer (ICP-MS). Compared to a direct “dilute and shoot” method, a 30-fold improvement in sensitivity is achieved. This method was validated by analyzing several batches of spiked samples. Based on these analyses, a combined standard uncertainty plot, which relates uncertainty to concentration, was produced. As a result, the MDA95 was calculated to be 7.0 × 10⁻⁷ μg L⁻¹, and the Lc95 was calculated to be 3.5 × 10⁻⁷ μg L⁻¹ for this method.
A comparison of in vitro cytotoxicity assays in medical device regulatory studies.
Liu, Xuemei; Rodeheaver, Denise P; White, Jeffrey C; Wright, Ann M; Walker, Lisa M; Zhang, Fan; Shannon, Stephen
2018-06-06
Medical device biocompatibility testing is used to evaluate the risk of adverse effects on tissues from exposure to leachates/extracts. A battery of tests is typically recommended in accordance with regulatory standards to determine if the device is biocompatible. In vitro cytotoxicity, a key element of the standards, is a required endpoint for all types of medical devices. Each validated cytotoxicity method has different methodology and acceptance criteria that could influence the selection of a specific test. In addition, some guidances are more specific than others as to the recommended test methods. For example, the International Organization for Standardization (ISO) cites preference for quantitative methods (e.g., tetrazolium (MTT/XTT), neutral red (NR), or colony formation assays (CFA)) over qualitative methods (e.g., elution, agar overlay/diffusion, or direct), while a recent ISO standard for contact lens/lens care solutions specifically requires a qualitative direct test. Qualitative methods are described in the United States Pharmacopeia (USP) while quantitative CFAs are listed in Japan guidance. The aim of this review is to compare the methodologies such as test article preparation, test conditions, and criteria for six cytotoxicity methods recommended in regulatory standards in order to inform decisions on which method(s) to select during the medical device safety evaluation.
Collins, R Lorraine; Kashdan, Todd B; Koutsky, James R; Morsheimer, Elizabeth T; Vetter, Charlene J
2008-01-01
Underage drinkers typically have not developed regular patterns of drinking and so are likely to exhibit situational variation in alcohol intake, including binge drinking. Information about such variation is not well captured by quantity/frequency (QF) measures, which require that drinkers blend information over time to derive a representative estimate of "typical" drinking. The Timeline Followback (TLFB) method is designed to retrospectively capture situational variations in drinking during a specific period of time. We compared our newly-developed Self-administered TLFB (STLFB) measure to a QF measure for reporting alcohol intake. Our sample of 429 (men=204; women=225) underage (i.e., age 18-20 years) drinkers completed the two drinking measures and reported on alcohol problems. The STLFB and QF measures converged in assessing typical daily intake, but the STLFB provided more information about situational variations in alcohol use and better identification of regular versus intermittent binge drinkers. Regular binge drinkers reported more alcohol problems. The STLFB is an easy-to-administer measure of variations in alcohol intake, which can be useful for understanding drinking behavior.
Almutairy, Meznah; Torng, Eric
2018-01-01
Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
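The two sampling schemes compared above can be sketched in a few lines; the window semantics below follow common conventions and may differ in detail from the indexes used in the study.

    def fixed_sampling(seq, k, w):
        """Fixed sampling: keep every w-th k-mer starting position."""
        return {(i, seq[i:i + k]) for i in range(0, len(seq) - k + 1, w)}

    def minimizer_sampling(seq, k, w):
        """Minimizer sampling: for every window of w consecutive k-mers keep the
        lexicographically smallest one; adjacent windows often share their minimizer,
        so duplicates collapse."""
        picked = set()
        for start in range(len(seq) - k - w + 2):
            kmer, i = min((seq[i:i + k], i) for i in range(start, start + w))
            picked.add((i, kmer))
        return picked

    seq = "ACGTACGTTGCAACGTTAGC"
    print(len(fixed_sampling(seq, 5, 4)), len(minimizer_sampling(seq, 5, 4)))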
Space station needs, attributes and architectural options study. Volume 3: Requirements
NASA Technical Reports Server (NTRS)
1983-01-01
A typical system specification format is presented and requirements are compiled. A Program Specification Tree is shown, depicting a high-inclination space station and a low-inclination space station with their typical element breakdowns; the interfaces with other systems are represented along the top blocks. The specification format is directed at the low-inclination space station.
Abdi, Reza; Yasi, Mehdi
2015-01-01
The assessment of environmental flows in rivers is of vital importance for preserving riverine ecosystem processes. This paper addresses the evaluation of environmental flow requirements in three reaches along a typical perennial river (the Zab transboundary river, in north-west Iran), using different hydraulic, hydrological and ecological methods. The motivation for this study came from the construction of three dams and the inter-basin transfer of water from the Zab River to Urmia Lake. Eight hydrological methods (i.e. Tennant, Tessman, flow duration curve analysis, range of variability approach, Smakhtin, flow duration curve shifting, desktop reserve and 7Q2&10 (7-day low flow with a 2- and 10-year return period)); two hydraulic methods (slope value and maximum curvature); and two habitat simulation methods (hydraulic-ecologic, and Q Equation based on water quality indices) were used. The ecological needs of the riverine key species (mainly Barbus capito fish), river geometries, the natural flow regime and the environmental status of river management were the main indices for determining the minimum flow requirements. The results indicate that on the order of 35%, 17% and 18% of the mean annual flow is to be maintained for the upper, middle and downstream river reaches, respectively. The monthly flow rates allocated in the steering program for the three dams are not sufficient to preserve the life of the Zab River.
Validation of Quantitative HPLC Method for Bacosides in KeenMind.
Dowell, Ashley; Davidson, George; Ghosh, Dilip
2015-01-01
Brahmi (Bacopa monnieri) has been used by Ayurvedic medical practitioners in India for almost 3000 years. The pharmacological properties of Bacopa monnieri were studied extensively, and the activities were attributed mainly to the presence of characteristic saponins called "bacosides." Bacosides are a complex mixture of structurally closely related compounds, glycosides of either jujubogenin or pseudojujubogenin. The popularity of herbal medicines and increasing clinical evidence to support associated health claims require standardisation of the phytochemical actives contained in these products. However, unlike allopathic medicines, which typically contain a single active compound, herbal medicines are typically complex mixtures of various phytochemicals. The assay for bacosides in the British Pharmacopoeia monograph for Bacopa monnieri exemplifies that only a subset of the bacosides present is included in the calculation of total bacosides. This results in calculated bacoside values that are significantly lower than those attained for the same material using more inclusive techniques such as UV spectroscopy. This study illustrates some of the problems encountered when applying chemical analysis for standardisation of herbal medicines, particularly in relation to the new method development and validation of bacosides from KeenMind.
NASA Astrophysics Data System (ADS)
de Pascale, P.; Vasile, M.; Casotto, S.
The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, apart from the specific method employed (direct or indirect), require an initial guess, which deeply influences the convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the prospects for exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a large number of design parameters and, thanks to the flexibility offered by such engines, typically give rise to a multi-modal domain with a consequently larger number of optimal solutions. The definition of first guesses is therefore even more challenging, particularly for a broad search over the design parameters, and requires an extensive investigation of the domain in order to locate as many candidate optimal solutions as possible, and ideally the global optimum. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model, in order to account for the beneficial effects of low-thrust propulsion on changes of inclination, using a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a suitable parameterized evolution of the orbital parameters, from which the acceleration required to follow the given trajectory, subject to the constraint set, is obtained through simple algebraic computation. By this method a low-thrust transfer satisfying the boundary conditions on position and velocity can be quickly assessed, with low computational effort since no numerical propagation is required. The hybrid global optimization algorithm consists of two steps: the evolutionary search locates a large number of optima, and possibly the global one, while the deterministic step consists of a branching process that exhaustively partitions the domain to characterize this complex space of solutions extensively. Furthermore, the approach implements a novel direct constraint-handling technique allowing the treatment of mixed-integer nonlinear programming problems (MINLP) typical of multiple swing-by trajectories. A low-thrust transfer to Mars is studied as a test bed for the low-thrust model, presenting the main characteristics of the different proposed shapes and the features of the possible sub-arc segmentations between two planets with respect to different objective functions: minimum-time and minimum-fuel transfers. Various other test cases are also shown and further optimized, demonstrating the capability of the proposed tool.
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
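Of the three methods, Monte Carlo simulation is the easiest to sketch; the toy surrogate below stands in for the aircraft sizing code and includes a mild discontinuity to echo the design spaces described above. All numbers are illustrative.

    import numpy as np

    def monte_carlo_weight(model, mu, sigma, n=10_000, seed=0):
        """Propagate input uncertainty in two configuration design variables through a
        black-box model by sampling and collecting output weight statistics."""
        rng = np.random.default_rng(seed)
        x1 = rng.normal(mu[0], sigma[0], n)
        x2 = rng.normal(mu[1], sigma[1], n)
        w = np.array([model(a, b) for a, b in zip(x1, x2)])
        return w.mean(), w.std(), np.percentile(w, [5, 95])

    # Toy surrogate with a step, purely illustrative of a discontinuous design space.
    toy = lambda x1, x2: 1000 + 50 * x1 + 30 * x2 + (80 if x1 + x2 > 2.1 else 0)
    print(monte_carlo_weight(toy, mu=(1.0, 1.0), sigma=(0.05, 0.08)))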
Two-way coupled SPH and particle level set fluid simulation.
Losasso, Frank; Talton, Jerry; Kwatra, Nipun; Fedkiw, Ronald
2008-01-01
Grid-based methods have difficulty resolving features on or below the scale of the underlying grid. Although adaptive methods (e.g. RLE, octrees) can alleviate this to some degree, separate techniques are still required for simulating small-scale phenomena such as spray and foam, especially since these more diffuse materials typically behave quite differently than their denser counterparts. In this paper, we propose a two-way coupled simulation framework that uses the particle level set method to efficiently model dense liquid volumes and a smoothed particle hydrodynamics (SPH) method to simulate diffuse regions such as sprays. Our novel SPH method allows us to simulate both dense and diffuse water volumes, fully incorporates the particles that are automatically generated by the particle level set method in under-resolved regions, and allows for two way mixing between dense SPH volumes and grid-based liquid representations.
Spin stability of sounding rocket secondary payloads following high velocity ejections
NASA Astrophysics Data System (ADS)
Nelson, Weston M.
The Auroral Spatial Structures Probe (ASSP) mission is a sounding rocket mission studying solar energy input to space weather. ASSP requires the high velocity ejection (up to 50 m/s) of 6 secondary payloads, spin stabilized perpendicular to the ejection velocity. The proposed scientific instrumentation depends on a high degree of spin stability, requiring a maximum coning angle of less than 5°. It also requires that the spin axis be aligned within 25° of the local magnetic field lines. Current ejection methods typically achieve maximum velocities of less than 10 m/s and often produce coning angles in excess of 20°. Because of this they do not meet the ASSP mission requirements. To meet these requirements a new ejection method is being developed by NASA Wallops Flight Facility. Success of the technique in meeting coning angle and B-field alignment requirements is evaluated herein by modeling secondary payload dynamic behavior using a 6-DOF dynamic simulation employing state-space integration written in MATLAB. Simulation results showed that secondary payload mass balancing is the most important factor in meeting stability requirements. Secondary payload mass properties will be measured using an inverted torsion pendulum. If moment of inertia measurement errors can be reduced to 0.5%, it is possible to achieve mean coning and B-field alignment angles of 2.16° and 2.71°, respectively.
Can we (actually) assess global risk?
NASA Astrophysics Data System (ADS)
Di Baldassarre, Giuliano
2013-04-01
The evaluation of the dynamic interactions of the different components of global risk (e.g. hazard, exposure, vulnerability or resilience) is one of the main challenges in risk assessment and management. In state-of-the-art approaches for the analysis of risk, natural and socio-economic systems are typically treated separately by using different methods. In flood risk studies, for instance, physical scientists typically focus on the study of the probability of flooding (i.e. hazard), while social scientists mainly examine the exposure, vulnerability or resilience to flooding. However, these different components are deeply interconnected. Changes in flood hazard might trigger changes in vulnerability, and vice versa. A typical example of these interactions is the so-called "levee effect", whereby heightening levees to reduce the probability of flooding often leads to an increase in the potential adverse consequences of flooding, as people often perceive that flood risk has been completely eliminated once the levee is raised. These interconnections between the different components of risk remain largely unexplored and poorly understood. This lack of knowledge is of serious concern as it limits our ability to plan appropriate risk prevention measures. To design flood control structures, for example, state-of-the-art models can indeed provide quantitative assessments of the corresponding risk reduction associated with the lower probability of flooding. Nevertheless, current methods cannot estimate how, and to what extent, such a reduction might trigger a future increase of the potential adverse consequences of flooding (the aforementioned "levee effect"). Neither can they evaluate how the latter might (in turn) lead to the requirement of additional flood control structures. Thus, while much progress has been made in the static assessment of flood risk, more inter-disciplinary research is required for the development of methods for dynamic risk assessment, which is very much needed in a rapidly changing world. This presentation will discuss these challenges and describe a few initial attempts aiming to better understand the interactions between the different components of flood risk with reference to diverse case studies in Europe, Central America, and Africa.
Why Don't We Ask? A Complementary Method for Assessing the Status of Great Apes
Meijaard, Erik; Mengersen, Kerrie; Buchori, Damayanti; Nurcahyo, Anton; Ancrenaz, Marc; Wich, Serge; Atmoko, Sri Suci Utami; Tjiu, Albertus; Prasetyo, Didik; Nardiyono; Hadiprakarsa, Yokyok; Christy, Lenny; Wells, Jessie; Albar, Guillaume; Marshall, Andrew J.
2011-01-01
Species conservation is difficult. Threats to species are typically high and immediate. Effective solutions for counteracting these threats, however, require synthesis of high quality evidence, appropriately targeted activities, typically costly implementation, and rapid re-evaluation and adaptation. Conservation management can be ineffective if there is insufficient understanding of the complex ecological, political, socio-cultural, and economic factors that underlie conservation threats. When information about these factors is incomplete, conservation managers may be unaware of the most urgent threats or unable to envision all consequences of potential management strategies. Conservation research aims to address the gap between what is known and what knowledge is needed for effective conservation. Such research, however, generally addresses a subset of the factors that underlie conservation threats, producing a limited, simplistic, and often biased view of complex, real world situations. A combination of approaches is required to provide the complete picture necessary to engage in effective conservation. Orangutan conservation (Pongo spp.) offers an example: standard conservation assessments employ survey methods that focus on ecological variables, but do not usually address the socio-cultural factors that underlie threats. Here, we evaluate a complementary survey method based on interviews of nearly 7,000 people in 687 villages in Kalimantan, Indonesia. We address areas of potential methodological weakness in such surveys, including sampling and questionnaire design, respondent biases, statistical analyses, and sensitivity of resultant inferences. We show that interview-based surveys can provide cost-effective and statistically robust methods to better understand poorly known populations of species that are relatively easily identified by local people. Such surveys provide reasonably reliable estimates of relative presence and relative encounter rates of such species, as well as quantifying the main factors that threaten them. We recommend more extensive use of carefully designed and implemented interview surveys, in conjunction with more traditional field methods. PMID:21483859
High-speed aerodynamic design of space vehicle and required hypersonic wind tunnel facilities
NASA Astrophysics Data System (ADS)
Sakakibara, Seizou; Hozumi, Kouichi; Soga, Kunio; Nomura, Shigeaki
Problems associated with the aerodynamic design of space vehicles are considered, with emphasis on the role of hypersonic wind tunnel facilities in the development of the vehicle. First, to identify wind tunnel and computational fluid dynamics (CFD) requirements, operational environments are postulated for hypervelocity vehicles. Typical flight corridors are shown with the associated flow phenomena: real gas effects, low density flow, and non-equilibrium flow. Based on an evaluation of these flight regimes and consideration of the operational requirements, the wind tunnel testing requirements for the aerodynamic design are examined. Then, the aerodynamic design logic and the optimization techniques used to develop and refine the configurations in a traditional phased approach, based on the programmatic design of the space vehicle, are considered. The current design methodology for the determination of aerodynamic characteristics for designing the space vehicle, i.e., (1) ground test data, (2) numerical flow field solutions and (3) flight test data, is also discussed. Based on these considerations, and by identifying the capabilities and limits of experimental and computational methods, the role of a large conventional hypersonic wind tunnel and the high-enthalpy tunnel, and the interrelationship of the wind tunnels and CFD methods in actual aerodynamic design and analysis, are discussed.
Finger-Vein Image Enhancement Using a Fuzzy-Based Fusion Method with Gabor and Retinex Filtering
Shin, Kwang Yong; Park, Young Ho; Nguyen, Dat Tien; Park, Kang Ryoung
2014-01-01
Because of the advantages of finger-vein recognition systems such as live detection and usage as bio-cryptography systems, they can be used to authenticate individual people. However, images of finger-vein patterns are typically unclear because of light scattering by the skin, optical blurring, and motion blurring, which can degrade the performance of finger-vein recognition systems. In response to these issues, a new enhancement method for finger-vein images is proposed. Our method is novel compared with previous approaches in four respects. First, the local and global features of the vein lines of an input image are amplified using Gabor filters in four directions and Retinex filtering, respectively. Second, the means and standard deviations in the local windows of the images produced after Gabor and Retinex filtering are used as inputs for the fuzzy rule and fuzzy membership function, respectively. Third, the optimal weights required to combine the two Gabor and Retinex filtered images are determined using a defuzzification method. Fourth, the use of a fuzzy-based method means that image enhancement does not require additional training data to determine the optimal weights. Experimental results using two finger-vein databases showed that the proposed method enhanced the accuracy of finger-vein recognition compared with previous methods. PMID:24549251
Determination of wall shear stress from mean velocity and Reynolds shear stress profiles
NASA Astrophysics Data System (ADS)
Volino, Ralph J.; Schultz, Michael P.
2018-03-01
An analytical method is presented for determining the Reynolds shear stress profile in steady, two-dimensional wall-bounded flows using the mean streamwise velocity. The method is then utilized with experimental data to determine the local wall shear stress. The procedure is applicable to flows on smooth and rough surfaces with arbitrary pressure gradients. It is based on the streamwise component of the boundary layer momentum equation, which is transformed into inner coordinates. The method requires velocity profiles from at least two streamwise locations, but the formulation of the momentum equation reduces the dependence on streamwise gradients. The method is verified through application to laminar flow solutions and turbulent DNS results from both zero and nonzero pressure gradient boundary layers. With strong favorable pressure gradients, the method is shown to be accurate for finding the wall shear stress in cases where the Clauser fit technique loses accuracy. The method is then applied to experimental data from the literature from zero pressure gradient studies on smooth and rough walls, and favorable and adverse pressure gradient cases on smooth walls. Data from very near the wall are not required for determination of the wall shear stress. Wall friction velocities obtained using the present method agree with those determined in the original studies, typically to within 2%.
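For contrast with the momentum-equation approach, the Clauser-chart technique mentioned above can be sketched as a one-parameter fit of the measured profile to the log law; the von Karman constant, intercept, and inner-region window below are conventional assumed values, not those of the paper.

    import numpy as np

    def u_tau_clauser(y_m, u_mps, nu=1.5e-5, kappa=0.41, B=5.0, window=(50.0, 300.0)):
        """Scan candidate friction velocities and keep the one whose log-law profile
        u = u_tau * (ln(y * u_tau / nu) / kappa + B) best matches the data for points
        falling inside the assumed inner-region window of y+."""
        y, u = np.asarray(y_m, float), np.asarray(u_mps, float)
        best, best_err = None, np.inf
        for u_tau in np.linspace(0.005, 1.0, 2000):
            y_plus = y * u_tau / nu
            mask = (y_plus > window[0]) & (y_plus < window[1])
            if mask.sum() < 3:
                continue
            model = u_tau * (np.log(y_plus[mask]) / kappa + B)
            err = np.mean((u[mask] - model) ** 2)
            if err < best_err:
                best, best_err = u_tau, err
        return best   # wall shear stress then follows as tau_w = rho * u_tau**2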
Beyond Euler's Method: Implicit Finite Differences in an Introductory ODE Course
ERIC Educational Resources Information Center
Kull, Trent C.
2011-01-01
A typical introductory course in ordinary differential equations (ODEs) exposes students to exact solution methods. However, many differential equations must be approximated with numerical methods. Textbooks commonly include explicit methods such as Euler's and Improved Euler's. Implicit methods are typically introduced in more advanced courses…
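As a concrete illustration of the implicit methods mentioned above, the sketch below contrasts explicit and backward (implicit) Euler on a stiff linear test problem of my own choosing; it is not taken from the article.

```python
# Backward (implicit) Euler vs. explicit Euler on a stiff test problem
#   y'(t) = -50 * (y - cos t),  y(0) = 0.
# For a linear ODE the implicit update can be solved in closed form;
# nonlinear problems would need a Newton iteration at each step.
import numpy as np

lam, h, t_end = 50.0, 0.05, 2.0
steps = int(t_end / h)
t = np.linspace(0.0, t_end, steps + 1)

y_explicit = np.zeros(steps + 1)
y_implicit = np.zeros(steps + 1)

for n in range(steps):
    # Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n)
    y_explicit[n + 1] = y_explicit[n] + h * (-lam * (y_explicit[n] - np.cos(t[n])))
    # Implicit Euler: y_{n+1} = y_n + h * f(t_{n+1}, y_{n+1}), solved for y_{n+1}
    y_implicit[n + 1] = (y_implicit[n] + h * lam * np.cos(t[n + 1])) / (1.0 + h * lam)

print("explicit Euler final value:", y_explicit[-1])   # oscillates/blows up for h*lam > 2
print("implicit Euler final value:", y_implicit[-1])   # stays close to cos(t_end)
```

With h = 0.05 and a decay rate of 50, the explicit update has amplification factor |1 - h*lam| = 1.5 and diverges, while the implicit update remains stable, which is the usual motivation for introducing implicit finite differences.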
Scalable DB+IR Technology: Processing Probabilistic Datalog with HySpirit.
Frommholz, Ingo; Roelleke, Thomas
2016-01-01
Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and a nice conceptual idea to model Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules, and the semantics associated with the evaluation of the programs. We report in this paper some of the key features of the HySpirit system required to scale the execution of PDatalog programs. Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. vague match of attributes such as age or price). Thirdly, to handle large data sets there are scalability issues to be addressed, and therefore, HySpirit provides probabilistic relational indexes and parallel and distributed processing. The main contribution of this paper is a consolidated view on the methods of the HySpirit system to make PDatalog applicable in real-scale applications that involve a wide range of requirements typical for data (information) management and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brennan T; Jager, Yetta; March, Patrick
Reservoir releases are typically operated to maximize the efficiency of hydropower production and the value of hydropower produced. In practice, ecological considerations are limited to those required by law. We first describe reservoir optimization methods that include mandated constraints on environmental and other water uses. Next, we describe research to formulate and solve reservoir optimization problems involving both energy and environmental water needs as objectives. Evaluating ecological objectives is a challenge in these problems for several reasons. First, it is difficult to predict how biological populations will respond to flow release patterns. This problem can be circumvented by using ecological models. Second, most optimization methods require complex ecological responses to flow to be quantified by a single metric, preferably a currency that can also represent hydropower benefits. Ecological valuation of instream flows can make optimization methods that require a single currency for the effects of flow on energy and river ecology possible. Third, holistic reservoir optimization problems are unlikely to be structured such that simple solution methods can be used, necessitating the use of flexible numerical methods. One strong advantage of optimal control is the ability to plan for the effects of climate change. We present ideas for developing holistic methods to the point where they can be used for real-time operation of reservoirs. We suggest that developing ecologically sound optimization tools should be a priority for hydropower in light of the increasing value placed on sustaining both the ecological and energy benefits of riverine ecosystems long into the future.
NASA Technical Reports Server (NTRS)
Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.
1992-01-01
Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.
Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T
2015-05-01
The use of transmission near infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry. This is because TNIRS does not require sample preparation and can analyze several tens of tablet samples in an hour. It has the capability to measure all relevant information from a tablet, while still on the production line. However, TNIRS has a narrow spectral range and overtone vibrations often overlap. To perform content uniformity testing in tablets by TNIRS, various properties in the tableting process need to be analyzed by a multivariate prediction model, such as partial least squares regression (PLSR) modeling. One issue is that typical approaches rely on several hundred reference samples as the basis of the method rather than on a strategically designed calibration set. This means that many batches are needed to prepare the reference samples; this requires time and is not cost effective. Our group investigated the concentration dependence of the calibration model with a strategic design. Consequently, we developed a more effective approach to the TNIRS calibration model than the existing methodology.
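A minimal sketch of a PLSR calibration of this general kind is given below, built with scikit-learn on synthetic spectra. The wavelength grid, component count, concentration range, and sample numbers are placeholders for illustration, not values from the study.

```python
# Illustrative PLS-regression calibration for predicting tablet content
# from (synthetic) transmission NIR spectra. All numbers are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_wavelengths = 60, 200

# Synthetic design: concentrations spanning 80-120% of label claim,
# spectra built from a concentration-dependent band plus noise.
concentration = rng.uniform(80.0, 120.0, n_samples)
wavelength_axis = np.linspace(0.0, 1.0, n_wavelengths)
band = np.exp(-((wavelength_axis - 0.5) ** 2) / 0.01)
spectra = (concentration[:, None] * band[None, :]
           + rng.normal(scale=0.5, size=(n_samples, n_wavelengths)))

pls = PLSRegression(n_components=3)
scores = cross_val_score(pls, spectra, concentration,
                         scoring="neg_root_mean_squared_error", cv=5)
print("cross-validated RMSE (% label claim):", -scores.mean())
```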
Factorization in large-scale many-body calculations
Johnson, Calvin W.; Ormand, W. Erich; Krastev, Plamen G.
2013-08-07
One approach for solving interacting many-fermion systems is the configuration-interaction method, also sometimes called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. Here, we discuss factorization in general and introduce a novel, generalized factorization method, essentially a ‘double-factorization’ which speeds up basis generation and set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs both on serial and parallel machines, and discuss the savings in memory due to factorization.
Speeding up 3D speckle tracking using PatchMatch
NASA Astrophysics Data System (ADS)
Zontak, Maria; O'Donnell, Matthew
2016-03-01
Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame-rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) Inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand. 2) For typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
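To make the propagation-plus-random-search idea concrete, here is a simplified 2-D PatchMatch sketch using sum-of-squared-differences matching. It is only the core algorithm under assumed toy data; the paper applies the same idea to 3-D radio-frequency volumes with correlation-based matching, which is not reproduced here.

```python
# Simplified 2-D PatchMatch sketch (SSD matching, forward propagation +
# exponentially shrinking random search). Toy data only.
import numpy as np

def patch(img, y, x, r):
    return img[y - r:y + r + 1, x - r:x + r + 1]

def ssd(a, b):
    d = a.astype(np.float64) - b.astype(np.float64)
    return np.sum(d * d)

def patchmatch(src, dst, r=3, iters=4, search_radius=16, seed=0):
    rng = np.random.default_rng(seed)
    h, w = src.shape
    ys, xs = np.arange(r, h - r), np.arange(r, w - r)
    off = np.zeros((h, w, 2), dtype=np.int64)   # (dy, dx) per pixel, start at zero motion
    cost = np.full((h, w), np.inf)
    for y in ys:
        for x in xs:
            cost[y, x] = ssd(patch(src, y, x, r), patch(dst, y, x, r))

    def try_offset(y, x, dy, dx):
        ny, nx = y + dy, x + dx
        if r <= ny < h - r and r <= nx < w - r:
            c = ssd(patch(src, y, x, r), patch(dst, ny, nx, r))
            if c < cost[y, x]:
                cost[y, x], off[y, x] = c, (dy, dx)

    for _ in range(iters):
        for y in ys:
            for x in xs:
                # Propagation: reuse the offsets of the left and upper neighbors.
                try_offset(y, x, *off[y, x - 1])
                try_offset(y, x, *off[y - 1, x])
                # Random search around the current best offset.
                rad = search_radius
                while rad >= 1:
                    dy = off[y, x, 0] + rng.integers(-rad, rad + 1)
                    dx = off[y, x, 1] + rng.integers(-rad, rad + 1)
                    try_offset(y, x, dy, dx)
                    rad //= 2
    return off

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame0 = rng.random((48, 48))
    frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))   # known motion (2, 3)
    offsets = patchmatch(frame0, frame1)
    print("median estimated offset:",
          np.median(offsets[5:-5, 5:-5, 0]), np.median(offsets[5:-5, 5:-5, 1]))
```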
Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio
2014-01-09
Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size, weight and wireless connectivity meet the requirement of minimal obtrusiveness and allow scientists to analyze children's motion in daily-life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg-Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.
Air sampling with solid phase microextraction
NASA Astrophysics Data System (ADS)
Martos, Perry Anthony
There is an increasing need for simple yet accurate air sampling methods. The acceptance of new air sampling methods requires compatibility with conventional chromatographic equipment, and the new methods have to be environmentally friendly, simple to use, yet with equal, or better, detection limits, accuracy and precision than standard methods. Solid phase microextraction (SPME) satisfies the conditions for new air sampling methods. Analyte detection limits, accuracy and precision of analysis with SPME are typically better than with any conventional air sampling methods. Yet, air sampling with SPME requires no pumps or solvents, is re-usable, extremely simple to use, completely compatible with current chromatographic equipment, and requires only a small capital investment. The first SPME fiber coating used in this study was poly(dimethylsiloxane) (PDMS), a hydrophobic liquid film, used to sample a large range of airborne hydrocarbons such as benzene and octane. Quantification without an external calibration procedure is possible with this coating. The physical and chemical properties of this coating are well understood and are quite similar to those of the siloxane stationary phase used in capillary columns. The log of the analyte distribution coefficients for PDMS is linearly related to chromatographic retention indices and to the inverse of temperature. Therefore, the actual chromatogram from the analysis of the PDMS air sampler will yield the calibration parameters which are used to quantify unknown airborne analyte concentrations (ppbv to ppmv range). The second fiber coating used in this study was PDMS/divinyl benzene (PDMS/DVB) onto which o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine (PFBHA) was adsorbed for the on-fiber derivatization of gaseous formaldehyde (ppbv range), with and without external calibration. The oxime formed from the reaction can be detected with conventional gas chromatographic detectors. Typical grab sampling times were as small as 5 seconds. With 300 seconds sampling, the formaldehyde detection limit was 2.1 ppbv, better than any other 5 minute sampling device for formaldehyde. The first-order rate constant for product formation was used to quantify formaldehyde concentrations without a calibration curve. This spot sampler was used to sample the headspace of hair gel, particle board, plant material and coffee grounds for formaldehyde, and other carbonyl compounds, with extremely promising results. The SPME sampling devices were also used for time-weighted average sampling (30 minutes to 16 hours). Finally, the four new SPME air sampling methods were field tested with side-by-side comparisons to standard air sampling methods, demonstrating the broad utility of SPME as an air sampler.
NASA Astrophysics Data System (ADS)
Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David
2017-02-01
Experimental or operational modal analysis traditionally requires physically-attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or model correlation and updating for larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video-camera-based measurements have been successfully used for vibration measurements and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking. However, they typically require a speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little user supervision and calibration. First, a multi-scale image processing method is applied to the frames of the video of a vibrating structure to extract the local pixel phases that encode local structural vibration, establishing a full-field spatiotemporal motion matrix. Then a high-spatial-dimensional, yet low-modal-dimensional, over-complete model is used to represent the extracted full-field motion matrix using modal superposition, which is physically interpretable and is manipulated by a family of unsupervised learning models and techniques. Thus, the proposed method is able to blindly extract modal frequencies, damping ratios, and full-field (as many points as the pixel number of the video frame) mode shapes from line-of-sight video measurements of the structure. The method is validated by laboratory experiments on a bench-scale building structure and a cantilever beam. Its ability for output (video measurements)-only identification and visualization of the weakly-excited mode is demonstrated and several issues with its implementation are discussed.
Purcell, C; Romijn, A R
2017-11-01
In 2016, 29% of pedestrians killed or seriously injured on the roads in Great Britain were under 15 years of age. Children with Developmental Coordination Disorder (DCD), a chronic disorder affecting the acquisition and execution of motor skills, may be more vulnerable at the roadside than typically developing (TD) children. Current methods used to teach road safety are typically knowledge-based and do not necessarily improve behaviour in real traffic situations. Virtual reality road crossing tasks may be a viable alternative. The present study aimed to test the road crossing accuracy of children with and without DCD in virtual reality tasks that varied the viewpoint to simulate the teaching methods currently used in road safety educational programmes. Twenty-one children with DCD and twenty-one age- and gender-matched TD peers were required to locate the safest road crossing sites in two conditions: allocentric (aerial viewpoint) and egocentric (first-person viewpoint). All children completed both conditions and were required to navigate either themselves or an avatar across the road using the safest crossing route. The primary outcome was accuracy, defined as the number of trials, out of 10, on which the child successfully identified and used the safest crossing route. Children with DCD performed equally poorly in both conditions, while TD children were significantly more accurate in the egocentric condition. This difference cannot be explained by self-reported prior road crossing education, practice or confidence. While TD children may benefit from the development of an egocentric virtual reality road crossing task, multimodal methods may be needed to effectively teach road safety to children with DCD. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Multiple nodes transfer alignment for airborne missiles based on inertial sensor network
NASA Astrophysics Data System (ADS)
Si, Fan; Zhao, Yan
2017-09-01
Transfer alignment is an important initialization method for airborne missiles because the alignment accuracy largely determines the performance of the missile. However, traditional alignment methods are limited by complicated and unknown flexure angle, and cannot meet the actual requirement when wing flexure deformation occurs. To address this problem, we propose a new method that uses the relative navigation parameters between the weapons and fighter to achieve transfer alignment. First, in the relative inertial navigation algorithm, the relative attitudes and positions are constantly computed in wing flexure deformation situations. Secondly, the alignment results of each weapon are processed using a data fusion algorithm to improve the overall performance. Finally, the feasibility and performance of the proposed method were evaluated under two typical types of deformation, and the simulation results demonstrated that the new transfer alignment method is practical and has high-precision.
NASA Technical Reports Server (NTRS)
Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.
1994-01-01
Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semi-conductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method and a Picard type iterative scheme.
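The following is a minimal sketch of the Chebyshev collocation building block only, applied to the model two-point boundary-value problem u'' = f(x) with homogeneous Dirichlet conditions; the coupled solidification equations and the moving melt-solid interface of the paper are not reproduced. The differentiation matrix follows the standard Chebyshev-Gauss-Lobatto construction.

```python
# Chebyshev pseudospectral collocation sketch: differentiation matrix on
# Chebyshev-Gauss-Lobatto points, then solve u'' = exp(4x), u(-1)=u(1)=0.
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))     # rows of a differentiation matrix sum to zero
    return D, x

N = 32
D, x = cheb(N)
D2 = D @ D                          # second-derivative operator

f = np.exp(4.0 * x)
A = D2[1:-1, 1:-1]                  # interior points only (Dirichlet BCs)
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, f[1:-1])

# Exact solution for comparison: u = (exp(4x) - x*sinh(4) - cosh(4)) / 16
exact = (np.exp(4.0 * x) - x * np.sinh(4.0) - np.cosh(4.0)) / 16.0
print("max error:", np.max(np.abs(u - exact)))   # spectral accuracy expected
```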
NASA Technical Reports Server (NTRS)
Holt, James B.; Monk, Timothy S.
2009-01-01
Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between candidate launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of launch vehicles. This includes fundamental methods of pmf calculation that consider only the total propellant mass and the dry mass of the vehicle; more involved methods that consider the residuals, reserves and any other unusable propellant remaining in the vehicle; and calculations excluding large mass quantities such as the installed engine mass. Finally, a historical comparison is made between launch vehicles on the basis of the differing calculation methodologies, while the unique mission and design requirements of the Ares V Earth Departure Stage (EDS) are examined in terms of impact to pmf.
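A small worked example of two of the definitions contrasted above is given below; the stage masses are made up for illustration and are not Ares V numbers.

```python
# Worked example of two propellant-mass-fraction (pmf) definitions on
# made-up stage masses (illustrative values only).
m_propellant_total = 250_000.0   # kg, total loaded propellant
m_residuals = 3_000.0            # kg, unusable propellant (residuals/reserves)
m_dry = 25_000.0                 # kg, stage dry mass

# Fundamental definition: all propellant over total stage mass.
pmf_fundamental = m_propellant_total / (m_propellant_total + m_dry)

# Refined definition: count only usable propellant; residuals ride as dead mass.
m_usable = m_propellant_total - m_residuals
pmf_usable = m_usable / (m_usable + m_dry + m_residuals)

print(f"fundamental pmf:        {pmf_fundamental:.4f}")   # ~0.909
print(f"usable-propellant pmf:  {pmf_usable:.4f}")        # ~0.898
```

Even this small difference matters when pmf is used to compare candidate vehicles, which is the comparison issue the paper addresses.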
GPS/DR Error Estimation for Autonomous Vehicle Localization.
Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In
2015-08-21
Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.
GPS/DR Error Estimation for Autonomous Vehicle Localization
Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In
2015-01-01
Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level. PMID:26307997
NASA Astrophysics Data System (ADS)
Mezgebo, Biniyam; Nagib, Karim; Fernando, Namal; Kordi, Behzad; Sherif, Sherif
2018-02-01
Swept-source optical coherence tomography (SS-OCT) is an important imaging modality for both medical and industrial diagnostic applications. A cross-sectional SS-OCT image is obtained by applying an inverse discrete Fourier transform (DFT) to axial interferograms measured in the frequency domain (k-space). This inverse DFT is typically implemented as a fast Fourier transform (FFT) that requires the data samples to be equidistant in k-space. As the frequency of light produced by a typical wavelength-swept laser is nonlinear in time, the recorded interferogram samples will not be uniformly spaced in k-space. Many image reconstruction methods have been proposed to overcome this problem. Most such methods rely on oversampling the measured interferogram and then use either hardware, e.g., a Mach-Zehnder interferometer as a frequency clock module, or software, e.g., interpolation in k-space, to obtain equally spaced samples that are suitable for the FFT. To overcome the problem of nonuniform sampling in k-space without any need for interferogram oversampling, an earlier method demonstrated the use of the nonuniform discrete Fourier transform (NDFT) for image reconstruction in SS-OCT. In this paper, we present a more accurate method for SS-OCT image reconstruction from nonuniform samples in k-space using a scaled nonuniform Fourier transform. The result is demonstrated using SS-OCT images of Axolotl salamander eggs.
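To illustrate why a nonuniform transform helps, the sketch below applies a plain matrix-form NDFT to a synthetic interferogram sampled at nonlinear wavenumbers and compares it with a naive FFT that wrongly assumes uniform k-spacing. This is only the basic NDFT idea, not the scaled transform or the full reconstruction chain of the paper; the sweep nonlinearity and reflector depth are made up.

```python
# Direct (matrix) nonuniform DFT of an SS-OCT-like interferogram vs. a naive
# FFT that pretends the samples are uniform in k. Synthetic data only.
import numpy as np

N = 512
u = np.linspace(0.0, 1.0, N)                 # sweep "time"
k = 2.0 * np.pi * (u + 0.15 * u**2) / 1.3    # nonlinear wavenumber sweep

depth = 80.0                                 # single reflector (arbitrary units)
signal = np.cos(k * depth)

# Normalized wavenumber in [0, 1]; a reflector at `depth` is a sinusoid of
# (k[-1]-k[0])*depth/(2*pi) cycles across this normalized axis.
kappa = (k - k[0]) / (k[-1] - k[0])
bins = np.arange(N)
ndft = np.exp(-2j * np.pi * np.outer(bins, kappa)) @ signal

fft = np.fft.fft(signal)                     # assumes uniform spacing in k

half = slice(1, N // 2)
print("NDFT peak bin / height:", int(np.argmax(np.abs(ndft[half]))) + 1,
      round(float(np.max(np.abs(ndft[half])))))
print("FFT  peak bin / height:", int(np.argmax(np.abs(fft[half]))) + 1,
      round(float(np.max(np.abs(fft[half])))))   # lower, smeared peak
```

The NDFT evaluated at the actual (nonuniform) wavenumbers produces one sharp axial peak, whereas the chirped signal smears the FFT peak over many depth bins.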
Roberts, Hannah M; Shiller, Alan M
2015-01-26
Methane (CH4) is the third most abundant greenhouse gas (GHG) but is vastly understudied in comparison to carbon dioxide. Estimates of its sources and sinks to the atmosphere, including sources such as fresh and marine water systems, vary considerably. A new method to determine dissolved methane concentrations in discrete water samples has been evaluated. By analyzing an equilibrated headspace using laser cavity ring-down spectroscopy (CRDS), low nanomolar dissolved methane concentrations can be determined with high reproducibility (i.e., 0.13 nM detection limit and typical 4% RSD). While CRDS instruments cost roughly twice as much as the gas chromatographs (GC) usually used for methane determination, the process presented herein is substantially simpler, faster, and requires fewer materials than GC methods. Typically, 70-mL water samples are equilibrated with an equivalent amount of zero air in plastic syringes. The equilibrated headspace is transferred to a clean, dry syringe and then drawn into a Picarro G2301 CRDS analyzer via the instrument's pump. We demonstrate that this instrument holds a linear calibration into the sub-ppmv methane concentration range and holds a stable calibration for at least two years. Application of the method to shipboard dissolved methane determination in the northern Gulf of Mexico as well as river water is shown. Concentrations spanning nearly six orders of magnitude have been determined with this method. Copyright © 2014 Elsevier B.V. All rights reserved.
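The headspace-equilibration step implies a simple mass balance, sketched below for the equal-volume water/zero-air geometry described in the abstract. The Henry solubility constant is a typical near-25 C value and the CRDS reading is invented; a real calculation would use the actual equilibration temperature and pressure.

```python
# Headspace-equilibration mass balance (minimal sketch): recover the original
# dissolved CH4 concentration from a measured headspace mole fraction.
R = 0.082057        # L atm / (mol K)
T = 298.15          # K, assumed equilibration temperature
P = 1.0             # atm, total pressure in the syringe
K_H = 1.4e-3        # mol / (L atm), Henry solubility of CH4 near 25 C (assumed)

V_water = 0.070     # L (70 mL sample)
V_gas = 0.070       # L (70 mL zero-air headspace)

x_measured_ppm = 2.0                    # hypothetical CRDS headspace reading, ppmv
p_ch4 = x_measured_ppm * 1e-6 * P       # CH4 partial pressure, atm

n_gas = p_ch4 * V_gas / (R * T)         # mol CH4 that moved into the headspace
c_aq_equilibrium = K_H * p_ch4          # mol/L still dissolved after equilibration

c_original = (n_gas + c_aq_equilibrium * V_water) / V_water   # mol/L in the sample
print(f"original dissolved CH4: {c_original * 1e9:.1f} nmol/L")
```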
Robust inference in summary data Mendelian randomization via the zero modal pleiotropy assumption.
Hartwig, Fernando Pires; Davey Smith, George; Bowden, Jack
2017-12-01
Mendelian randomization (MR) is being increasingly used to strengthen causal inference in observational studies. Availability of summary data of genetic associations for a variety of phenotypes from large genome-wide association studies (GWAS) allows straightforward application of MR using summary data methods, typically in a two-sample design. In addition to the conventional inverse variance weighting (IVW) method, recently developed summary data MR methods, such as the MR-Egger and weighted median approaches, allow a relaxation of the instrumental variable assumptions. Here, a new method - the mode-based estimate (MBE) - is proposed to obtain a single causal effect estimate from multiple genetic instruments. The MBE is consistent when the largest number of similar (identical in infinite samples) individual-instrument causal effect estimates comes from valid instruments, even if the majority of instruments are invalid. We evaluate the performance of the method in simulations designed to mimic the two-sample summary data setting, and demonstrate its use by investigating the causal effect of plasma lipid fractions and urate levels on coronary heart disease risk. The MBE presented less bias and lower type-I error rates than other methods under the null in many situations. Its power to detect a causal effect was smaller compared with the IVW and weighted median methods, but was larger than that of MR-Egger regression, with sample size requirements typically smaller than those available from GWAS consortia. The MBE relaxes the instrumental variable assumptions, and should be used in combination with other approaches in sensitivity analyses. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association
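The sketch below illustrates the core idea of a mode-based estimate from two-sample summary statistics: compute per-instrument Wald ratios and take the mode of their smoothed distribution. The bandwidth choice and weighting here are simplified relative to the published estimator, and the summary statistics are simulated.

```python
# Sketch of a mode-based estimate (MBE): per-SNP Wald ratios, then the mode
# of their smoothed distribution. Simulated summary data; simplified rules.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
n_snps = 30
true_effect = 0.4

beta_exposure = rng.uniform(0.05, 0.2, n_snps)
beta_outcome = true_effect * beta_exposure + rng.normal(0, 0.005, n_snps)
# Make a third of the instruments invalid (pleiotropic) with varied bias.
invalid = rng.choice(n_snps, n_snps // 3, replace=False)
beta_outcome[invalid] += rng.normal(0.03, 0.02, invalid.size)

wald_ratios = beta_outcome / beta_exposure

# Mode of the smoothed ratio distribution as the causal-effect estimate.
kde = gaussian_kde(wald_ratios)
grid = np.linspace(wald_ratios.min(), wald_ratios.max(), 2001)
mbe = grid[np.argmax(kde(grid))]

print("unweighted mean of ratios:", round(wald_ratios.mean(), 3))  # pulled by invalid SNPs
print("mode-based estimate      :", round(mbe, 3), "(true effect 0.4)")
```

Because the invalid instruments are a minority, the mode sits near the true effect even though the mean of the ratios is biased, which is the ZEMPA intuition described above.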
Method of fabricating a uranium-bearing foil
Gooch, Jackie G [Seymour, TN; DeMint, Amy L [Kingston, TN
2012-04-24
Methods of fabricating a uranium-bearing foil are described. The foil may be substantially pure uranium, or may be a uranium alloy such as a uranium-molybdenum alloy. The method typically includes a series of hot rolling operations on a cast plate material to form a thin sheet. These hot rolling operations are typically performed using a process where each pass reduces the thickness of the plate by a substantially constant percentage. The sheet is typically then annealed and then cooled. The process typically concludes with a series of cold rolling passes where each pass reduces the thickness of the plate by a substantially constant thickness amount to form the foil.
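For illustration, the two rolling schedules mentioned in the abstract can be contrasted as follows: hot passes each remove a constant percentage of the current thickness, while cold passes each remove a constant absolute amount. All numbers below are made up; the patent does not specify these values.

```python
# Illustrative rolling schedules: constant-percentage hot passes followed by
# constant-thickness cold passes. Values are placeholders, not from the patent.
def hot_rolling(thickness_mm, reduction_fraction, passes):
    history = [thickness_mm]
    for _ in range(passes):
        thickness_mm *= (1.0 - reduction_fraction)
        history.append(thickness_mm)
    return history

def cold_rolling(thickness_mm, reduction_mm, passes):
    history = [thickness_mm]
    for _ in range(passes):
        thickness_mm = max(thickness_mm - reduction_mm, 0.0)
        history.append(thickness_mm)
    return history

plate = hot_rolling(5.0, reduction_fraction=0.20, passes=8)   # hypothetical 5 mm cast plate
print("after hot passes (mm): ", [round(t, 3) for t in plate])

foil = cold_rolling(plate[-1], reduction_mm=0.15, passes=5)
print("after cold passes (mm):", [round(t, 3) for t in foil])
```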
Methods and apparatus for transparent display using scattering nanoparticles
Hsu, Chia Wei; Qiu, Wenjun; Zhen, Bo; Shapira, Ofer; Soljacic, Marin
2017-06-14
Transparent displays enable many useful applications, including heads-up displays for cars and aircraft as well as displays on eyeglasses and glass windows. Unfortunately, transparent displays made of organic light-emitting diodes are typically expensive and opaque. Heads-up displays often require fixed light sources and have limited viewing angles. And transparent displays that use frequency conversion are typically energy inefficient. Conversely, the present transparent displays operate by scattering visible light from resonant nanoparticles with narrowband scattering cross sections and small absorption cross sections. More specifically, projecting an image onto a transparent screen doped with nanoparticles that selectively scatter light at the image wavelength(s) yields an image on the screen visible to an observer. Because the nanoparticles scatter light at only certain wavelengths, the screen is practically transparent under ambient light. Exemplary transparent scattering displays can be simple, inexpensive, scalable to large sizes, viewable over wide angular ranges, energy efficient, and transparent simultaneously.
Methods and apparatus for transparent display using scattering nanoparticles
Hsu, Chia Wei; Qiu, Wenjun; Zhen, Bo; Shapira, Ofer; Soljacic, Marin
2016-05-10
Transparent displays enable many useful applications, including heads-up displays for cars and aircraft as well as displays on eyeglasses and glass windows. Unfortunately, transparent displays made of organic light-emitting diodes are typically expensive and opaque. Heads-up displays often require fixed light sources and have limited viewing angles. And transparent displays that use frequency conversion are typically energy inefficient. Conversely, the present transparent displays operate by scattering visible light from resonant nanoparticles with narrowband scattering cross sections and small absorption cross sections. More specifically, projecting an image onto a transparent screen doped with nanoparticles that selectively scatter light at the image wavelength(s) yields an image on the screen visible to an observer. Because the nanoparticles scatter light at only certain wavelengths, the screen is practically transparent under ambient light. Exemplary transparent scattering displays can be simple, inexpensive, scalable to large sizes, viewable over wide angular ranges, energy efficient, and transparent simultaneously.
Werner, William E; Wu, Sylvia; Mulkerrin, Michael
2005-07-01
Typically, the removal of pyroglutamate from the protein chains of immunoglobulins with the enzyme pyroglutamate aminopeptidase requires the use of chaotropic and reducing agents, quite often with limited success. This article describes a series of optimization experiments using elevated temperatures and detergents to denature and stabilize the heavy chains of immunoglobulins such that the pyroglutamate at the amino terminal was accessible to enzymatic removal using the thermostable protease isolated from Pyrococcus furiosus. The detergent polysorbate 20 (Tween 20) was used successfully to facilitate the removal of pyroglutamate residues. A one-step digestion was developed using elevated temperatures and polysorbate 20, rather than chaotropic and reducing agents, with sample cleanup and preparation for Edman sequencing performed using a commercial cartridge containing the PVDF membrane. All of the immunoglobulins digested with this method yielded heavy chain sequence, but the extent of deblocking was immunoglobulin dependent (typically >50%).
Bias Reduction in Short Records of Satellite Soil Moisture
NASA Technical Reports Server (NTRS)
Reichle, Rolf H.; Koster, Randal D.
2004-01-01
Although surface soil moisture data from different sources (satellite retrievals, ground measurements, and land model integrations of observed meteorological forcing data) have been shown to contain consistent and useful information in their seasonal cycle and anomaly signals, they typically exhibit very different mean values and variability. These biases pose a severe obstacle to exploiting the useful information contained in satellite retrievals through data assimilation. A simple method of bias removal is to match the cumulative distribution functions (cdf) of the satellite and model data. However, accurate cdf estimation typically requires a long record of satellite data. We demonstrate here that by using spatial sampling with a 2 degree moving window we can obtain local statistics based on a one-year satellite record that are a good approximation to those that would be derived from a much longer time series. This result should increase the usefulness of relatively short satellite data records.
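A minimal sketch of the cdf-matching step itself is given below: each satellite value is mapped to the model value at the same empirical quantile. The 2-degree spatial-window pooling that the paper uses to stabilize cdf estimates from a short record is omitted here, and the soil-moisture series are synthetic.

```python
# CDF matching sketch: rescale satellite soil moisture so its empirical
# distribution matches the model's. Synthetic one-year records.
import numpy as np

rng = np.random.default_rng(3)
n = 365
model = np.clip(rng.normal(0.25, 0.06, n), 0.0, 0.5)                      # model soil moisture
satellite = np.clip(0.6 * model + 0.10 + rng.normal(0, 0.02, n), 0.0, 0.5)  # biased retrievals

def cdf_match(values, reference):
    """Map each value to the reference value of equal empirical quantile."""
    ranks = np.searchsorted(np.sort(values), values, side="right") / len(values)
    return np.quantile(reference, ranks)

satellite_scaled = cdf_match(satellite, model)

print("means  (model, sat, scaled):",
      round(model.mean(), 3), round(satellite.mean(), 3), round(satellite_scaled.mean(), 3))
print("stddev (model, sat, scaled):",
      round(model.std(), 3), round(satellite.std(), 3), round(satellite_scaled.std(), 3))
```

After matching, the rescaled retrievals reproduce the model's mean and variability while retaining their own temporal anomaly signal, which is what the assimilation step needs.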
A new method of real-time detection of changes in periodic data stream
NASA Astrophysics Data System (ADS)
Lyu, Chen; Lu, Guoliang; Cheng, Bin; Zheng, Xiangwei
2017-07-01
Change point detection in periodic time series is highly desirable in many practical applications. We present a novel algorithm for this task, which includes two phases: 1) anomaly measurement: on the basis of a typical regression model, we propose a new computation method to measure anomalies in a time series that does not require any reference data from other measurement(s); 2) change detection: we introduce a new martingale test for detection that can be operated in an unsupervised and nonparametric way. We have conducted extensive experiments to systematically test our algorithm. The results suggest that our algorithm is directly applicable in many real-world change-point-detection applications.
Cross-beam coherence of infrasonic signals at local and regional ranges.
Alberts, W C Kirkpatrick; Tenney, Stephen M
2017-11-01
Signals collected by infrasound arrays require continuous analysis by skilled personnel or by automatic algorithms in order to extract useable information. Typical pieces of information gained by analysis of infrasonic signals collected by multiple sensor arrays are arrival time, line of bearing, amplitude, and duration. These can all be used, often with significant accuracy, to locate sources. A very important part of this chain is associating collected signals across multiple arrays. Here, a pairwise, cross-beam coherence method of signal association is described that allows rapid signal association for high signal-to-noise ratio events captured by multiple infrasound arrays at ranges exceeding 150 km. Methods, test cases, and results are described.
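The sketch below illustrates the pairwise-coherence association idea in its simplest form: compute the magnitude-squared coherence between two array beams and flag association when the band-averaged coherence is high. Beamforming and travel-time alignment are assumed to have already been done, and the two synthetic "beams" simply share a common component plus independent noise; the band and threshold are placeholders, not values from the paper.

```python
# Pairwise coherence sketch for associating an event across two infrasound arrays.
import numpy as np
from scipy.signal import coherence

fs = 20.0                          # Hz, a typical infrasound sampling rate
n = int(600 * fs)                  # 10 minutes of data
rng = np.random.default_rng(11)

common = rng.standard_normal(n)    # arrival seen by both arrays (already aligned)
beam_a = common + 0.5 * rng.standard_normal(n)
beam_b = common + 0.5 * rng.standard_normal(n)
unrelated = rng.standard_normal(n)   # a beam with no common signal

def band_coherence(x, y, fmin=0.5, fmax=2.0):
    f, coh = coherence(x, y, fs=fs, nperseg=1024)
    band = (f >= fmin) & (f <= fmax)
    return coh[band].mean()

print("coherence, same event:", round(band_coherence(beam_a, beam_b), 2))
print("coherence, unrelated :", round(band_coherence(beam_a, unrelated), 2))
# A simple association rule would threshold the band-averaged coherence.
```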
A Novel Method of Preparation of Inorganic Glasses by Microwave Irradiation
NASA Astrophysics Data System (ADS)
Vaidhyanathan, B.; Ganguli, Munia; Rao, K. J.
1994-12-01
Microwave heating is shown to provide an extremely facile and automatically temperature-controlled route to the synthesis of glasses. Glass-forming compositions of several traditional and novel glasses were melted in a kitchen microwave oven, typically within 5 min and quenched into glasses. This is only a fraction of the time required in normal glass preparation methods. The rapidity of melting minimizes undesirable features such as loss of components of the glass, variation of oxidation states of metal ions, and oxygen loss leading to reduced products in the glass such as metal particles. This novel procedure of preparation is applicable when at least one of the components of the glass-forming mixture absorbs microwaves.
Fibre optic connectors with high-return-loss performance
NASA Astrophysics Data System (ADS)
Knott, Michael P.; Johnson, R.; Cooke, K.; Longhurst, P. C.
1990-09-01
This paper describes the development of a single-mode fibre optic connector with high return-loss performance without the use of index matching. Partial reflection of incident light at a fibre optic connector interface is a recognised problem that can result in increased noise and waveform distortion. This is particularly important for video transmission in subscriber networks, which requires a high signal-to-noise ratio. A number of methods can be used to improve the return loss. The method described here uses a process which angles the connector endfaces. Measurements show that typical return losses of -55 dB can be achieved for an end angle of 6 degrees. Insertion loss results are also presented.
NASA Technical Reports Server (NTRS)
Rogallo, Vernon L; Yaggy, Paul F; Mccloud, John L , III
1956-01-01
A simplified procedure is shown for calculating the once-per-revolution oscillating aerodynamic thrust loads on propellers of tractor airplanes at zero yaw. The only flow field information required for the application of the procedure is a knowledge of the upflow angles at the horizontal center line of the propeller disk. Methods are presented whereby these angles may be computed without recourse to experimental survey of the flow field. The loads computed by the simplified procedure are compared with those computed by a more rigorous method and the procedure is applied to several airplane configurations which are believed typical of current designs. The results are generally satisfactory.
Deep Recurrent Neural Networks for Human Activity Recognition
Murad, Abdulmajid
2017-01-01
Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs. PMID:29113103
Deep Recurrent Neural Networks for Human Activity Recognition.
Murad, Abdulmajid; Pyun, Jae-Young
2017-11-06
Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
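A minimal LSTM-based activity classifier of the kind the paper builds on is sketched below in PyTorch. The sensor channel count, window length, class count, and training loop are placeholders on synthetic data; the unidirectional/bidirectional/cascaded DRNN variants and the variable-length handling of the paper are not reproduced.

```python
# Minimal LSTM activity classifier on synthetic body-worn sensor windows.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the last time step

torch.manual_seed(0)
x = torch.randn(32, 128, 6)               # 32 windows, 128 samples, 6 sensor axes
y = torch.randint(0, 5, (32,))            # synthetic activity labels

model = LSTMClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(50):                     # tiny overfitting demo on one batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print("final training loss:", float(loss))
```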
Micro Machining of Injection Mold Inserts for Fluidic Channel of Polymeric Biochips
Jung, Woo-Chul; Heo, Young-Moo; Yoon, Gil-Sang; Shin, Kwang-Ho; Chang, Sung-Ho; Kim, Gun-Hee; Cho, Myeong-Woo
2007-01-01
Recently, the polymeric micro-fluidic biochip, often called a LOC (lab-on-a-chip), has attracted attention as a cheap, rapid and simplified means of replacing existing biochemical laboratory work. With the development of MEMS technologies, it has become possible to form miniaturized lab functionalities on a chip. The micro-fluidic chips contain many micro-channels for the flow of sample and reagents, mixing, and detection tasks. Typical substrate materials for the chip are glass and polymers. Typical techniques for microfluidic chip fabrication utilize various micro-pattern-forming methods, such as wet etching, micro-contact printing, hot embossing, micro injection molding, LIGA, and micro powder blasting. In this study, to establish the basis for micro-pattern fabrication and mass production of polymeric micro-fluidic chips using an injection molding process, a micro-machining method was applied to form micro-channels on the LOC molds. A series of machining experiments using micro end-mills was performed to determine optimum machining conditions to improve the surface roughness and shape accuracy of the designed simplified micro-channels. The obtained conditions were used to machine the required mold inserts for the micro-channels using micro end-mills. Test injection processes using the machined molds and COC polymer were performed, and the results were investigated.
Quantifying electrical impacts on redundant wire insertion in 7nm unidirectional designs
NASA Astrophysics Data System (ADS)
Mohyeldin, Ahmed; Schroeder, Uwe Paul; Srinivasan, Ramya; Narisetty, Haritez; Malik, Shobhit; Madhavan, Sriram
2017-04-01
In nanometer-scale integrated circuits, via failures due to random defects are a well-known yield detractor, and via redundancy insertion is a common method to help enhance semiconductor yield. For the case of Self-Aligned Double Patterning (SADP), which might require unidirectional design layers as in some advanced technology nodes, the conventional methods of inserting redundant vias no longer work. This is because adding redundant vias conventionally requires adding metal shapes in the non-preferred direction, which would violate the SADP design constraints in that case. Therefore, such metal layers fabricated using unidirectional SADP require an alternative method for providing the needed redundancy. This paper proposes a post-layout Design for Manufacturability (DFM) redundancy insertion method tailored to the design requirements introduced by unidirectional metal layers. The proposed method adds redundant wires in the preferred direction - after searching for nearby vacant routing tracks - in order to provide redundant paths for electrical signals. This method opportunistically adds robustness against failures due to silicon defects without impacting area or incurring new design rule violations. Implementation details of this redundancy insertion method will be explained in this paper. One known challenge with similar DFM layout fixing methods is the possible introduction of undesired electrical impact, causing other unintentional failures in design functionality. In this paper, a study is presented to quantify the electrical impact of such a redundancy insertion scheme and to examine whether that electrical impact can be tolerated. The paper will show results to evaluate DFM insertion rates and the corresponding electrical impact for a given design utilization and maximum inserted wire length. Parasitic extraction and static timing analysis results will be presented. A typical digital design implemented using GLOBALFOUNDRIES 7nm technology is used for demonstration. The provided results can help evaluate such an extensive DFM insertion method from an electrical standpoint. Furthermore, the results could provide guidance on how to implement the proposed method of adding electrical redundancy such that intolerable electrical impacts could be avoided.
16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension
Code of Federal Regulations, 2010 CFR
2010-01-01
16 CFR Part 1512 (Commercial Practices; Consumer Product Safety Commission, Federal Hazardous Substances Act Regulations, Requirements for Bicycles), Figure 5: Typical Handbrake Actuator Showing Grip Dimension.
16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension
Code of Federal Regulations, 2011 CFR
2011-01-01
16 CFR Part 1512 (Commercial Practices; Consumer Product Safety Commission, Federal Hazardous Substances Act Regulations, Requirements for Bicycles), Figure 5: Typical Handbrake Actuator Showing Grip Dimension.
Collaborative Problem Solving in Young Typical Development and HFASD
ERIC Educational Resources Information Center
Kimhi, Yael; Bauminger-Zviely, Nirit
2012-01-01
Collaborative problem solving (CPS) requires sharing goals/attention and coordinating actions--all deficient in HFASD. Group differences were examined in CPS (HFASD/typical), with a friend versus with a non-friend. Participants included 28 HFASD and 30 typical children aged 3-6 years and their 58 friends and 58 non-friends. Groups were matched on…
16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension
Code of Federal Regulations, 2013 CFR
2013-01-01
16 CFR Part 1512 (Commercial Practices; Consumer Product Safety Commission, Federal Hazardous Substances Act Regulations, Requirements for Bicycles), Figure 5: Typical Handbrake Actuator Showing Grip Dimension.
16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension
Code of Federal Regulations, 2012 CFR
2012-01-01
16 CFR Part 1512 (Commercial Practices; Consumer Product Safety Commission, Federal Hazardous Substances Act Regulations, Requirements for Bicycles), Figure 5: Typical Handbrake Actuator Showing Grip Dimension.
16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension
Code of Federal Regulations, 2014 CFR
2014-01-01
16 CFR Part 1512 (Commercial Practices; Consumer Product Safety Commission, Federal Hazardous Substances Act Regulations, Requirements for Bicycles), Figure 5: Typical Handbrake Actuator Showing Grip Dimension.
14 CFR Appendix C to Part 1215 - Typical User Activity Timeline
Code of Federal Regulations, 2010 CFR
2010-01-01
14 CFR Part 1215 (Aeronautics and Space; Tracking and Data Relay Satellite System (TDRSS)), Appendix C: Typical User Activity Timeline, including milestones such as submitting general user requirements for the mission model roughly 3 years before launch (Ref. § 1215.109(c)).
Beck, H J; Birch, G F
2013-06-01
Stormwater contaminant loading estimates using event mean concentration (EMC), rainfall/runoff relationship calculations and computer modelling (Model for Urban Stormwater Improvement Conceptualisation, MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50 and ∼25 % relative standard deviation, respectively) calls the reliability of these methods into question. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 of 18); however, agreement was not frequent enough to statistically infer that these methods produced the same results. Great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
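For context, the EMC approach itself amounts to multiplying an event mean concentration by an estimated runoff volume. The sketch below shows that arithmetic with illustrative catchment values and plausible urban EMCs; none of the numbers are from the study.

```python
# Event-mean-concentration (EMC) loading sketch: annual load = EMC x runoff
# volume, with runoff = rainfall x area x runoff coefficient. Illustrative values.
annual_rainfall_mm = 1200.0
catchment_area_km2 = 5.0
runoff_coefficient = 0.65          # fraction of rain that becomes stormwater

runoff_m3 = (annual_rainfall_mm / 1000.0) * (catchment_area_km2 * 1e6) * runoff_coefficient

emc_mg_per_L = {"TSS": 150.0, "total N": 2.0, "Zn": 0.25}   # assumed urban EMCs

for analyte, emc in emc_mg_per_L.items():
    load_kg = emc * runoff_m3 * 1000.0 / 1e6    # mg/L * L -> mg, then mg -> kg
    print(f"{analyte}: {load_kg:,.0f} kg/year")
```

The sensitivity of such estimates to the assumed EMC and runoff coefficient is precisely the variability the study quantifies.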
Application of PLE for the determination of essential oil components from Thymus vulgaris L.
Dawidowicz, Andrzej L; Rado, Ewelina; Wianowska, Dorota; Mardarowicz, Marek; Gawdzik, Jan
2008-08-15
Essential-oil plants, due to their long presence in human history, their status in the culinary arts, and their use in medicine and perfume manufacture, are among the most frequently examined raw materials in scientific and industrial laboratories. Because of the large number of freshly cut, dried or frozen plant samples requiring the determination of essential oil amount and composition, a fast, safe, simple, efficient and highly automated sample preparation method is needed. Five sample preparation methods (steam distillation, extraction in the Soxhlet apparatus, supercritical fluid extraction, solid phase microextraction and pressurized liquid extraction) used for the isolation of aroma-active components from Thymus vulgaris L. are compared in the paper. The methods are mainly discussed with regard to the recovery of components which typically exist in essential oil isolated by steam distillation. According to the obtained data, PLE is the most efficient sample preparation method for determining the essential oil from the thyme herb. Although co-extraction of non-volatile ingredients is the main drawback of this method, it is characterized by the highest yield of essential oil components and the shortest extraction time required. Moreover, the relative peak amounts of essential oil components revealed by PLE are comparable with those obtained by steam distillation, which is recognized as the standard sample preparation method for the analysis of essential oils in aromatic plants.
Manufacturing PDMS micro lens array using spin coating under a multiphase system
NASA Astrophysics Data System (ADS)
Sun, Rongrong; Yang, Hanry; Rock, D. Mitchell; Danaei, Roozbeh; Panat, Rahul; Kessler, Michael R.; Li, Lei
2017-05-01
The development of micro lens arrays has garnered much interest due to the increased demand for miniaturized systems. Traditional methods for manufacturing micro lens arrays have several shortcomings. For example, they require expensive facilities and long lead times, and traditional lens materials (i.e. glass) are typically heavy, costly and difficult to manufacture. In this paper, we explore a method for manufacturing a polydimethylsiloxane (PDMS) micro lens array using a simple spin coating technique. The micro lens array is formed in an interfacial-tension-dominated system, and the influence of material properties and process parameters on the fabricated lens shape is examined. The lenses fabricated using this method show comparable optical properties—including surface finish and image quality—with a reduced cost and manufacturing lead time.
Yang, Ming; Allard, Lawrence F; Flytzani-Stephanopoulos, Maria
2013-03-13
We report a new method for stabilizing appreciable loadings (~1 wt %) of isolated gold atoms on titania and show that these catalyze the low-temperature water-gas shift reaction. The method combines a typical gold deposition/precipitation method with UV irradiation of the titania support suspended in ethanol. Dissociation of H2O on the thus-created Au-O-TiO(x) sites is facile. At higher gold loadings, nanoparticles are formed, but they were shown to add no further activity to the atomically bound gold on titania. Removal of this "excess" gold by sodium cyanide leaching leaves the activity intact and the atomically dispersed gold still bound on titania. The new materials may catalyze a number of other reactions that require oxidized active metal sites.
Integrated optics to improve resolution on multiple configuration
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Guo, Chunjie; Zhou, Liwei
2015-04-01
To reveal structure and improve imaging resolution, further technical requirements are proposed for the function and development of multiple-configuration systems. To go beyond the diffraction limit, smart structures are recommended as the most efficient and economical approach, used to improve system performance, especially signal-to-noise ratio and resolution. Integrated optics were considered in the selection and combined with a typical multiple configuration by means of simulation experiments. This methodology can change traditional design concepts and broaden the application space. Our calculations using the transfer-matrix method, together with the associated algorithms and full calculations, show the expected beam shaping through the system; in particular, the experimental results support our argument and will be reported in the presentation.
Comparison of texture synthesis methods for content generation in ultrasound simulation for training
NASA Astrophysics Data System (ADS)
Mattausch, Oliver; Ren, Elizabeth; Bajka, Michael; Vanhoey, Kenneth; Goksel, Orcun
2017-03-01
Navigation and interpretation of ultrasound (US) images require substantial expertise, the training of which can be aided by virtual-reality simulators. However, a major challenge in creating plausible simulated US images is the generation of realistic ultrasound speckle. Since typical ultrasound speckle exhibits many properties of Markov Random Fields, it is conceivable to use texture synthesis for generating plausible US appearance. In this work, we investigate popular classes of texture synthesis methods for generating realistic US content. In a user study, we evaluate their performance for reproducing homogeneous tissue regions in B-mode US images from small image samples of similar tissue and report the best-performing synthesis methods. We further show that regression trees can be used on speckle texture features to learn a predictor for US realism.
NASA Technical Reports Server (NTRS)
1972-01-01
The development of a low-profile flat conductor cable (FCC) connecting device and of FCC permanent splice methods is discussed. The design goal for the low-profile connecting device was to mate and unmate an FCC harness to a typical spacecraft component with a maximum height of 3/8 in. The results indicate that the design, fabrication, and processing of the low-profile connecting device are feasible and practical. Some redesign will be required to achieve the 3/8 in. goal. Also, failures were experienced subsequent to salt spray and humidity exposure. Five different FCC permanent splice methods were considered. Subsequent to evaluation of these five methods, two design concepts were chosen for development tests.
Assessment and Validation of Machine Learning Methods for Predicting Molecular Atomization Energies.
Hansen, Katja; Montavon, Grégoire; Biegler, Franziska; Fazli, Siamac; Rupp, Matthias; Scheffler, Matthias; von Lilienfeld, O Anatole; Tkatchenko, Alexandre; Müller, Klaus-Robert
2013-08-13
The accurate and reliable prediction of properties of molecules typically requires computationally intensive quantum-chemical calculations. Recently, machine learning techniques applied to ab initio calculations have been proposed as an efficient approach for describing the energies of molecules in their given ground-state structure throughout chemical compound space (Rupp et al. Phys. Rev. Lett. 2012, 108, 058301). In this paper we outline a number of established machine learning techniques and investigate the influence of the molecular representation on the methods' performance. The best methods achieve prediction errors of 3 kcal/mol for the atomization energies of a wide variety of molecules. Rationales for this performance improvement are given together with pitfalls and challenges when applying machine learning approaches to the prediction of quantum-mechanical observables.
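A common baseline in this line of work, following the cited Rupp et al. approach, is kernel ridge regression on a fixed-length molecular descriptor. The sketch below uses sorted Coulomb-matrix eigenvalues and random toy "molecules" purely for illustration; it makes no claim about the representations, data, or hyperparameters used in the paper.

    # Hedged sketch: kernel ridge regression on a Coulomb-matrix-style
    # descriptor; charges, geometries, and targets are random placeholders.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    def coulomb_eigenvalues(Z, R, size=8):
        """Sorted eigenvalues of the Coulomb matrix, zero-padded to a fixed length."""
        n = len(Z)
        M = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i == j:
                    M[i, j] = 0.5 * Z[i] ** 2.4
                else:
                    M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
        eig = np.sort(np.linalg.eigvalsh(M))[::-1]
        return np.pad(eig, (0, size - n))

    rng = np.random.default_rng(1)
    X, y = [], []
    for _ in range(100):                        # toy "molecules"
        n = rng.integers(3, 8)
        Z = rng.integers(1, 9, size=n).astype(float)
        R = rng.normal(size=(n, 3))
        X.append(coulomb_eigenvalues(Z, R))
        y.append(-Z.sum())                      # placeholder "atomization energy"

    model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=0.05)
    model.fit(np.vstack(X), np.array(y))
    print(model.predict(np.vstack(X[:3])))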
NASA Technical Reports Server (NTRS)
Evans, Keith D.; Demoz, Belay B.; Cadirola, Martin P.; Melfi, S. H.; Whiteman, David N.; Schwemmer, Geary K.; Starr, David OC.; Schmidlin, F. J.; Feltz, Wayne
2000-01-01
The NASA/Goddard Space Flight Center Scanning Raman Lidar has made measurements of water vapor and aerosols for almost ten years. Calibration of the water vapor data has typically been performed by comparison with another water vapor sensor, such as radiosondes. We present a new method for water vapor calibration that requires only low clouds and surface pressure and temperature measurements. A sensitivity study was performed, and the cloud base algorithm agrees with the radiosonde calibration to within 10-15%. Knowledge of the true atmospheric lapse rate is required to obtain more accurate cloud base temperatures. Analysis of water vapor and aerosol measurements made in the vicinity of Hurricane Bonnie is discussed.
Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows
NASA Astrophysics Data System (ADS)
Chen, Z.; Shu, C.; Tan, D.
2018-05-01
An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires less virtual memory, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit relative to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and the surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and of surface and groundwater modeling.
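As a rough illustration of the run-time arithmetic described above, computational demand equals run time times the number of runs divided by parallelization opportunities; the numbers below are hypothetical, not from the talk.

    # Hedged sketch: computational demand = run time * number of runs / parallelism.
    run_time_hours = 6.0      # one run of the demanding original model (assumed)
    parallel_slots = 20       # concurrent runs the available cluster allows (assumed)

    for label, n_runs in [("frugal / surrogate-building", 75),     # ~50-100 runs
                          ("brute-force global analysis", 10_000)]:
        demand_hours = run_time_hours * n_runs / parallel_slots
        print(f"{label}: {demand_hours:.0f} wall-clock hours")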
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Shadid, John N.; Tsuji, Paul H.
Here, this study explores the performance and scaling of a GMRES Krylov method employed as a smoother for an algebraic multigrid (AMG) preconditioned Newton-Krylov solution approach applied to a fully-implicit variational multiscale (VMS) finite element (FE) resistive magnetohydrodynamics (MHD) formulation. In this context a Newton iteration is used for the nonlinear system and a Krylov (GMRES) method is employed for the linear subsystems. The efficiency of this approach is critically dependent on the scalability and performance of the AMG preconditioner for the linear solutions, and the performance of the smoothers plays a critical role. Krylov smoothers are considered in an attempt to reduce the time and memory requirements of existing robust smoothers based on additive Schwarz domain decomposition (DD) with incomplete LU factorization solves on each subdomain. Three time dependent resistive MHD test cases are considered to evaluate the method. The results demonstrate that the GMRES smoother can be faster due to a decrease in the preconditioner setup time and a reduction in outer GMRESR solver iterations, and requires less memory (typically 35% less memory for the global GMRES smoother) than the DD ILU smoother.
State analysis requirements database for engineering complex embedded systems
NASA Technical Reports Server (NTRS)
Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.
2004-01-01
It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which captures system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.
Study for identification of Beneficial uses of Space (BUS). Volume 3: Appendices
NASA Technical Reports Server (NTRS)
1975-01-01
The quantification of required specimen(s) from space processing experiments, the typical EMI measurements and estimates of a typical RF source, and the integration of commercial payloads into Spacelab were considered.
Two-dimensional analytic weighting functions for limb scattering
NASA Astrophysics Data System (ADS)
Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.
2017-10-01
Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
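The numerical perturbation approach that the analytic weighting functions replace amounts to finite differencing the forward model, one extra model run per perturbed grid cell. A minimal sketch follows; the forward model here is a toy stand-in, not SASKTRAN-HR.

    # Hedged sketch: weighting functions by numerical perturbation,
    # W[:, j] ~ d(radiance) / d(concentration in grid cell j).
    import numpy as np

    def forward_model(x):
        """Stand-in radiative transfer model: radiances for a concentration profile x."""
        return np.exp(-0.1 * np.cumsum(x))            # toy attenuation along the path

    def perturbation_weighting_functions(x, rel_step=1e-3):
        y0 = forward_model(x)
        W = np.zeros((y0.size, x.size))
        for j in range(x.size):                        # one extra model call per cell
            xp = x.copy()
            dx = rel_step * max(abs(x[j]), 1e-12)
            xp[j] += dx
            W[:, j] = (forward_model(xp) - y0) / dx
        return W

    x = np.linspace(1.0, 0.1, 50)                      # toy concentration profile
    print(perturbation_weighting_functions(x).shape)   # (n_radiances, n_cells)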
Tower Based Load Measurements for Individual Pitch Control and Tower Damping of Wind Turbines
NASA Astrophysics Data System (ADS)
Kumar, A. A.; Hugues-Salas, O.; Savini, B.; Keogh, W.
2016-09-01
The cost of IPC has hindered adoption outside of Europe despite significant loading advantages for large wind turbines. In this work we present a method for applying individual pitch control (including for higher harmonics) using tower-top strain gauge feedback instead of blade-root strain gauge feedback. Tower-top strain gauges offer hardware savings of approximately 50%, in addition to the possibility of easier access for maintenance and installation, and require a less specialised skill-set than that required for applying strain gauges to composite blade roots. A further advantage is the possibility of using the same tower-top sensor array for tower damping control. This method is made possible by including a second order IPC loop in addition to the tower damping loop to reduce the typically dominating 3P content in tower-top load measurements. High-fidelity Bladed simulations show that the resulting turbine spectral characteristics from tower-top feedback IPC, and from the combination of tower-top IPC and damping loops, largely match those of blade-root feedback IPC and nacelle-velocity feedback damping. Lifetime weighted fatigue analysis shows that the method allows load reductions within 2.5% of traditional methods.
NASA Astrophysics Data System (ADS)
Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier
2017-12-01
Many applications in structural mechanics require the numerical solution of sequences of linear systems typically issued from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take into account mechanical constraints. The resulting matrices then exhibit a saddle point structure, and the iterative solution of such preconditioned linear systems is considered challenging. A popular strategy is then to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we consider updating an existing algebraic or application-based preconditioner, using specific available information that exploits the knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
Determination of Oxytetracycline from Salmon Muscle and Skin by Derivative Spectrophotometry.
Toral, M Inés; Sabay, Tamara; Orellana, Sandra L; Richter, Pablo
2015-01-01
A method was developed for the identification and quantification of oxytetracycline residues present in salmon muscle and skin using UV-Vis derivative spectrophotometry. With this method, it was possible to reduce the number of steps in the procedure typically required for instrumental analysis of a sample. The spectral variables, order of the derivative, scale factor, smoothing factor, and analytical wavelength were optimized using standard solutions of oxytetracycline dissolved in 900 mg/L oxalic acid in methanol. The matrix effect was significant; therefore, quantification for oxytetracycline residues was carried out using drug-free salmon muscle and skin samples fortified with oxytetracycline. The LOD and LOQ were found to be 271 and 903 μg/kg, respectively. The precision and accuracy of the method were validated using drug-free salmon muscle and skin tissues fortified at three different concentrations (8, 16, and 32 mg/kg) on 3 different days. The recoveries at all fortified concentrations were between 90 and 105%, and RSDs in all cases were less than 6.5%. This method can be used to screen out compliant samples and thereby reduce the number of suspect positive samples that will require further confirmatory analysis.
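The recovery and precision figures quoted above follow from the standard definitions of percent recovery and relative standard deviation; a minimal sketch with invented replicate values (not the study's data):

    # Hedged sketch: percent recovery and RSD for one fortified concentration,
    # using made-up replicate measurements.
    import statistics

    fortified_mg_per_kg = 16.0
    measured = [15.1, 16.4, 15.8, 16.9, 15.5]     # hypothetical replicates (mg/kg)

    recoveries = [100.0 * m / fortified_mg_per_kg for m in measured]
    rsd = 100.0 * statistics.stdev(measured) / statistics.mean(measured)

    print(f"mean recovery = {statistics.mean(recoveries):.1f}%")
    print(f"RSD = {rsd:.1f}%")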
An Overview of Computational Aeroacoustic Modeling at NASA Langley
NASA Technical Reports Server (NTRS)
Lockard, David P.
2001-01-01
The use of computational techniques in the area of acoustics is known as computational aeroacoustics and has shown great promise in recent years. Although an ultimate goal is to use computational simulations as a virtual wind tunnel, the problem is so complex that blind applications of traditional algorithms are typically unable to produce acceptable results. The phenomena of interest are inherently unsteady and cover a wide range of frequencies and amplitudes. Nonetheless, with appropriate simplifications and special care to resolve specific phenomena, currently available methods can be used to solve important acoustic problems. These simulations can be used to complement experiments, and often give much more detailed information than can be obtained in a wind tunnel. The use of acoustic analogy methods to inexpensively determine far-field acoustics from near-field unsteadiness has greatly reduced the computational requirements. A few examples of current applications of computational aeroacoustics at NASA Langley are given. There remains a large class of problems that require more accurate and efficient methods. Research to develop more advanced methods that are able to handle the geometric complexity of realistic problems using block-structured and unstructured grids are highlighted.
Metal- and additive-free photoinduced borylation of haloarenes.
Mfuh, Adelphe M; Schneider, Brett D; Cruces, Westley; Larionov, Oleg V
2017-03-01
Boronic acids and esters have critical roles in the areas of synthetic organic chemistry, molecular sensors, materials science, drug discovery, and catalysis. Many of the current applications of boronic acids and esters require materials with very low levels of transition metal contamination. Most of the current methods for the synthesis of boronic acids, however, require transition metal catalysts and ligands that must be removed via additional purification procedures. This protocol describes a simple, metal- and additive-free method of conversion of haloarenes directly to boronic acids and esters. This photoinduced borylation protocol does not require expensive and toxic metal catalysts or ligands, and it produces innocuous and easy-to-remove by-products. Furthermore, the reaction can be carried out on multigram scales in common-grade solvents without the need for reaction mixtures to be deoxygenated. The setup and purification steps are typically accomplished within 1-3 h. The reactions can be run overnight, and the protocol can be completed within 13-16 h. Two representative procedures that are described in this protocol provide details for preparation of a boronic acid (3-cyanophenylboronic acid) and a boronic ester (1,4-benzenediboronic acid bis(pinacol)ester). We also discuss additional details of the method that will be helpful in the application of the protocol to other haloarene substrates.
Climate change impact on growing degree day accumulation values
NASA Astrophysics Data System (ADS)
Bekere, Liga; Sile, Tija; Bethers, Uldis; Sennikovs, Juris
2015-04-01
A well-known and often used method to assess and forecast the plant growth cycle is the growing degree day (GDD) method, with different formulas used for the accumulation calculations. With this method the only factor that affects plant development is temperature, so with climate change, and therefore a change in temperature, the typical times of plant blooming or harvest can be expected to change. The goal of this study is to assess this change in the Northern Europe region, using strawberry bloom and harvest times as an example. The first part of this study required defining the current GDD amounts needed for strawberry bloom and harvest. This was done using temperature data from the Danish Meteorological Institute's (DMI) NWP model HIRLAM for the years 2010-2012 and general strawberry growth observations in Latvia. In this way we acquired an example amount of GDD required for strawberry blooming and harvest. To assess the change in the plant growth cycle we used regional climate models (RCM) from Euro-CORDEX. RCM temperature data for both past and future periods were analyzed and bias correction was carried out. Then the GDD calculation methodology was applied to the corrected temperature data, and the results showing the change in the strawberry growth cycle (bloom and harvest times) in Northern Europe were visualized.
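A common GDD accumulation formula averages the daily maximum and minimum temperatures and subtracts a base temperature, clipping negative contributions to zero. The sketch below illustrates the bookkeeping only; the base temperature, the bloom threshold, and the daily series are invented, not the values fitted from the HIRLAM data.

    # Hedged sketch: growing degree day (GDD) accumulation up to a bloom threshold.
    def daily_gdd(t_max, t_min, t_base=5.0):
        """Simple averaging formula; days below the base temperature contribute zero."""
        return max(0.0, (t_max + t_min) / 2.0 - t_base)

    # Hypothetical daily (t_max, t_min) series in deg C for one spring period.
    season = [(8, 2), (10, 3), (14, 6), (13, 5), (17, 8), (19, 10), (21, 11)]
    bloom_threshold = 30.0                  # assumed GDD sum needed for bloom

    total = 0.0
    for day, (t_max, t_min) in enumerate(season, start=1):
        total += daily_gdd(t_max, t_min)
        if total >= bloom_threshold:
            print(f"bloom threshold reached on day {day}")
            break
    print(f"accumulated GDD = {total:.1f}")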
Probabilistic Reinforcement Learning in Adults with Autism Spectrum Disorders
Solomon, Marjorie; Smith, Anne C.; Frank, Michael J.; Ly, Stanford; Carter, Cameron S.
2017-01-01
Background: Autism spectrum disorders (ASDs) can be conceptualized as disorders of learning; however, there have been few experimental studies taking this perspective. Methods: We examined the probabilistic reinforcement learning performance of 28 adults with ASDs and 30 typically developing adults on a task requiring learning relationships between three stimulus pairs consisting of Japanese characters with feedback that was valid with different probabilities (80%, 70%, and 60%). Both univariate and Bayesian state-space data analytic methods were employed. Hypotheses were based on the extant literature as well as on neurobiological and computational models of reinforcement learning. Results: Both groups learned the task after training. However, there were group differences in early learning in the first task block, where individuals with ASDs acquired the most frequently accurately reinforced stimulus pair (80%) comparably to typically developing individuals; exhibited poorer acquisition of the less frequently reinforced 70% pair as assessed by state-space learning curves; and outperformed typically developing individuals on the near-chance (60%) pair. Individuals with ASDs also demonstrated deficits in using positive feedback to exploit rewarded choices. Conclusions: Results support the contention that individuals with ASDs are slower learners. Based on neurobiology and on the results of computational modeling, one interpretation of this pattern of findings is that impairments are related to deficits in flexible updating of reinforcement history as mediated by the orbito-frontal cortex, with spared functioning of the basal ganglia. This hypothesis about the pathophysiology of learning in ASDs can be tested using functional magnetic resonance imaging. PMID:21425243
NASA Astrophysics Data System (ADS)
Islam, Ariful; Tedford, Des
2012-08-01
The smooth running of small and medium-sized manufacturing enterprises (SMEs) presents a significant challenge irrespective of the technological and human resources they may have at their disposal. SMEs continuously encounter daily internal and external undesirable events and unwanted setbacks to their operations that detract from their business performance. These are referred to as 'disturbances' in our research study. Among the disturbances, some are likely to create risks to the enterprises in terms of loss of production, manufacturing capability, human resources, market share, and, of course, economic losses. These are finally referred to as 'risk determinants' on the basis of their correlation with some risk indicators, which are linked to operational, occupational, and economic risks. To deal with these risk determinants effectively, SMEs need a systematic method of approach to identify and treat their potential effects, along with an appropriate set of tools. However, initially, a strategic approach is required to identify typical risk determinants and their linkage with potential business risks. In this connection, we conducted this study to explore the answer to the research question: what are the typical risk determinants encountered by SMEs? We carried out an empirical investigation with a multi-method research approach (a combination of a questionnaire-based mail survey involving 212 SMEs and five in-depth case studies) in New Zealand. This paper presents a set of typical internal and external risk determinants, which need special attention to be dealt with to minimize the operational risks of an SME.
Block iterative restoration of astronomical images with the massively parallel processor
NASA Technical Reports Server (NTRS)
Heap, Sara R.; Lindler, Don J.
1987-01-01
A method is described for algebraic image restoration capable of treating astronomical images. For a typical 500 x 500 image, direct algebraic restoration would require the solution of a 250,000 x 250,000 linear system. The block iterative approach is used to reduce the problem to solving 4900 linear systems of size 121 x 121. The algorithm was implemented on the Goddard Massively Parallel Processor, which can solve a 121 x 121 system in approximately 0.06 seconds. Examples are shown of the results for various astronomical images.
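The block-iterative idea of repeatedly solving many small subsystems instead of factoring one huge matrix can be illustrated with a block Gauss-Seidel sweep; the sizes below are tiny stand-ins for the 121 x 121 blocks, and the scheme is a generic example rather than the exact iteration used on the Massively Parallel Processor.

    # Hedged sketch: block Gauss-Seidel iteration, one small dense solve per block.
    import numpy as np

    def block_gauss_seidel(A, b, block, sweeps=50):
        n = b.size
        x = np.zeros(n)
        for _ in range(sweeps):
            for start in range(0, n, block):
                idx = slice(start, min(start + block, n))
                rhs = b[idx] - A[idx, :] @ x + A[idx, idx] @ x[idx]
                x[idx] = np.linalg.solve(A[idx, idx], rhs)   # small block solve
        return x

    rng = np.random.default_rng(2)
    n, block = 60, 12
    A = rng.normal(size=(n, n))
    A = A @ A.T + n * np.eye(n)          # well-conditioned SPD test matrix
    b = rng.normal(size=n)
    x = block_gauss_seidel(A, b, block)
    print(np.linalg.norm(A @ x - b))     # residual should be small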
Detection of picosecond electrical pulses using the intrinsic Franz–Keldysh effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lampin, J. F.; Desplanque, L.; Mollot, F.
2001-06-25
We report time-resolved measurements of ultrafast electrical pulses propagating on a coplanar transmission line using the intrinsic Franz–Keldysh effect. A low-temperature-grown GaAs layer deposited on a GaAs substrate allows generation and also detection of ps pulses via electroabsorption sampling (EAS). This all-optical method does not require any external sampling probe. A typical rise time of 1.1 ps has been measured. EAS is a good candidate for use in THz characterization of ultrafast devices. © 2001 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.
2012-10-01
We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.
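A common formulation of the differential analysis in these dynamic NFS methods averages the power spectrum of image differences over frame pairs at each lag; the plain CPU sketch below illustrates that step on toy data and does not reproduce the GPU data-management scheme of the paper.

    # Hedged sketch: differential image structure function on a toy image stack.
    import numpy as np

    def structure_function(stack, lag):
        """Average |FFT2(I(t+lag) - I(t))|^2 over all frame pairs with this lag."""
        diffs = stack[lag:] - stack[:-lag]
        return (np.abs(np.fft.fft2(diffs)) ** 2).mean(axis=0)

    rng = np.random.default_rng(3)
    stack = rng.normal(size=(64, 128, 128))    # (frames, ny, nx) toy data
    for lag in (1, 4, 16):
        print(lag, structure_function(stack, lag).mean())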
Recursive sequences in first-year calculus
NASA Astrophysics Data System (ADS)
Krainer, Thomas
2016-02-01
This article provides ready-to-use supplementary material on recursive sequences for a second-semester calculus class. It equips first-year calculus students with a basic methodical procedure based on which they can conduct a rigorous convergence or divergence analysis of many simple recursive sequences on their own without the need to invoke inductive arguments as is typically required in calculus textbooks. The sequences that are accessible to this kind of analysis are predominantly (eventually) monotonic, but also certain recursive sequences that alternate around their limit point as they converge can be considered.
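A worked example of the kind of sequence such an analysis targets, shown here with the standard textbook argument (boundedness and monotonicity, then passing to the limit) rather than the article's alternative procedure:

    % Hedged sketch of the standard argument for a_1 = 1, a_{n+1} = sqrt(2 + a_n).
    \[
      a_1 = 1, \qquad a_{n+1} = \sqrt{2 + a_n}.
    \]
    \[
      \text{Boundedness: if } a_n < 2 \text{ then } a_{n+1} = \sqrt{2 + a_n} < \sqrt{4} = 2.
    \]
    \[
      \text{Monotonicity: } a_{n+1} - a_n
        = \frac{(2 - a_n)(1 + a_n)}{\sqrt{2 + a_n} + a_n} > 0
        \quad \text{for } 0 < a_n < 2.
    \]
    \[
      \text{Hence } a_n \to L \text{ with } L = \sqrt{2 + L}
        \;\Rightarrow\; L^2 - L - 2 = 0 \;\Rightarrow\; L = 2.
    \]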
Rapid regulation of nuclear proteins by rapamycin-induced translocation in fission yeast
Ding, Lin; Laor, Dana; Weisman, Ronit; Forsburg, Susan L
2014-01-01
Genetic analysis of protein function requires a rapid means of inactivating the gene under study. Typically this exploits temperature sensitive mutations, or promoter shut-off techniques. We report the adaptation to Schizosaccharomyces pombe of the Anchor Away technique, originally designed in budding yeast (Haruki et al., 2008a). This method relies on a rapamycin-mediated interaction between the FRB and FKBP12 binding domains, to relocalize nuclear proteins of interest to the cytoplasm. We demonstrate a rapid nuclear depletion of abundant proteins as proof-of-principle. PMID:24733494
Supercritical Fluid Technologies to Fabricate Proliposomes.
Falconer, James R; Svirskis, Darren; Adil, Ali A; Wu, Zimei
2015-01-01
Proliposomes are stable drug carrier systems designed to form liposomes upon addition of an aqueous phase. In this review, current trends in the use of supercritical fluid (SCF) technologies to prepare proliposomes are discussed. SCF methods are used in pharmaceutical research and industry to address limitations associated with conventional methods of pro/liposome fabrication. The SCF solvent methods of proliposome preparation are eco-friendly (known as green technology) and, along with the SCF anti-solvent methods, could be advantageous over conventional methods; enabling better design of particle morphology (size and shape). The major hurdles of SCF methods include poor scalability to industrial manufacturing which may result in variable particle characteristics. In the case of SCF anti-solvent methods, another hurdle is the reliance on organic solvents. However, the amount of solvent required is typically less than that used by the conventional methods. Another hurdle is that most of the SCF methods used have complicated manufacturing processes, although once the setup has been completed, SCF technologies offer a single-step process in the preparation of proliposomes compared to the multiple steps required by many other methods. Furthermore, there is limited research into how proliposomes will be converted into liposomes for the end-user, and how such a product can be prepared reproducibly in terms of vesicle size and drug loading. These hurdles must be overcome and with more research, SCF methods, especially where the SCF acts as a solvent, have the potential to offer a strong alternative to the conventional methods to prepare proliposomes.
Method of making gas diffusion layers for electrochemical cells
Frisk, Joseph William; Boand, Wayne Meredith; Larson, James Michael
2002-01-01
A method is provided for making a gas diffusion layer for an electrochemical cell comprising the steps of: a) combining carbon particles and one or more surfactants in a typically aqueous vehicle to make a preliminary composition, typically by high shear mixing; b) adding one or more highly fluorinated polymers to said preliminary composition by low shear mixing to make a coating composition; and c) applying the coating composition to an electrically conductive porous substrate, typically by a low shear coating method.
An improved UHPLC-UV method for separation and quantification of carotenoids in vegetable crops.
Maurer, Megan M; Mein, Jonathan R; Chaudhuri, Swapan K; Constant, Howard L
2014-12-15
Carotenoid identification and quantitation are critical for the development of improved-nutrition plant varieties. Industrial analysis of carotenoids is typically carried out on multiple crops with potentially thousands of samples per crop, placing critical needs on speed and broad utility of the analytical methods. Current chromatographic methods for carotenoid analysis have had limited industrial application due to their low throughput, requiring up to 60 min for complete separation of all compounds. We have developed an improved UHPLC-UV method that resolves all major carotenoids found in broccoli (Brassica oleracea L. var. italica), carrot (Daucus carota), corn (Zea mays), and tomato (Solanum lycopersicum). The chromatographic method is completed in 13.5 min allowing for the resolution of the 11 carotenoids of interest, including the structural isomers lutein/zeaxanthin and α-/β-carotene. Additional minor carotenoids have also been separated and identified with this method, demonstrating the utility of this method across major commercial food crops. Copyright © 2014 Elsevier Ltd. All rights reserved.
Creation of digital contours that approach the characteristics of cartographic contours
Tyler, Dean J.; Greenlee, Susan K.
2012-01-01
The capability to easily create digital contours using commercial off-the-shelf (COTS) software has existed for decades. Out-of-the-box raw contours are suitable for many scientific applications without pre- or post-processing; however, cartographic applications typically require additional improvements. For example, raw contours generally require smoothing before placement on a map. Cartographic contours must also conform to certain spatial/logical rules; for example, contours may not cross waterbodies. The objective was to create contours that match as closely as possible the cartographic contours produced by manual methods on the 1:24,000-scale, 7.5-minute Topographic Map series. This report outlines the basic approach, describes a variety of problems that were encountered, and discusses solutions. Many of the challenges described herein were the result of imperfect input raster elevation data and the requirement to have the contours integrated with hydrographic features from the National Hydrography Dataset (NHD).
Kweon, Meera; Slade, Louise; Levine, Harry; Gannon, Diane
2014-01-01
The many differences between cookie- and cracker-baking are discussed and described in terms of the functionality, and functional requirements, of the major biscuit ingredients--flour and sugar. Both types of products are similar in their major ingredients, but different in their formulas and processes. One of the most important and consequential differences between traditional cracker and cookie formulas is sugar (i.e., sucrose) concentration: usually lower than 30% in a typical cracker formula and higher than 30% in a typical cookie formula. Gluten development is facilitated in lower-sugar cracker doughs during mixing and sheeting; this is a critical factor linked to baked-cracker quality. Therefore, soft wheat flours with greater gluten quality and strength are typically preferred for cracker production. In contrast, the concentrated aqueous sugar solutions existing in high-sugar cookie doughs generally act as an antiplasticizer, compared with water alone, so gluten development during dough mixing and starch gelatinization/pasting during baking are delayed or prevented in most cookie systems. Traditional cookies and crackers are low-moisture baked goods, which are desirably made from flours with low water absorption [low water-holding capacity (WHC)], and low levels of damaged starch and water-soluble pentosans (i.e., water-accessible arabinoxylans). Rheological (e.g., alveography) and baking tests are often used to evaluate flour quality for baked-goods applications, but the solvent retention capacity (SRC) method (AACC 56-11) is a better diagnostic tool for predicting the functional contribution of each individual flour functional component, as well as the overall functionality of flours for cookie- and/or cracker-baking.
Who Governs Federally Qualified Health Centers?
Wright, Brad
2017-01-01
To make them more responsive to their community’s needs, federally qualified health centers (FQHCs) are required to have a governing board comprised of at least 51% consumers. However, the extent to which consumer board members actually resemble the typical FQHC patient has not been assessed, which according to the political science literature on representation may influence the board’s ability to represent the community. This mixed-methods study uses four years of data from the Health Resources and Services Administration, combined with Uniform Data System, Bureau of Labor Statistics, and Area Resource File data to describe and identify factors associated with the composition of FQHC governing boards. Board members are classified into one of three groups: non-consumers, non-representative consumers (who do not resemble the typical FQHC patient), and representative consumers (who resemble the typical FQHC patient). The analysis finds that a minority of board members are representative consumers, and telephone interviews with a stratified random sample of 30 FQHC board members confirmed the existence of significant socioeconomic gaps between consumer board members and FQHC patients. This may make FQHCs less responsive to the needs of the predominantly low-income communities they serve. PMID:23052684
Why Engineers Should Consider Formal Methods
NASA Technical Reports Server (NTRS)
Holloway, C. Michael
1997-01-01
This paper presents a logical analysis of a typical argument favoring the use of formal methods for software development, and suggests an alternative argument that is simpler and stronger than the typical one.
Two-Color Nonlinear Spectroscopy for the Rapid Acquisition of Coherent Dynamics.
Senlik, S Seckin; Policht, Veronica R; Ogilvie, Jennifer P
2015-07-02
There has been considerable recent interest in the observation of coherent dynamics in photosynthetic systems by 2D electronic spectroscopy (2DES). In particular, coherences that persist during the "waiting time" in a 2DES experiment have been attributed to electronic, vibrational, and vibronic origins in various systems. The typical method for characterizing these coherent dynamics requires the acquisition of 2DES spectra as a function of waiting time, essentially a 3DES measurement. Such experiments require lengthy data acquisition times that degrade the signal-to-noise of the recorded coherent dynamics. We present a rapid and high signal-to-noise pulse-shaping-based approach for the characterization of coherent dynamics. Using chlorophyll a, we demonstrate that this method retains much of the information content of a 3DES measurement and provides insight into the physical origin of the coherent dynamics, distinguishing between ground and excited state coherences. It also enables high resolution determination of ground and excited state frequencies.
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.
1973-01-01
The NASTRAN computer program is capable of executing on three different types of computers: (1) the CDC 6000 series, (2) the IBM 360-370 series, and (3) the Univac 1100 series. A typical activity requiring transfer of data between dissimilar computers is the analysis of a large structure such as the space shuttle by substructuring. Models of portions of the vehicle which have been analyzed by subcontractors using their computers must be integrated into a model of the complete structure by the prime contractor on his computer. Presently the transfer of NASTRAN matrices or tables between two different types of computers is accomplished by punched cards or a magnetic tape containing card images. These methods of data transfer do not satisfy the requirements for intercomputer data transfer associated with a substructuring activity. To provide a more satisfactory transfer of data, two new programs, RDUSER and WRTUSER, were created.
The rotating spectrometer: Biotechnology for cell separations
NASA Technical Reports Server (NTRS)
Noever, David A.
1991-01-01
An instrument for biochemical studies, called the rotating spectrometer, separates previously inseparable cell cultures. The rotating spectrometer is intended for use in pharmacological studies which require fractional splitting of heterogeneous cell cultures based on cell morphology and swimming behavior. As a method to separate and concentrate cells in free solution, the rotating method requires active organism participation and can effectively split the large class of organisms known to form spontaneous patterns. Examples include the biochemical star, an organism called Tetrahymena pyriformis. Following focusing in a rotating frame, the separation is accomplished using different radial dependencies of concentrated algal and protozoan species. The focusing itself appears as concentric rings and arises from the coupling between swimming direction and Coriolis forces. A dense cut is taken at varying radii, and extraction is replenished at an inlet. Unlike standard separation and concentrating techniques such as filtration or centrifugation, the instrument is able to separate motile from immotile fractions. For a single pass, typical split efficiencies can reach 200 to 300 percent compared to the inlet concentration.
The rotating spectrometer: New biotechnology for cell separations
NASA Technical Reports Server (NTRS)
Noever, David A.; Matsos, Helen C.
1990-01-01
An instrument for biochemical studies, called the rotating spectrometer, separates previously inseparable cell cultures. The rotating spectrometer is intended for use in pharmacological studies which require fractional splitting of heterogeneous cell cultures based on cell morphology and swimming behavior. As a method to separate and concentrate cells in free solution, the rotating method requires active organism participation and can effectively split the large class of organisms known to form spontaneous patterns. Examples include the biochemical star, an organism called Tetrahymena pyriformis. Following focusing in a rotated frame, the separation is accomplished using different radial dependencies of concentrated algal and protozoan species. The focusing itself appears as concentric rings and arises from the coupling between swimming direction and Coriolis forces. A dense cut is taken at varying radii and extraction is replenished at an inlet. Unlike standard separation and concentrating techniques such as filtration or centrifugation, the instrument is able to separate motile from immotile fractions. For a single pass, typical split efficiencies can reach 200 to 300 percent compared to the inlet concentration.
A trajectory design method via target practice for air-breathing hypersonic vehicle
NASA Astrophysics Data System (ADS)
Kong, Xue; Yang, Ming; Ning, Guodong; Wang, Songyan; Chao, Tao
2017-11-01
There are strong coupling interactions between the vehicle aerodynamics and the scramjet, and this kind of aircraft is also subject to multiple restrictions, such as the allowed range and variation of dynamic pressure, airflow, and fuel. On the one hand, we need to balance the requirements of vehicle maneuverability against stable operation of the scramjet; on the other hand, we need to harmonize the changes in altitude and velocity. By describing the aircraft's index system of climbing capability, acceleration capability, and the degree of coupling, this paper proposes a rapid design method based on target practice. The method aims to reduce the coupling degree: it suppresses the coupling between the aircraft and the engine in the navigation phase and satisfies multiple restriction conditions, leaving some control margin and creating good conditions for control implementation. According to the simulations, this method can be used for multiple typical flight missions such as climbing, acceleration, or both.
Model-based estimation and control for off-axis parabolic mirror alignment
NASA Astrophysics Data System (ADS)
Fang, Joyce; Savransky, Dmitry
2018-02-01
This paper proposes a model-based estimation and control method for off-axis parabolic mirror (OAP) alignment. Current automated optical alignment systems typically require additional wavefront sensors. We propose a self-aligning method using only focal plane images captured by the existing camera. Image processing methods and Karhunen-Loève (K-L) decomposition are used to extract measurements for the observer in the closed-loop control system. Our system has linear dynamics in the state transition and a nonlinear mapping from the state to the measurement. An iterative extended Kalman filter (IEKF) is shown to accurately predict the unknown states, and nonlinear observability is discussed. A linear-quadratic regulator (LQR) is applied to correct the misalignments. The method is validated experimentally on an optical bench with a commercial OAP. We conducted 100 tests to demonstrate the consistency between runs.
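As a minimal illustration of the LQR correction step mentioned above, using SciPy's discrete-time Riccati solver; the state-transition, input, and weighting matrices are placeholders, not the identified OAP alignment dynamics.

    # Hedged sketch: discrete-time LQR gain for a toy two-state alignment model.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])        # assumed state transition
    B = np.array([[0.0],
                  [0.1]])             # assumed actuator influence
    Q = np.diag([10.0, 1.0])          # penalize misalignment states
    R = np.array([[0.5]])             # penalize actuator effort

    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # control law u = -K x

    x = np.array([1.0, 0.0])          # initial misalignment
    for _ in range(50):
        x = (A - B @ K) @ x
    print(x)                          # state driven toward zero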
Moscoso del Prado Martín, Fermín
2013-12-01
I introduce the Bayesian assessment of scaling (BAS), a simple but powerful Bayesian hypothesis contrast methodology that can be used to test hypotheses on the scaling regime exhibited by a sequence of behavioral data. Rather than comparing parametric models, as typically done in previous approaches, the BAS offers a direct, nonparametric way to test whether a time series exhibits fractal scaling. The BAS provides a simpler and faster test than do previous methods, and the code for making the required computations is provided. The method also enables testing of finely specified hypotheses on the scaling indices, something that was not possible with the previously available methods. I then present 4 simulation studies showing that the BAS methodology outperforms the other methods used in the psychological literature. I conclude with a discussion of methodological issues on fractal analyses in experimental psychology. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Method for quick thermal tolerancing of optical systems
NASA Astrophysics Data System (ADS)
Werschnik, J.; Uhlendorf, K.
2016-09-01
Optical systems for lithography (projection lenses), inspection (micro-objectives), or laser material processing usually have tight specifications regarding focus and wave-front stability. The same is true for the field-dependent properties. Projection lenses in particular have tight specifications on field curvature, magnification, and distortion. Unwanted heating, from either internal or external sources, leads to undesired changes in the above properties. In this work we show an elegant and fast method to analyze the thermal sensitivity using ZEMAX. The key point of this method is using the thermal changes of the lens data from the multi-configuration editor as the starting point for a (standard) tolerance analysis. Knowing the sensitivity, we can either define requirements on the environment or use it to systematically improve the thermal behavior of the lens. We demonstrate this method for a typical projection lens for which we optimized the thermal field curvature to a minimum.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
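The weight-finding step can be posed as a small linear program: given a library of reconstructed state vectors and a query point, find nonnegative weights that sum to one and minimize the approximation error. The SciPy formulation below (L1 error, toy data) is an illustration of that idea, not the authors' implementation.

    # Hedged sketch: barycentric-style weights for a query point via linear
    # programming, minimizing the L1 approximation error.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)
    m, d = 12, 5                       # library size and phase-space dimension
    X = rng.normal(size=(m, d))        # library of state vectors (rows)
    y = X[:4].mean(axis=0)             # query point (inside the hull here)

    # Variables z = [w_1..w_m, t_1..t_d]; minimize sum(t)
    # subject to |X^T w - y| <= t, sum(w) = 1, w >= 0, t >= 0.
    c = np.concatenate([np.zeros(m), np.ones(d)])
    A_ub = np.block([[X.T, -np.eye(d)],
                     [-X.T, -np.eye(d)]])
    b_ub = np.concatenate([y, -y])
    A_eq = np.concatenate([np.ones(m), np.zeros(d)])[None, :]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], method="highs")
    w = res.x[:m]
    print(w.round(3), np.abs(X.T @ w - y).max())   # weights and residual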
Automatic face recognition in HDR imaging
NASA Astrophysics Data System (ADS)
Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.
2014-05-01
The gaining popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping methods for appropriate visualization on conventional and inexpensive LDR displays. These visualization methods can produce completely different visualizations, raising several issues of privacy intrusion. In fact, some visualization methods result in a perceptual recognition of the individuals, while others do not even show any identity. Although perceptual recognition might be possible, a natural question is how computer-based recognition will perform on tone-mapped images. In this paper, we present a study in which automatic face recognition using sparse representation is tested with images that result from common tone mapping operators applied to HDR images. Its ability for face identity recognition is described. Furthermore, typical LDR images are used for the face recognition training.
Core Training in Low Back Disorders: Role of the Pilates Method.
Joyce, Andrew A; Kotler, Dana H
The Pilates method is a system of exercises developed by Joseph Pilates, which emphasizes recruitment and strengthening of the core muscles, flexibility, and breathing, to promote stability and control of movement. Its focus bears similarity to current evidence-based exercise programs for low back disorders. Spinal stability is a function of three interdependent systems, osseoligamentous, muscular, and neural control; exercise addresses both the muscular and neural function. The "core" typically refers to the muscular control required to maintain functional stability. Prior research has highlighted the importance of muscular strength and recruitment, with debate over the importance of individual muscles in the wider context of core control. Though developed long before the current evidence, the Pilates method is relevant in this setting and clearly relates to current evidence-based exercise interventions. Current literature supports the Pilates method as a treatment for low back disorders, but its benefit when compared with other exercise is less clear.
Orbiter Kapton wire operational requirements and experience
NASA Technical Reports Server (NTRS)
Peterson, R. V.
1994-01-01
The agenda of this presentation includes the Orbiter wire selection requirements, the Orbiter wire usage, fabrication and test requirements, typical wiring installations, Kapton wire experience, NASA Kapton wire testing, summary, and backup data.
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
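Stripped of the matrix source coding and sparse-matrix-transform compression that the paper contributes, the precompute-and-apply idea reduces to storing a linear reconstruction operator once and reusing it; the sketch below uses the standard linear-Gaussian MAP operator with toy matrices.

    # Hedged sketch: noniterative MAP reconstruction by precomputing the linear
    # operator H once, so each new measurement needs only H @ y.
    import numpy as np

    rng = np.random.default_rng(5)
    n_meas, n_vox = 80, 60
    A = rng.normal(size=(n_meas, n_vox))        # toy forward model
    noise_prec = 1.0 / 0.05 ** 2                # assumed measurement precision
    R = np.eye(n_vox)                           # toy quadratic prior

    # Offline: form and store the MAP reconstruction matrix.
    H = np.linalg.solve(noise_prec * A.T @ A + R, noise_prec * A.T)

    # Online: each reconstruction is a single matrix-vector product.
    x_true = rng.normal(size=n_vox)
    y = A @ x_true + rng.normal(scale=0.05, size=n_meas)
    x_map = H @ y
    print(np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))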
Partition of unity finite element method for quantum mechanical materials calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pask, J. E.; Sukumar, N.
The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite-differences (FD) and finite-elements (FE) have partially addressed both resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: excessive number of degrees of freedom (basis functions) needed to achieve the required accuracies. In this paper, we present a real-space partition of unity finite element (PUFE) method to solve the Kohn–Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution process using partition-of-unity enrichment techniques in finite element analysis. The method developed herein is completely general, applicable to metals and insulators alike, and particularly efficient for deep, localized potentials, as occur in calculations at extreme conditions of pressure and temperature. Full self-consistent Kohn–Sham calculations are presented for LiH, involving light atoms, and CeAl, involving heavy atoms with large numbers of atomic-orbital enrichments. We find that the new PUFE approach attains the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the PW method. As a result, we compute the equation of state of LiH and show that the computed lattice constant and bulk modulus are in excellent agreement with reference PW results, while requiring an order of magnitude fewer degrees of freedom to obtain.
Partition of unity finite element method for quantum mechanical materials calculations
Pask, J. E.; Sukumar, N.
2016-11-09
The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite-differences (FD) and finite-elements (FE) have partially addressed both resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: excessive number of degrees of freedom (basis functions) needed to achieve the required accuracies. In this paper, we present a real-space partition of unity finite element (PUFE) method to solve the Kohn–Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution process using partition-of-unity enrichment techniques in finite element analysis. The method developed herein is completely general, applicable to metals and insulators alike, and particularly efficient for deep, localized potentials, as occur in calculations at extreme conditions of pressure and temperature. Full self-consistent Kohn–Sham calculations are presented for LiH, involving light atoms, and CeAl, involving heavy atoms with large numbers of atomic-orbital enrichments. We find that the new PUFE approach attains the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the PW method. As a result, we compute the equation of state of LiH and show that the computed lattice constant and bulk modulus are in excellent agreement with reference PW results, while requiring an order of magnitude fewer degrees of freedom to obtain.
A simple headspace equilibration method for measuring dissolved methane
Magen, C; Lapham, L.L.; Pohlman, John W.; Marshall, Kristin N.; Bosman, S.; Casso, Michael; Chanton, J.P.
2014-01-01
Dissolved methane concentrations in the ocean are close to equilibrium with the atmosphere. Because methane is only sparingly soluble in seawater, measuring it without contamination is challenging for samples collected and processed in the presence of air. Several methods for analyzing dissolved methane are described in the literature, yet none has included a thorough assessment of the method yield, contamination issues during collection, transport, and storage, and the effect of temperature changes and preservative. Previous extraction methods transfer methane from water to gas by either a "sparge and trap" or a "headspace equilibration" technique. The gas is then analyzed for methane by gas chromatography. Here, we revisit the headspace equilibration technique and describe a simple, inexpensive, and reliable method to measure methane in fresh and seawater, regardless of concentration. Within the range of concentrations typically found in surface seawaters (2-1000 nmol L-1), the yield of the method nears 100% of what is expected from solubility calculations following the addition of a known amount of methane. In addition to being sensitive (detection limit of 0.1 ppmv, or 0.74 nmol L-1), this method requires less than 10 min per sample, and does not use highly toxic chemicals. It can be conducted with minimum materials and does not require the use of a gas chromatograph at the collection site. It can therefore be used in various remote working environments and conditions.
Rapid extraction of image texture by co-occurrence using a hybrid data structure
NASA Astrophysics Data System (ADS)
Clausi, David A.; Zhao, Yongping
2002-07-01
Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive because the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when the statistics are applied. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers because, to achieve acceptable computational speeds, the list must be kept sorted. An improvement on the GLCLL is to utilize a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% (σ = 3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
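The gain from sparse storage is easy to see in a short sketch. The following Python analogue (not the authors' C implementation) uses a hash map (dict) in the role of the hybrid hash-table/linked-list structure, so only non-zero co-occurring pairs are stored and visited when the statistics are applied.

import numpy as np

def cooccurrence_features(window, dx=1, dy=0, levels=16):
    """Texture features from grey-level co-occurrence probabilities,
    storing only the non-zero pairs in a dict (hash table)."""
    img = (window.astype(float) / window.max() * (levels - 1)).astype(int)
    counts = {}
    rows, cols = img.shape
    for i in range(rows - dy):
        for j in range(cols - dx):
            pair = (img[i, j], img[i + dy, j + dx])
            counts[pair] = counts.get(pair, 0) + 1
    total = sum(counts.values())
    contrast = energy = entropy = 0.0
    for (a, b), c in counts.items():          # statistics touch non-zero entries only
        p = c / total
        contrast += (a - b) ** 2 * p
        energy += p ** 2
        entropy -= p * np.log(p)
    return {"contrast": contrast, "energy": energy, "entropy": entropy}

window = np.random.randint(0, 256, size=(25, 25))
print(cooccurrence_features(window))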
Application of the SNoW machine learning paradigm to a set of transportation imaging problems
NASA Astrophysics Data System (ADS)
Paul, Peter; Burry, Aaron M.; Wang, Yuheng; Kozitsky, Vladimir
2012-01-01
Machine learning methods have been successfully applied to image object classification problems where there is clear distinction between classes and where a comprehensive set of training samples and ground truth are readily available. The transportation domain is an area where machine learning methods are particularly applicable, since the classification problems typically have well defined class boundaries and, due to high traffic volumes in most applications, massive amounts of roadway data are available. Though these classes tend to be well defined, the particular image noise and variations can be challenging. Another challenge is the extremely high accuracy typically required in most traffic applications. Incorrect assignment of fines or tolls due to imaging mistakes is not acceptable. For the front seat vehicle occupancy detection problem, classification amounts to determining whether one face (driver only) or two faces (driver + passenger) are detected in the front seat of a vehicle on a roadway. For automatic license plate recognition, the classification problem is a type of optical character recognition problem involving multi-class classification. The SNoW machine learning classifier using local SMQT features is shown to be successful in these two transportation imaging applications.
Han, Songshan; Jiao, Zongxia; Yao, Jianyong; Shang, Yaoxing
2014-09-01
An electro-hydraulic load simulator (EHLS) is a typical case of a torque system with strong external disturbances from hydraulic motion systems. A new velocity synchronizing compensation strategy is proposed in this paper to eliminate motion disturbances, based on theoretical and experimental analysis of a structure invariance method and the traditional velocity synchronizing compensation controller (TVSM). This strategy uses only the servo-valve control signal of the motion system and the torque feedback of the torque system, which avoids the velocity and acceleration signals required by the structure invariance method and achieves more accurate velocity synchronizing compensation under large loading conditions than a TVSM. To facilitate implementation in engineering practice, selection rules for the compensation parameters are proposed that do not rely on accurate knowledge of structural parameters. This paper presents comparison data for an EHLS under various typical operating conditions using three controllers, i.e., a closed-loop proportional integral derivative (PID) controller, a TVSM, and the proposed improved velocity synchronizing controller. Experiments confirm that the new strategy performs well against motion disturbances, improves tracking accuracy, and is a more appropriate choice for engineering applications.
NASA Astrophysics Data System (ADS)
Chan, Chun-Kai; Loh, Chin-Hsiung; Wu, Tzu-Hsiu
2015-04-01
In civil engineering, health monitoring and damage detection are typically carried out using a large number of sensors. Most methods require global measurements to extract the properties of the structure. However, some sensors, such as LVDTs, cannot be used because of in situ limitations, so the global deformation remains unknown. An experiment is used to demonstrate the proposed algorithms: a one-story, two-bay reinforced concrete frame under weak and strong seismic excitation. In this paper, signal processing techniques and nonlinear identification are applied to the measured seismic response of reinforced concrete structures subjected to different levels of earthquake excitation. Both modal-based and signal-based system identification and feature extraction techniques are used to study the nonlinear inelastic response of the RC frame, using either input and output response data or output-only measurements. The signal-based damage identification methods include enhanced time-frequency analysis of the acceleration responses and the estimation of permanent deformation directly from acceleration response data. Finally, local deformation measurements from a dense optical tracker are also used to quantify the damage of the RC frame structure.
Shape optimization techniques for musical instrument design
NASA Astrophysics Data System (ADS)
Henrique, Luis; Antunes, Jose; Carvalho, Joao S.
2002-11-01
The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in the number of function evaluations they require. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as searched variables, the system geometry is modeled in terms of truncated series of orthogonal space functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of searched variables and has the potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
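A minimal Python sketch of the parameterization idea, assuming a hypothetical modal solver bar_eigenfrequencies (for example, a finite-element model of the bar) that is not given here: the geometry is expressed as a truncated cosine series and the global search runs over the amplitude coefficients only.

import numpy as np
from scipy.optimize import dual_annealing

N_TERMS = 6
TARGETS = np.array([440.0, 1760.0, 3520.0])   # desired partials in Hz (1:4:8 tuning)

def thickness_profile(coeffs, x):
    """Bar thickness h(x), x in [0, 1], built from a truncated cosine series."""
    h = np.full_like(x, coeffs[0])
    for k, a_k in enumerate(coeffs[1:], start=1):
        h += a_k * np.cos(np.pi * k * x)
    return np.clip(h, 5e-3, 40e-3)            # keep the shape physically admissible

def objective(coeffs):
    x = np.linspace(0.0, 1.0, 200)
    freqs = bar_eigenfrequencies(thickness_profile(coeffs, x))  # hypothetical solver
    return np.sum((freqs[:len(TARGETS)] - TARGETS) ** 2)

# Search over the series amplitudes rather than over many local thickness values:
bounds = [(5e-3, 40e-3)] + [(-5e-3, 5e-3)] * (N_TERMS - 1)
# result = dual_annealing(objective, bounds, maxiter=200)   # uncomment with a real solver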
Manual fire suppression methods on typical machinery space spray fires
NASA Astrophysics Data System (ADS)
Carhart, H. W.; Leonard, J. T.; Budnick, E. K.; Ouellette, R. J.; Shanley, J. H., Jr.
1990-07-01
A series of tests was conducted to evaluate the effectiveness of Aqueous Film Forming Foam (AFFF), potassium bicarbonate powder (PKP) and Halon 1211, alone and in various combinations, in extinguishing spray fires. The sprays were generated by JP-5 jet fuel issuing from an open sounding tube, an open petcock, a leaking flange, or a slit pipe, and contacting an ignition source. The results indicate that typical fuel spray fires, such as those simulated in this series, are very severe. Flame heights ranged from 6.1 m (20 ft) for the slit pipe to 15.2 m (50 ft) for the sounding tube scenario. These large flame geometries were accompanied by heat release rates of 6 MW to greater than 50 MW, and hazardous thermal radiation levels in the near-field environment, up to 9.1 m (30 ft) away. Successful suppression of these fires requires both a significant reduction in flame radiation and delivery of a suppression agent to shielded areas. Of the nine suppression methods tested, the 95 gpm AFFF hand line and the hand line in conjunction with PKP were particularly effective in reducing the radiant flux.
Modernizing Earth and Space Science Modeling Workflows in the Big Data Era
NASA Astrophysics Data System (ADS)
Kinter, J. L.; Feigelson, E.; Walker, R. J.; Tino, C.
2017-12-01
Modeling is a major aspect of Earth and space science research. The development of numerical models of the Earth system, planetary systems or astrophysical systems is essential to linking theory with observations. Optimal use of observations that are quite expensive to obtain and maintain typically requires data assimilation that involves numerical models. In the Earth sciences, models of the physical climate system are typically used for data assimilation, climate projection, and inter-disciplinary research, spanning applications from analysis of multi-sensor data sets to decision-making in climate-sensitive sectors with applications to ecosystems, hazards, and various biogeochemical processes. In space physics, most models are from first principles, require considerable expertise to run and are frequently modified significantly for each case study. The volume and variety of model output data from modeling Earth and space systems are rapidly increasing and have reached a scale where human interaction with data is prohibitively inefficient. A major barrier to progress is that modeling workflows are not treated by practitioners as a design problem. Existing workflows have been created by a slow accretion of software, typically based on undocumented, inflexible scripts haphazardly modified by a succession of scientists and students not trained in modern software engineering methods. As a result, existing modeling workflows suffer from an inability to onboard new datasets into models; an inability to keep pace with accelerating data production rates; and irreproducibility, among other problems. These factors are creating an untenable situation for those conducting and supporting Earth system and space science. Improving modeling workflows requires investments in hardware, software and human resources. This paper describes the critical path issues that must be targeted to accelerate modeling workflows, including script modularization, parallelization, and automation in the near term, and longer term investments in virtualized environments for improved scalability, tolerance for lossy data compression, novel data-centric memory and storage technologies, and tools for peer reviewing, preserving and sharing workflows, as well as fundamental statistical and machine learning algorithms.
NASA Astrophysics Data System (ADS)
Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen
2018-04-01
Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. An L1-norm data-fidelity term and a total variation (TV) regularizer are commonly combined to form a popular regularization model. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts leading to image quality degradation. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced in place of TV to eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.
A study of numerical methods for hyperbolic conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Leveque, R. J.; Yee, H. C.
1988-01-01
The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
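The splitting approach is easy to sketch for a scalar model problem in Python: advect with an upwind step, then advance the source as an ODE in each cell. The cubic relaxation term below, pushing the solution toward the states 0 and 1, is representative of the kind of stiff source used in such studies rather than the exact equation of the report.

import numpy as np

def source(u, mu):
    # stiff relaxation toward the stable states u = 0 and u = 1
    return -mu * u * (u - 1.0) * (u - 0.5)

def split_step(u, dt, dx, mu, n_sub=50):
    # 1) advection step: first-order upwind for u_t + u_x = 0 on a periodic domain
    u = u - dt / dx * (u - np.roll(u, 1))
    # 2) source step: integrate du/dt = source(u) with small explicit sub-steps,
    #    standing in for the implicit ODE treatments discussed above
    h = dt / n_sub
    for _ in range(n_sub):
        u = u + h * source(u, mu)
    return u

nx, mu = 200, 1000.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where(x < 0.3, 1.0, 0.0)               # a step that should move at unit speed
dx = x[1] - x[0]
dt = 0.5 * dx                                 # CFL-limited advection step
for _ in range(200):
    u = split_step(u, dt, dx, mu)
# With mu*dt large, the smeared front is driven to the wrong equilibrium and can be
# captured in the wrong place, illustrating the spurious propagation speeds noted above.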
V/STOL propulsion control analysis: Phase 2, task 5-9
NASA Technical Reports Server (NTRS)
1981-01-01
Typical V/STOL propulsion control requirements were derived for transition between vertical and horizontal flight using the General Electric RALS (Remote Augmented Lift System) concept. Steady-state operating requirements were defined for a typical Vertical-to-Horizontal transition and for a typical Horizontal-to-Vertical transition. Control mode requirements were established and multi-variable regulators developed for individual operating conditions. Proportional/Integral gain schedules were developed and were incorporated into a transition controller with capabilities for mode switching and manipulated variable reassignment. A non-linear component-level transient model of the engine was developed and utilized to provide a preliminary check-out of the controller logic. An inlet and nozzle effects model was developed for subsequent incorporation into the engine model and an aircraft model was developed for preliminary flight transition simulations. A condition monitoring development plan was developed and preliminary design requirements established. The Phase 1 long-range technology plan was refined and restructured toward the development of a real-time high fidelity transient model of a supersonic V/STOL propulsion system and controller for use in a piloted simulation program at NASA-Ames.
Barall, Michael
2009-01-01
We present a new finite-element technique for calculating dynamic 3-D spontaneous rupture on an earthquake fault, which can reduce the required computational resources by a factor of six or more, without loss of accuracy. The grid-doubling technique employs small cells in a thin layer surrounding the fault. The remainder of the modelling volume is filled with larger cells, typically two or four times as large as the small cells. In the resulting non-conforming mesh, an interpolation method is used to join the thin layer of smaller cells to the volume of larger cells. Grid-doubling is effective because spontaneous rupture calculations typically require higher spatial resolution on and near the fault than elsewhere in the model volume. The technique can be applied to non-planar faults by morphing, or smoothly distorting, the entire mesh to produce the desired 3-D fault geometry. Using our FaultMod finite-element software, we have tested grid-doubling with both slip-weakening and rate-and-state friction laws, by running the SCEC/USGS 3-D dynamic rupture benchmark problems. We have also applied it to a model of the Hayward fault, Northern California, which uses realistic fault geometry and rock properties. FaultMod implements fault slip using common nodes, which represent motion common to both sides of the fault, and differential nodes, which represent motion of one side of the fault relative to the other side. We describe how to modify the traction-at-split-nodes method to work with common and differential nodes, using an implicit time stepping algorithm.
Preconditioned conjugate gradient wave-front reconstructors for multiconjugate adaptive optics
NASA Astrophysics Data System (ADS)
Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.
2003-09-01
Multiconjugate adaptive optics (MCAO) systems with 104-105 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wave-front control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 104 actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10-2 Hz, i.e., 4-5 orders of magnitude lower than the typical 103 Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborated multiresolution preconditioner.
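The inner solve itself is a short loop. Below is a generic matrix-free preconditioned conjugate gradient sketch in Python; the preconditioner is passed in as a callable so a block symmetric Gauss-Seidel sweep or a multigrid cycle could slot in, and the diagonal (Jacobi) stand-in shown is only for the toy example.

import numpy as np

def pcg(apply_A, b, apply_Minv, x0=None, tol=1e-6, maxiter=200):
    """Preconditioned conjugate gradient for A x = b with A symmetric positive
    definite; apply_A and apply_Minv are matrix-free operators."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - apply_A(x)
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxiter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, k

# Toy usage with a Jacobi preconditioner standing in for BSGS or multigrid:
n = 500
A = np.diag(np.linspace(1.0, 100.0, n)) + 0.01 * np.ones((n, n))
b = np.random.rand(n)
x, iters = pcg(lambda v: A @ v, b, lambda r: r / np.diag(A))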
Choi, Jung-Pyung; Weil, Kenneth Scott
2016-11-01
Methods of aluminizing the surface of a metal substrate. The methods of the present invention do not require establishment of a vacuum or a reducing atmosphere, as is typically necessary. Accordingly, aluminization can occur in the presence of oxygen, which greatly simplifies and reduces processing costs by allowing deposition of the aluminum coating to be performed, for example, in air. Embodiments of the present invention can be characterized by applying a slurry that includes a binder and powder granules containing aluminum to the metal substrate surface. Then, in a combined step, a portion of the aluminum is diffused into the substrate and a portion of the aluminum is oxidized by heating the slurry to a temperature greater than the melting point of the aluminum in an oxygen-containing atmosphere.
Packialakshmi, R M; Usha, R
2011-12-01
Vernonia yellow vein virus (VeYVV) is a distinct monopartite begomovirus associated with a satellite DNA β. After constructing dimers of both DNA A and DNA β in binary vectors, a number of infection methods were attempted. However, only a modified stem-prick method produced up to 83% infection in the natural host Vernonia cinerea, thus fulfilling Koch's postulates. The presence of the viral DNA in the agroinfected plants was confirmed by rolling circle amplification (RCA), followed by Southern hybridization. DNA β induces typical symptoms of Vernonia yellow vein disease (VeYVD) when co-agroinoculated with the begomovirus into Vernonia and also leads to the systemic accumulation of DNA A. VeYVV represents a new member of the emerging group of monopartite begomoviruses requiring a satellite component for symptom induction.
A comparison of approaches for finding minimum identifying codes on graphs
NASA Astrophysics Data System (ADS)
Horan, Victoria; Adachi, Steve; Bak, Stanley
2016-05-01
In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes this research approach difficult using standard brute-force search on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored: a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and satisfiability modulo theories (SMT) with corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
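For intuition, on very small graphs a minimum identifying code can be found by brute force: a vertex subset C is identifying if every closed neighbourhood, intersected with C, is non-empty and distinct from all others. The Python sketch below (using networkx) does exactly that, and also makes plain why exhaustive search becomes hopeless at realistic problem sizes.

from itertools import combinations
import networkx as nx

def min_identifying_code(G):
    """Smallest C such that N[v] ∩ C is non-empty and unique for every vertex v."""
    closed = {v: set(G[v]) | {v} for v in G}       # closed neighbourhoods
    nodes = list(G)
    for size in range(1, len(nodes) + 1):          # exponential in the worst case
        for C in combinations(nodes, size):
            Cset = set(C)
            signatures = [frozenset(closed[v] & Cset) for v in nodes]
            if all(signatures) and len(set(signatures)) == len(nodes):
                return Cset
    return None                                    # e.g. graphs with twin vertices admit no code

# A 6-cycle: prints a minimum identifying code of three vertices, e.g. {0, 2, 4}
print(min_identifying_code(nx.cycle_graph(6)))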
Computing diffusivities from particle models out of equilibrium
NASA Astrophysics Data System (ADS)
Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia
2018-04-01
A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1972-01-01
A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
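The optimization itself is easy to imitate. The Python sketch below (not the DOPEX code) uses an assumed exponential dose-thickness relation in each principal direction, a quadratic penalty for the dose constraints, and a plain steepest-descent update; all layer data, gains, and step sizes are invented placeholders for a toy problem.

import numpy as np

# Assumed model: dose_i = D0_i * exp(-sum_j mu_ij * t_j); weight = sum_j rho_j * t_j
D0 = np.array([5.0e3, 2.0e3])                  # unshielded dose rates, two directions
mu = np.array([[0.9, 0.3, 0.5],                # attenuation coefficients per layer (1/cm)
               [0.7, 0.4, 0.6]])
rho_area = np.array([11.3, 1.0, 7.8])          # density*area factor per layer (kg/cm)
limits = np.array([10.0, 10.0])                # dose constraints

def weight(t):
    return rho_area @ t

def doses(t):
    return D0 * np.exp(-mu @ t)

def penalized(t, k=10.0):
    viol = np.maximum(doses(t) - limits, 0.0)  # quadratic penalty on violated doses
    return weight(t) + k * np.sum(viol ** 2)

def grad(f, t, h=1e-5):                        # central-difference gradient
    g = np.zeros_like(t)
    for i in range(t.size):
        e = np.zeros_like(t); e[i] = h
        g[i] = (f(t + e) - f(t - e)) / (2 * h)
    return g

t = np.array([5.0, 5.0, 5.0])                  # initial thicknesses (cm)
for _ in range(4000):                          # steepest-descent iterations
    t = np.maximum(t - 5e-4 * grad(penalized, t), 0.0)
print(t, weight(t), doses(t))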
Method for fabricating beryllium-based multilayer structures
Skulina, Kenneth M.; Bionta, Richard M.; Makowiecki, Daniel M.; Alford, Craig S.
2003-02-18
Beryllium-based multilayer structures and a process for fabricating beryllium-based multilayer mirrors, useful in the wavelength region greater than the beryllium K-edge (111 Å or 11.1 nm). The process includes alternating sputter deposition of beryllium and a metal, typically from the fifth row of the periodic table, such as niobium (Nb), molybdenum (Mo), ruthenium (Ru), and rhodium (Rh). The process includes not only the method of sputtering the materials, but the industrial hygiene controls for safe handling of beryllium. The mirrors made in accordance with the process may be utilized in soft x-ray and extreme-ultraviolet projection lithography, which requires mirrors of high reflectivity (>60%) for x-rays in the range of 60-140 Å (6.0-14.0 nm).
NASA Technical Reports Server (NTRS)
Thelen, Brian J.; Paxman, Richard G.
1994-01-01
The method of phase diversity has been used in the context of incoherent imaging to estimate jointly an object that is being imaged and phase aberrations induced by atmospheric turbulence. The method requires a parametric model for the phase-aberration function. Typically, the parameters are coefficients to a finite set of basis functions. Care must be taken in selecting a parameterization that properly balances accuracy in the representation of the phase-aberration function with stability in the estimates. It is well known that over parameterization can result in unstable estimates. Thus a certain amount of model mismatch is often desirable. We derive expressions that quantify the bias and variance in object and aberration estimates as a function of parameter dimension.
A fast image simulation algorithm for scanning transmission electron microscopy.
Ophus, Colin
2017-01-01
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f^4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
Ultrafast Pulse Sequencing for Fast Projective Measurements of Atomic Hyperfine Qubits
NASA Astrophysics Data System (ADS)
Ip, Michael; Ransford, Anthony; Campbell, Wesley
2015-05-01
Projective readout of quantum information stored in atomic hyperfine structure typically uses state-dependent CW laser-induced fluorescence. This method requires an often sophisticated imaging system to spatially filter out the background CW laser light. We present an alternative approach that instead uses simple pulse sequences from a mode-locked laser to effect the same state-dependent excitations in less than 1 ns. The resulting atomic fluorescence occurs in the dark, allowing the placement of non-imaging detectors right next to the atom to improve the qubit state detection efficiency and speed. We also discuss methods of Doppler cooling with mode-locked lasers for trapped ions, where the creation of the necessary UV light is often difficult with CW lasers.
Computational problems and signal processing in SETI
NASA Technical Reports Server (NTRS)
Deans, Stanley R.; Cullers, D. K.; Stauduhar, Richard
1991-01-01
The Search for Extraterrestrial Intelligence (SETI), currently being planned at NASA, will require that an enormous amount of data (on the order of 10 exp 11 distinct signal paths for a typical observation) be analyzed in real time by special-purpose hardware. Even though the SETI system design is not based on maximum entropy and Bayesian methods (partly due to the real-time processing constraint), it is expected that enough data will be saved to be able to apply these and other methods off line where computational complexity is not an overriding issue. Interesting computational problems that relate directly to the system design for processing such an enormous amount of data have emerged. Some of these problems are discussed, along with the current status on their solution.
NASA Technical Reports Server (NTRS)
Wang, P. K. C.; Hadaegh, F. Y.
1996-01-01
In modeling micromachined deformable mirrors with electrostatic actuators whose gap spacings are of the same order of magnitude as those of the surface deformations, it is necessary to use nonlinear models for the actuators. In this paper, we consider micromachined deformable mirrors modeled by a membrane or plate equation with nonlinear electrostatic actuator characteristics. Numerical methods for computing the mirror deformation due to given actuator voltages and the actuator voltages required for producing the desired deformations at the actuator locations are presented. The application of the proposed methods to circular deformable mirrors whose surfaces are modeled by elastic membranes is discussed in detail. Numerical results are obtained for a typical circular micromachined mirror with electrostatic actuators.
28 CFR 61.5 - Typical classes of action.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... These classes are: actions normally requiring environmental impact statements (EIS), actions normally not requiring assessments or EIS, and actions normally requiring assessments but not necessarily EIS. ... (1) Actions normally requiring EIS. None, except as noted in the appendices to this part. (2) Actions normally...
NASA Technical Reports Server (NTRS)
Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.
2015-01-01
Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP), in collaboration with the Behavioral Health and Performance (BHP) Element, is conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within that volume. NASA is looking for innovative methods to unobtrusively collect NHV data without impacting crew time. Data required includes metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments methods for collecting such data exist yet many are obtrusive and require significant post-processing. Example technologies used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multiple camera filmography. However due to constraints of space operations many such methods are infeasible, such as inertial tracking systems which typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems which are large and require extensive calibration. However multiple technologies have not yet been applied to space operations for these explicit purposes. Two of these include 3-Dimensional Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems which allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).
Moelleken, Jörg; Gassner, Christian; Lingke, Sabine; Tomaschek, Simone; Tyshchuk, Oksana; Lorenz, Stefan; Mølhøj, Michael
2017-01-01
The determination of the binding strength of immunoglobulins (IgGs) to targets can be influenced by avidity when the targets are soluble di- or multimeric proteins, or associated with cell surfaces, including surfaces introduced from heterogeneous assays. However, for the understanding of the contribution of a second drug-to-target binding site in molecular design, or for ranking of monovalent binders during lead identification, affinity-based assessment of the binding strength is required. Typically, monovalent binders like antigen-binding fragments (Fabs) are generated by proteolytic cleavage with papain, which often results in a combination of under- and over-digestion, and requires specific optimization and chromatographic purification of the desired Fabs. Alternatively, the Fabs are produced by recombinant approaches. Here, we report a lean approach for the functional assessment of human IgG1s during lead identification based on an in-solution digestion with the GingisKHAN™ protease, generating a homogenous pool of intact Fabs and Fcs and enabling direct assaying of the Fab in the digestion mixture. The digest with GingisKHAN™ is highly specific and quantitative, does not require much optimization, and the protease does not interfere with methods typically applied for lead identification, such as surface plasmon resonance or cell-based assays. GingisKHAN™ is highly suited to differentiate between affinity- and avidity-driven binding of human IgG1 monoclonal and bispecific antibodies during lead identification. PMID:28805498
Automated Boundary Conditions for Wind Tunnel Simulations
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee
2018-01-01
Computational fluid dynamic (CFD) simulations of models tested in wind tunnels require a high level of fidelity and accuracy, particularly for the purposes of CFD validation efforts. Considerable effort is required to properly characterize the physical geometry of the wind tunnel and to recreate the correct flow conditions inside it. The typical trial-and-error effort used to determine the boundary condition values for a particular tunnel configuration is time and computer-resource intensive. This paper describes a method for calculating and updating the back pressure boundary condition in wind tunnel simulations by using a proportional-integral-derivative controller. The controller methodology and equations are discussed, and simulations using the controller to set a tunnel Mach number in the NASA Langley 14- by 22-Foot Subsonic Tunnel are demonstrated.
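A minimal sketch of that closed-loop update in Python, assuming a hypothetical run_iterations(p_back, n) wrapper that advances the flow solver n iterations at the current back pressure and returns the test-section Mach number; the gains shown are illustrative, not the values used for the 14- by 22-Foot Subsonic Tunnel.

def tune_back_pressure(run_iterations, mach_target, p_back,
                       kp=5.0e3, ki=5.0e2, kd=0.0,
                       n_per_update=200, n_updates=50, tol=1e-4):
    """PID update of the outflow back pressure (Pa) driving the computed
    test-section Mach number toward mach_target."""
    integral, prev_err = 0.0, 0.0
    for _ in range(n_updates):
        mach = run_iterations(p_back, n_per_update)   # advance the CFD solution
        err = mach_target - mach
        integral += err
        derivative = err - prev_err
        prev_err = err
        # Raising the back pressure lowers the test-section Mach number,
        # so a positive error (Mach too low) must reduce the back pressure.
        p_back -= kp * err + ki * integral + kd * derivative
        if abs(err) < tol:
            break
    return p_back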
African Primary Care Research: writing a research report.
Couper, Ian; Mash, Bob
2014-06-06
Presenting a research report is an important way of demonstrating one's ability to conduct research and is a requirement of most research-based degrees. Although known by various names across academic institutions, the structure required is mostly very similar, being based on the Introduction, Methods, Results, Discussion format of scientific articles. This article offers some guidance on the process of writing, aimed at helping readers to start and to continue their writing, and at assisting them in presenting a report that is received positively by their readers, including examiners. It also details the typical components of the research report, providing some guidelines for each, as well as the pitfalls to avoid. This article is part of a series on African Primary Care Research that aims to build capacity for research, particularly at a Master's level.
Spatially extended hybrid methods: a review
2018-01-01
Many biological and physical systems exhibit behaviour at multiple spatial, temporal or population scales. Multiscale processes provide challenges when they are to be simulated using numerical techniques. While coarser methods such as partial differential equations are typically fast to simulate, they lack the individual-level detail that may be required in regions of low concentration or small spatial scale. However, to simulate at such an individual level throughout a domain and in regions where concentrations are high can be computationally expensive. Spatially coupled hybrid methods provide a bridge, allowing for multiple representations of the same species in one spatial domain by partitioning space into distinct modelling subdomains. Over the past 20 years, such hybrid methods have risen to prominence, leading to what is now a very active research area across multiple disciplines including chemistry, physics and mathematics. There are three main motivations for undertaking this review. Firstly, we have collated a large number of spatially extended hybrid methods and presented them in a single coherent document, while comparing and contrasting them, so that anyone who requires a multiscale hybrid method will be able to find the most appropriate one for their need. Secondly, we have provided canonical examples with algorithms and accompanying code, serving to demonstrate how these types of methods work in practice. Finally, we have presented papers that employ these methods on real biological and physical problems, demonstrating their utility. We also consider some open research questions in the area of hybrid method development and the future directions for the field. PMID:29491179
I'll take that to go: Big data bags and minimal identifiers for exchange of large, complex datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chard, Kyle; D'Arcy, Mike; Heavner, Benjamin D.
Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a description of the contents of the dataset in a concise and unambiguous form. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshaling and permitting errors of omission and commission because dataset members are not explicitly specified. We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing. We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.
Korir, Geoffrey; Karam, P Andrew
2018-06-11
In the event of a significant radiological release in a major urban area where a large number of people reside, it is inevitable that radiological screening and dose assessment must be conducted. Lives may be saved if an emergency response plan and radiological screening method are established for use in such cases. Thousands to tens of thousands of people might present themselves with some levels of external contamination and/or the potential for internal contamination. Each of these individuals will require varying degrees of radiological screening, and those with a high likelihood of internal and/or external contamination will require radiological assessment to determine the need for medical attention and decontamination. This sort of radiological assessment typically requires skilled health physicists, but there are insufficient numbers of health physicists in any city to perform this function for large populations, especially since many (e.g., those at medical facilities) are likely to be engaged at their designated institutions. The aim of this paper is therefore to develop and describe the technical basis for a novel, scoring-based methodology that can be used by non-health physicists for performing radiological assessment during such radiological events.
Patterned Growth of Carbon Nanotubes or Nanofibers
NASA Technical Reports Server (NTRS)
Delzeit, Lance D.
2004-01-01
A method and apparatus for the growth of carbon nanotubes or nanofibers in a desired pattern has been invented. The essence of the method is to grow the nanotubes or nanofibers by chemical vapor deposition (CVD) onto a patterned catalyst supported by a substrate. The figure schematically depicts salient aspects of the method and apparatus in a typical application. A substrate is placed in a chamber that contains both ion-beam sputtering and CVD equipment. The substrate can be made of any of a variety of materials that include several forms of silicon or carbon, and selected polymers, metals, ceramics, and even some natural minerals and similar materials. Optionally, the substrate is first coated with a noncatalytic metal layer (which could be a single layer or could comprise multiple different sublayers) by ion-beam sputtering. The choice of metal(s) and thickness(es) of the first layer (if any) and its sublayers (if any) depends on the chemical and electrical properties required for subsequent deposition of the catalyst and the subsequent CVD of the carbon nanotubes. A typical first-sublayer metal is Pt, Pd, Cr, Mo, Ti, W, or an alloy of two or more of these elements. A typical metal for the second sublayer or for an undivided first layer is Al at a thickness .1 nm or Ir at a thickness .5 nm. Proper choice of the metal for a second sublayer of a first layer makes it possible to use a catalyst that is chemically incompatible with the substrate. In the next step, a mask having holes in the desired pattern is placed over the coated substrate. The catalyst is then deposited on the coated substrate by ion-beam sputtering through the mask. Optionally, the catalyst could be deposited by a technique other than sputtering and/or patterned by use of photolithography, electron- beam lithography, or another suitable technique. The catalytic metal can be Fe, Co, Ni, or an alloy of two or more of these elements, deposited to a typical thickness in the range from 0.1 to 20 nm.
Lightweight Adaptation of Classifiers to Users and Contexts: Trends of the Emerging Domain
Vildjiounaite, Elena; Gimel'farb, Georgy; Kyllönen, Vesa; Peltola, Johannes
2015-01-01
Intelligent computer applications need to adapt their behaviour to contexts and users, but conventional classifier adaptation methods require long data collection and/or training times. Therefore classifier adaptation is often performed as follows: at design time application developers define typical usage contexts and provide reasoning models for each of these contexts, and then at runtime an appropriate model is selected from available ones. Typically, definition of usage contexts and reasoning models heavily relies on domain knowledge. However, in practice many applications are used in so diverse situations that no developer can predict them all and collect for each situation adequate training and test databases. Such applications have to adapt to a new user or unknown context at runtime just from interaction with the user, preferably in fairly lightweight ways, that is, requiring limited user effort to collect training data and limited time of performing the adaptation. This paper analyses adaptation trends in several emerging domains and outlines promising ideas, proposed for making multimodal classifiers user-specific and context-specific without significant user efforts, detailed domain knowledge, and/or complete retraining of the classifiers. Based on this analysis, this paper identifies important application characteristics and presents guidelines to consider these characteristics in adaptation design. PMID:26473165
Passive solar design strategies: Remodeling guidelines for conserving energy at home
NASA Astrophysics Data System (ADS)
The idea of passive solar is simple, but applying it effectively does require information and attention to the details of design and construction. Some passive solar techniques are modest and low-cost, and require only small changes in remodeler's typical practice. At the other end of the spectrum, some passive solar systems can almost eliminate a house's need for purchased heating (and in some cases, cooling) energy - but probably at a relatively high first cost. In between are a broad range of energy-conserving passive solar techniques. Whether or not they are cost-effective, practical, and attractive enough to offer a market advantage to any individual remodeler depends on very specific factors such as local costs, climate, and market characteristics. Passive Solar Design Strategies: Remodeling Guidelines For Conserving Energy At Home is written to help give remodelers the information they need to make these decisions. Passive Solar Design Strategies is a package in three basic parts: the guidelines contain information about passive solar techniques and how they work, and provides specific examples of systems which will save various percentages of energy; the worksheets offer a simple, fill-in-the-blank method to pre-evaluate the performance of a specific design; and the worked example demonstrates how to complete the worksheets for a typical residence.
High Efficiency Microwave Power Amplifier: From the Lab to Industry
NASA Technical Reports Server (NTRS)
Sims, William Herbert, III; Bell, Joseph L. (Technical Monitor)
2001-01-01
Since the beginnings of space travel, various microwave power amplifier designs have been employed. These included Class-A, -B, and -C bias arrangements. However, a shared limitation of these topologies is the inherently high total input power consumption associated with the generation of radio frequency (RF)/microwave power. The power amplifier has always been the largest drain on the limited power available on a spacecraft. Typically, the conversion efficiency of a microwave power amplifier is 10 to 20%. For a typical microwave power amplifier of 20 watts, input DC power of at least 100 watts is required. Such a large demand for input power suggests that a better method of RF/microwave power generation is required. The price paid for using a linear amplifier where high linearity is unnecessary includes higher initial and operating costs, lower DC-to-RF conversion efficiency, high power consumption, higher power dissipation and the accompanying need for higher capacity heat removal means, and an amplifier that is more prone to parasitic oscillation. The first use of a higher efficiency mode of power generation was described by Baxandall in 1959. This higher efficiency mode, Class-D, is achieved through distinct switching techniques that reduce the switching, conduction, and gate drive losses of a given transistor.
Review: Feeding conserved forage to horses: recent advances and recommendations.
Harris, P A; Ellis, A D; Fradinho, M J; Jansson, A; Julliand, V; Luthersson, N; Santos, A S; Vervuert, I
2017-06-01
The horse is a non-ruminant herbivore adapted to eating plant-fibre or forage-based diets. Some horses are stabled for most or the majority of the day with limited or no access to fresh pasture and are fed preserved forage typically as hay or haylage and sometimes silage. This raises questions with respect to the quality and suitability of these preserved forages (considering production, nutritional content, digestibility as well as hygiene) and required quantities. Especially for performance horses, forage is often replaced with energy dense feedstuffs which can result in a reduction in the proportion of the diet that is forage based. This may adversely affect the health, welfare, behaviour and even performance of the horse. In the past 20 years a large body of research work has contributed to a better and deeper understanding of equine forage needs and the physiological and behavioural consequences if these are not met. Recent nutrient requirement systems have incorporated some, but not all, of this new knowledge into their recommendations. This review paper amalgamates recommendations based on the latest understanding in forage feeding for horses, defining forage types and preservation methods, hygienic quality, feed intake behaviour, typical nutrient composition, digestion and digestibility as well as health and performance implications. Based on this, consensual applied recommendations for feeding preserved forages are provided.
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
Low-Melt Poly(Amic Acids) and Polyimides and Their Uses
NASA Technical Reports Server (NTRS)
Parrish, Clyde F. (Inventor); Jolley, Scott T. (Inventor); Gibson, Tracy L. (Inventor); Snyder, Sarah J. (Inventor); Williams, Martha K. (Inventor)
2016-01-01
Provided are low-melt polyimides and poly(amic acids) (PAAs) for use as adhesives, and methods of using the materials for attaching two substrates. The methods typically form an adhesive bond that is hermetically sealed to both substrates. Additionally, the method typically forms a cross-linked bonding material that is flexible.
Amador, Carolina; Urban, Matthew W; Chen, Shigao; Greenleaf, James F
2012-01-01
Elasticity imaging methods have been used to study tissue mechanical properties and have demonstrated that tissue elasticity changes with disease state. In current shear wave elasticity imaging methods typically only shear wave speed is measured and rheological models, e.g., Kelvin-Voigt, Maxwell and Standard Linear Solid, are used to solve for tissue mechanical properties such as the shear viscoelastic complex modulus. This paper presents a method to quantify viscoelastic material properties in a model-independent way by estimating the complex shear elastic modulus over a wide frequency range using time-dependent creep response induced by acoustic radiation force. This radiation force induced creep (RFIC) method uses a conversion formula that is the analytic solution of a constitutive equation. The proposed method in combination with Shearwave Dispersion Ultrasound Vibrometry (SDUV) is used to measure the complex modulus so that knowledge of the applied radiation force magnitude is not necessary. The conversion formula is shown to be sensitive to sampling frequency and the first reliable measure in time according to numerical simulations using the Kelvin-Voigt model creep strain and compliance. Representative model-free shear complex moduli from homogeneous tissue mimicking phantoms and one excised swine kidney were obtained. This work proposes a novel model-free ultrasound-based elasticity method that does not require a rheological model with associated fitting requirements. PMID:22345425
Second-order variational equations for N-body simulations
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2016-07-01
First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
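The construction is not specific to the N-body problem. As a minimal illustration (not the REBOUND implementation), the Python sketch below integrates the first- and second-order variational equations of a pendulum with respect to its initial angle alongside the trajectory, and checks the resulting second derivative against a finite-difference estimate.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    # s = [theta, omega, dtheta, domega, d2theta, d2omega], where d* and d2* are
    # the first- and second-order variations with respect to the initial angle.
    th, om, dth, dom, d2th, d2om = s
    return [om, -np.sin(th),                                  # pendulum
            dom, -np.cos(th) * dth,                           # first-order variation
            d2om, -np.cos(th) * d2th + np.sin(th) * dth**2]   # second-order variation

def state_at(t_end, theta0):
    s0 = [theta0, 0.0, 1.0, 0.0, 0.0, 0.0]   # d theta / d theta0 = 1 at t = 0
    sol = solve_ivp(rhs, (0.0, t_end), s0, rtol=1e-10, atol=1e-10)
    return sol.y[:, -1]

theta0, t_end, h = 1.0, 10.0, 1e-3
th, om, dth, dom, d2th, d2om = state_at(t_end, theta0)
fd = (state_at(t_end, theta0 + h)[0] - 2 * th + state_at(t_end, theta0 - h)[0]) / h**2
print(d2th, fd)   # the variational and finite-difference estimates should agree closely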
Amador, Carolina; Urban, Matthew W; Chen, Shigao; Greenleaf, James F
2012-03-07
Elasticity imaging methods have been used to study tissue mechanical properties and have demonstrated that tissue elasticity changes with disease state. In current shear wave elasticity imaging methods typically only shear wave speed is measured and rheological models, e.g. Kelvin-Voigt, Maxwell and Standard Linear Solid, are used to solve for tissue mechanical properties such as the shear viscoelastic complex modulus. This paper presents a method to quantify viscoelastic material properties in a model-independent way by estimating the complex shear elastic modulus over a wide frequency range using time-dependent creep response induced by acoustic radiation force. This radiation force induced creep method uses a conversion formula that is the analytic solution of a constitutive equation. The proposed method in combination with shearwave dispersion ultrasound vibrometry is used to measure the complex modulus so that knowledge of the applied radiation force magnitude is not necessary. The conversion formula is shown to be sensitive to sampling frequency and the first reliable measure in time according to numerical simulations using the Kelvin-Voigt model creep strain and compliance. Representative model-free shear complex moduli from homogeneous tissue mimicking phantoms and one excised swine kidney were obtained. This work proposes a novel model-free ultrasound-based elasticity method that does not require a rheological model with associated fitting requirements.
Guzmán-Larralde, Adriana J; Suaste-Dzul, Alba P; Gallou, Adrien; Peña-Carrillo, Kenzy I
2017-01-01
Because of the tiny size of microhymenoptera, successful morphological identification typically requires specific mounting protocols that require time, skills, and experience. Molecular taxonomic identification is an alternative, but many DNA extraction protocols call for maceration of the whole specimen, which is not compatible with preserving museum vouchers. Thus, non-destructive DNA isolation methods are attractive alternatives for obtaining DNA without damaging sample individuals. However, their performance needs to be assessed in microhymenopterans. We evaluated six non-destructive methods: (A) DNeasy® Blood & Tissue Kit; (B) DNeasy® Blood & Tissue Kit, modified; (C) Protocol with CaCl 2 buffer; (D) Protocol with CaCl 2 buffer, modified; (E) HotSHOT; and (F) Direct PCR. The performance of each DNA extraction method was tested across several microhymenopteran species by attempting to amplify the mitochondrial gene COI from insect specimens of varying ages: 1 day, 4 months, 3 years, 12 years, and 23 years. Methods B and D allowed COI amplification in all insects, while methods A, C, and E were successful in DNA amplification from insects up to 12 years old. Method F, the fastest, was useful in insects up to 4 months old. Finally, we adapted permanent slide preparation in Canada balsam for every technique. The results reported allow for combining morphological and molecular methodologies for taxonomic studies.
NASA Astrophysics Data System (ADS)
Desnijder, Karel; Hanselaer, Peter; Meuret, Youri
2016-04-01
A key requirement to obtain a uniform luminance for a side-lit LED backlight is the optimised spatial pattern of structures on the light guide that extract the light. The generation of such a scatter pattern is usually performed by applying an iterative approach. In each iteration, the luminance distribution of the backlight with a particular scatter pattern is analysed. This is typically performed with a brute-force ray-tracing algorithm, although this approach results in a time-consuming optimisation process. In this study, the Adding-Doubling method is explored as an alternative way of evaluating the luminance of a backlight. Due to the similarities between light propagating in a backlight with extraction structures and light scattering in a cloud of light scatterers, the Adding-Doubling method, which is used to model the latter, can also be used to model the light distribution in a backlight. The backlight problem is translated to a form upon which the Adding-Doubling method is directly applicable. The calculated luminance for a simple uniform extraction pattern with the Adding-Doubling method matches the luminance generated by a commercial raytracer very well. Although successful, no clear computational advantage over ray tracers is realised. However, the description of light propagation in a light guide used by the Adding-Doubling method also makes it possible to enhance the efficiency of brute-force ray-tracing algorithms. The performance of this enhanced ray-tracing approach for the simulation of backlights is also evaluated against a typical brute-force ray-tracing approach.
Special nuclear material simulation device
Leckey, John H.; DeMint, Amy; Gooch, Jack; Hawk, Todd; Pickett, Chris A.; Blessinger, Chris; York, Robbie L.
2014-08-12
An apparatus for simulating special nuclear material is provided. The apparatus typically contains a small quantity of special nuclear material (SNM) in a configuration that simulates a much larger quantity of SNM. Generally the apparatus includes a spherical shell that is formed from an alloy containing a small quantity of highly enriched uranium. Also typically provided is a core of depleted uranium. A spacer, typically aluminum, may be used to separate the depleted uranium from the shell of uranium alloy. A cladding, typically made of titanium, is provided to seal the source. Methods are provided to simulate SNM for testing radiation monitoring portals. Typically the methods use at least one primary SNM spectral line and exclude at least one secondary SNM spectral line.
Histogram-driven cupping correction (HDCC) in CT
NASA Astrophysics Data System (ADS)
Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.
2010-04-01
Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw-data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements a C-arm flat-detector CT (FD-CT) system with a 30×40 cm2 detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts in both simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients and performs only a linear combination of basis images, thus executing without time-consuming operations. HDCC reduced cupping artifacts without the need for pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. The method can also be combined with other cupping correction algorithms or applied in a calibration manner.
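A rough sketch of the optimization step described above, with synthetic stand-in images instead of GPU forward/backprojected basis images: the joint entropy of an image and its gradient magnitude is estimated from a 2-D histogram and minimized over the coefficients of a linear combination of basis images with a simplex (Nelder-Mead) search. The phantom, basis images and polynomial order are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def joint_entropy(img, bins=32):
    g0, g1 = np.gradient(img)
    grad = np.hypot(g0, g1)
    hist, _, _ = np.histogram2d(img.ravel(), grad.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
shape = (128, 128)
yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
cup = ((xx - 64) ** 2 + (yy - 64) ** 2) / 64.0 ** 2        # cupping-like bowl shape
phantom = 1.0 + rng.normal(0.0, 0.02, shape)               # ideally homogeneous object
cupped = phantom - 0.4 * cup + 0.1 * cup ** 2              # image degraded by cupping

# "Basis images" standing in for reconstructions of the polynomial raw-data terms
basis = [cupped, cup, cup ** 2]

def objective(c):
    img = basis[0] + c[0] * basis[1] + c[1] * basis[2]
    return joint_entropy(img)

res = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
print("estimated correction coefficients:", res.x)  # roughly [0.4, -0.1] if the entropy heuristic succeeds
```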
Quantifying reproducibility in computational biology: the case of the tuberculosis drugome.
Garijo, Daniel; Kinnings, Sarah; Xie, Li; Xie, Lei; Zhang, Yinliang; Bourne, Philip E; Gil, Yolanda
2013-01-01
How easy is it to reproduce the results found in a typical computational biology paper? Either through experience or intuition the reader will already know that the answer is with difficulty or not at all. In this paper we attempt to quantify this difficulty by reproducing a previously published paper for different classes of users (ranging from users with little expertise to domain experts) and suggest ways in which the situation might be improved. Quantification is achieved by estimating the time required to reproduce each of the steps in the method described in the original paper and to make them part of an explicit workflow that reproduces the original results. Reproducing the method took several months of effort, and required using new versions and new software that posed challenges to reconstructing and validating the results. The quantification leads to "reproducibility maps" that reveal that novice researchers would only be able to reproduce a few of the steps in the method, and that only expert researchers with advanced knowledge of the domain would be able to reproduce the method in its entirety. The workflow itself is published as an online resource together with supporting software and data. The paper concludes with a brief discussion of the complexities of requiring reproducibility in terms of cost versus benefit, and desiderata based on our observations and guidelines for improving reproducibility. This has implications not only in reproducing the work of others from published papers, but also in reproducing work from one's own laboratory.
Robinson, Eleanor M; Trumble, Stephen J; Subedi, Bikram; Sanders, Rebel; Usenko, Sascha
2013-12-06
Lipid-rich matrices are often sinks for lipophilic contaminants, such as pesticides, polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs). Typically, methods for contaminant extraction and cleanup for lipid-rich matrices require multiple cleanup steps; however, a selective pressurized liquid extraction (SPLE) technique requiring no additional cleanup has been developed for the simultaneous extraction and cleanup of whale earwax (cerumen; a lipid-rich matrix). Whale earwax accumulates in select whale species over their lifetime to form wax earplugs. Typically used as an aging technique in cetaceans, the layers or laminae that comprise the earplug are thought to be associated with annual or semiannual migration and feeding patterns. Whale earplugs (earwax) represent a unique matrix capable of recording and archiving whales' lifetime contaminant profiles. This study reports the first analytical method developed for identifying and quantifying lipophilic persistent organic pollutants (POPs) in a whale earplug, including organochlorine pesticides, polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs). The analytical method was developed using SPLE to extract contaminants from ∼0.25 to 0.5 g aliquots of each lamina of a sectioned earplug. The SPLE was optimized for cleanup adsorbents (basic alumina, silica gel, and Florisil(®)), adsorbent to sample ratio, and adsorbent order. In the optimized SPLE method, the earwax homogenate was placed within the extraction cell on top of basic alumina (5 g), silica gel (15 g), and Florisil(®) (10 g) and the target analytes were extracted from the homogenate using 1:1 (v/v) dichloromethane:hexane. POPs were analyzed using gas chromatography-mass spectrometry with electron capture negative ionization and electron impact ionization. The average percent recoveries for the POPs were 91% (±6% relative standard deviation), while limits of detection and quantification ranged from 0.00057 to 0.96 ng g(-1) and 0.0017 to 2.9 ng g(-1), respectively. Pesticides, PCBs, and PBDEs were measured in a single blue whale (Balaenoptera musculus) cerumen lamina at concentrations ranging from 0.11 to 150 ng g(-1). Copyright © 2013 Elsevier B.V. All rights reserved.
Face-to-face interference in typical and atypical development
Riby, Deborah M; Doherty-Sneddon, Gwyneth; Whittle, Lisa
2012-01-01
Visual communication cues facilitate interpersonal communication. It is important that we look at faces to retrieve and subsequently process such cues. It is also important that we sometimes look away from faces as they increase cognitive load that may interfere with online processing. Indeed, when typically developing individuals hold face gaze it interferes with task completion. In this novel study we quantify face interference for the first time in Williams syndrome (WS) and Autism Spectrum Disorder (ASD). These disorders of development impact on cognition and social attention, but how do faces interfere with cognitive processing? Individuals developing typically as well as those with ASD (n = 19) and WS (n = 16) were recorded during a question and answer session that involved mathematics questions. In phase 1 gaze behaviour was not manipulated, but in phase 2 participants were required to maintain eye contact with the experimenter at all times. Looking at faces decreased task accuracy for individuals who were developing typically. Critically, the same pattern was seen in WS and ASD, whereby task performance decreased when participants were required to hold face gaze. The results show that looking at faces interferes with task performance in all groups. This finding requires the caveat that individuals with WS and ASD found it harder than individuals who were developing typically to maintain eye contact throughout the interaction. Individuals with ASD struggled to hold eye contact at all points of the interaction while those with WS found it especially difficult when thinking. PMID:22356183
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call non-homogeneous iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
X-ray phase-contrast tomography for high-spatial-resolution zebrafish muscle imaging
NASA Astrophysics Data System (ADS)
Vågberg, William; Larsson, Daniel H.; Li, Mei; Arner, Anders; Hertz, Hans M.
2015-11-01
Imaging of muscular structure with cellular or subcellular detail in whole-body animal models is of key importance for understanding muscular disease and assessing interventions. Classical histological methods for high-resolution imaging require excision, fixation and staining. Here we show that the three-dimensional muscular structure of unstained whole zebrafish can be imaged with sub-5 μm detail with X-ray phase-contrast tomography. Our method relies on a laboratory propagation-based phase-contrast system tailored for detection of low-contrast 4-6 μm subcellular myofibrils. The method is demonstrated on zebrafish larvae at 20 days post fertilization, and comparative histology confirms that we resolve individual myofibrils in the whole-body animal. X-ray imaging of healthy zebrafish shows the expected structured muscle pattern, while a specimen with a dystrophin deficiency (sapje) displays an unstructured pattern, typical of Duchenne muscular dystrophy. The method opens up whole-body imaging with sub-cellular detail for other types of soft tissue and different animal models as well.
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.
1992-01-01
We investigate the convergence properties of Lambda-acceleration methods for non-LTE radiative transfer problems in planar and spherical geometry. Matrix elements of the 'exact' Lambda-operator are used to accelerate convergence to a solution in which both the radiative transfer and atomic rate equations are simultaneously satisfied. Convergence properties of two-level and multilevel atomic systems are investigated for methods using: (1) the complete Lambda-operator, and (2) the diagonal of the Lambda-operator. We find that the convergence properties for the method utilizing the complete Lambda-operator are significantly better than those of the diagonal Lambda-operator method, often reducing the number of iterations needed for convergence by a factor of between two and seven. However, the overall computational time required for large-scale calculations - that is, those with many atomic levels and spatial zones - is typically a factor of a few larger for the complete Lambda-operator method, suggesting that the approach is best applied to problems in which convergence is especially difficult.
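As a toy illustration of the acceleration being compared (a two-level atom with a made-up Lambda matrix and a hypothetical photon destruction probability ε, not the authors' planar/spherical transfer solver), the sketch below counts the iterations needed by the full-operator and diagonal-operator updates.

```python
import numpy as np

n, eps = 40, 1e-3                       # depth points, photon destruction probability
B = np.ones(n)                          # Planck function (normalized)

# Made-up Lambda matrix with row sums slightly below one (a small escape term),
# standing in for the true radiative-transfer operator.
x = np.arange(n)
Lam = np.exp(-np.abs(x[:, None] - x[None, :]) / 3.0)
Lam /= Lam.sum(axis=1, keepdims=True) * 1.05

def ali_solve(Lam_star, tol=1e-8, max_iter=500):
    """Accelerated Lambda iteration for S = (1-eps)*Lam@S + eps*B using an
    approximate operator Lam_star (full or diagonal) as preconditioner."""
    S = B.copy()
    M = np.linalg.inv(np.eye(n) - (1 - eps) * Lam_star)
    for it in range(1, max_iter + 1):
        residual = (1 - eps) * (Lam @ S) + eps * B - S
        S = S + M @ residual
        if np.max(np.abs(residual)) < tol:
            return it
    return max_iter

print("iterations, full operator    :", ali_solve(Lam))
print("iterations, diagonal operator:", ali_solve(np.diag(np.diag(Lam))))
```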
Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.
Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter
2017-09-01
An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low-rank matrix completion methods. A low-dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need to apply a forward and an inverse Fourier transform in each iteration, as required in previously proposed iterative reconstruction methods for undersampled MRF data. The projection onto the low-dimensional data subspace is performed as a matrix multiplication instead of the singular value thresholding typically used in low-rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
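A minimal sketch of the k-space subspace idea on synthetic low-rank data (shapes, rank and sampling fraction are assumptions): the temporal subspace is estimated by an SVD of fully time-sampled calibration rows, and missing samples are then filled by alternating a subspace projection, implemented as a matrix multiplication, with re-insertion of the measured samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n_k, n_t, rank = 200, 100, 5             # k-space locations, time frames, true rank

# Synthetic low-rank k-space data matrix (rows: k-space locations, cols: time frames)
X_true = rng.normal(size=(n_k, rank)) @ rng.normal(size=(rank, n_t))

# Calibration: a small set of rows fully sampled in time -> temporal basis V_r
calib = X_true[:20, :]
_, _, Vh = np.linalg.svd(calib, full_matrices=False)
V_r = Vh[:rank].conj().T                  # (n_t, rank)

# Undersampling mask: ~40% of the samples are measured
mask = rng.random((n_k, n_t)) < 0.4
Y = np.where(mask, X_true, 0.0)

X = Y.copy()
for _ in range(50):
    X = X @ V_r @ V_r.conj().T            # projection onto the temporal subspace
    X[mask] = Y[mask]                     # data consistency: keep measured samples

print("relative reconstruction error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```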
Human exposures to monomers resulting from consumer contact with polymers.
Leber, A P
2001-06-01
Many consumer products are composed completely, or in part, of polymeric materials. Direct or indirect human contact results in potential exposures to monomers as a result of migration of trace amounts from the polymeric matrix into foods, the skin or other bodily surfaces. Typically, residual monomer levels in these polymers are <100 p.p.m., and represent exposures well below those observable in traditional toxicity testing. These product applications thus require alternative methods for evaluating health risks relating to monomer exposures. A typical approach includes: (a) assessment of potential human contacts for specific polymer uses; (b) utilization of data from toxicity testing of pure monomers, e.g. cancer bioassay results; and (c) mathematical risk assessment methods. Exposure potentials are measured by one of two analytical procedures: (1) migration of monomer from polymer into a simulant solvent (e.g. alcohol, acidic water, vegetable oil) appropriate for the intended use of the product (e.g. beer cans, food jars, packaging adhesive, dairy hose); or (2) total monomer content of the polymer, providing worst-case values for migratable monomer. Application of toxicity data typically involves NOEL or benchmark values for non-cancer endpoints, or tumorigenicity potencies for monomers demonstrated to be carcinogens. Risk assessments provide exposure 'safety margin' ratios between levels that: (1) are projected to be safe according to toxicity information, and (2) are potential monomer exposures posed by the intended use of the consumer product. This paper includes an example of a health risk assessment for a chewing gum polymer for which exposures to trace levels of butadiene monomer occur.
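A hypothetical worked example of the 'safety margin' ratio described above; all numbers are illustrative, not taken from the paper.

```python
# Hypothetical exposure "safety margin" for a residual monomer migrating from packaging.
migrated_conc_mg_per_kg_food = 0.05      # monomer found in the food simulant
daily_food_intake_kg = 0.2               # assumed consumption of the packaged food
body_weight_kg = 60.0

exposure_mg_per_kg_bw_day = migrated_conc_mg_per_kg_food * daily_food_intake_kg / body_weight_kg
noel_mg_per_kg_bw_day = 5.0              # no-observed-effect level from toxicity testing (assumed)

safety_margin = noel_mg_per_kg_bw_day / exposure_mg_per_kg_bw_day
print(f"estimated exposure: {exposure_mg_per_kg_bw_day:.2e} mg/kg bw/day")
print(f"safety margin     : {safety_margin:.0f}")
```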
Methods of viscosity measurements in sealed ampoules
NASA Astrophysics Data System (ADS)
Mazuruk, Konstantin
1999-07-01
Viscosity of semiconductor and metallic melts is usually measured by the oscillating cup method. In this method the melt is contained in a vacuum-sealed silica ampoule, so problems related to volatility, contamination, and high temperature and pressure can be alleviated. In a typical design, the time required for a single measurement is of the order of one hour. In order to reduce this time to the minute range, a high-resolution angular detection system is implemented in our design of the viscometer. Furthermore, an electromagnet generating a rotational magnetic field (RMF) is incorporated into the apparatus. This magnetic field can be used to remotely and nonintrusively measure the electrical conductivity of the melt. It can also be used to induce a well-controlled rotational flow in the system. The transient behavior of this flow can potentially yield the viscosity of the fluid. Based on the RMF implementation, two novel viscometry methods are proposed in this work: a) the transient torque method, and b) the resonance method. A unified theoretical approach to the three methods is presented along with initial test results from the constructed apparatus. Advantages of each method are discussed.
Validating the Use of Deep Learning Neural Networks for Correction of Large Hydrometric Datasets
NASA Astrophysics Data System (ADS)
Frazier, N.; Ogden, F. L.; Regina, J. A.; Cheng, Y.
2017-12-01
Collection and validation of Earth systems data can be time-consuming and labor-intensive. In particular, high-resolution hydrometric data, including rainfall and streamflow measurements, are difficult to obtain due to a multitude of complicating factors. Measurement equipment is subject to clogs, environmental disturbances, and sensor drift. Manual intervention is typically required to identify, correct, and validate these data. Weirs can become clogged and the pressure transducer may float or drift over time. We typically employ a graphical tool called Time Series Editor to manually remove clogs and sensor drift from the data. However, this process is highly subjective and requires hydrological expertise. Two different people may produce two different data sets. To use these data for scientific discovery and model validation, a more consistent method is needed to process this field data. Deep learning neural networks have proved to be excellent mechanisms for recognizing patterns in data. We explore the use of Recurrent Neural Networks (RNN) to capture the patterns in the data over time using various gating mechanisms (LSTM and GRU), network architectures, and hyper-parameters to build an automated data correction model. We also explore the amount of manually corrected training data required to train the network to reasonable accuracy. The benefits of this approach are that the time to process a data set is significantly reduced, and the results are 100% reproducible after training is complete. Additionally, we train the RNN and calibrate a physically-based hydrological model against the same portion of data. Both the RNN and the model are applied to the remaining data using a split-sample methodology. Performance of the machine-learning model is evaluated for plausibility by comparison with the output of the hydrological model, and this analysis identifies potential periods where additional investigation is warranted.
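A minimal sketch, assuming PyTorch and synthetic drift data, of the kind of recurrent corrector explored here: a GRU reads a raw sensor series and is trained to output the manually corrected series. Shapes, hyper-parameters and the drift model are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GRUCorrector(nn.Module):
    """Maps a raw sensor time series to a corrected series, sample by sample."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, time, 1)
        h, _ = self.gru(x)
        return self.head(h)                # (batch, time, 1)

# Toy data standing in for raw vs. manually corrected records: a clean signal plus slow drift
true_signal = torch.randn(8, 200, 1)
drift = torch.cumsum(torch.full((8, 200, 1), 2e-3), dim=1)   # slow sensor drift
raw = true_signal + drift

model = GRUCorrector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(raw), true_signal)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```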
Affinity+: Semi-Structured Brainstorming on Large Displays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burtner, Edwin R.; May, Richard A.; Scarberry, Randall E.
2013-04-27
Affinity diagramming is a powerful method for encouraging and capturing lateral thinking in a group environment. The Affinity+ concept was designed to improve the collaborative brainstorming process through the use of large display surfaces in conjunction with mobile devices like smart phones and tablets. The system works by capturing the ideas digitally and allowing users to manually sort and group them on a large touch screen. Additionally, Affinity+ incorporates theme detection, topic clustering, and other processing algorithms that help bring structured analytic techniques to the process without requiring explicit leadership roles and other overhead typically involved in these activities.
Digital-computer program for design analysis of salient, wound pole alternators
NASA Technical Reports Server (NTRS)
Repas, D. S.
1973-01-01
A digital computer program for analyzing the electromagnetic design of salient, wound pole alternators is presented. The program, which is written in FORTRAN 4, calculates the open-circuit saturation curve, the field-current requirements at rated voltage for various loads and losses, efficiency, reactances, time constants, and weights. The methods used to calculate some of these items are presented or appropriate references are cited. Instructions for using the program and typical program input and output for an alternator design are given, and an alphabetical list of most FORTRAN symbols and the complete program listing with flow charts are included.
Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.
Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P
2015-02-20
Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.
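A minimal sketch of a single-FFT Fresnel reconstruction with zero-padding (a generic textbook formulation, not necessarily the authors' implementation; normalization factors and pad placement are simplified), showing how padding scales the output pixel pitch λz/(NΔ).

```python
import numpy as np

def fresnel_single_fft(hologram, wavelength, z, pixel_pitch, pad_factor=2):
    """Single-FFT Fresnel reconstruction; zero-padding enlarges N and thereby
    scales the reconstruction pixel pitch: pitch_out = wavelength*z/(N*pixel_pitch)."""
    n0 = hologram.shape[0]
    n = n0 * pad_factor
    padded = np.zeros((n, n), dtype=complex)
    padded[:n0, :n0] = hologram                       # zero-padding
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n / 2) * pixel_pitch
    X, Y = np.meshgrid(x, x)
    chirp = np.exp(1j * k / (2 * z) * (X ** 2 + Y ** 2))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(padded * chirp)))
    pitch_out = wavelength * z / (n * pixel_pitch)
    return field, pitch_out

# Hypothetical parameters: 633 nm laser, 5 um camera pixels, 512x512 hologram
holo = np.random.rand(512, 512)
field, pitch = fresnel_single_fft(holo, 633e-9, z=0.05, pixel_pitch=5e-6, pad_factor=2)
print("reconstruction pixel pitch [um]:", pitch * 1e6)
```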
NASA Astrophysics Data System (ADS)
Gerde, Janice R.; Christens-Barry, William A.
2011-08-01
In a project to meet requirements for CBP Laboratory analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS), a hybrid metrology system comprising both optical and touch probe devices has been assembled. A unique requirement must be met: to identify the interface between the "external surface area upper" (ESAU) and the sole, which is typically obscured in the samples of concern, without physically destroying the sample. The sample outer surface is determined by discrete point cloud coordinates obtained using laser scanner optical measurements. Measurements from the optically inaccessible insole region are obtained using a coordinate measuring machine (CMM). That surface similarly is defined by point cloud data. Mathematically, the individual CMM and scanner data sets are transformed into a single, common reference frame. Custom software then fits a polynomial surface to the insole data and extends it to intersect the mesh fitted to the outer surface point cloud. This line of intersection defines the required ESAU boundary, thus permitting further fractional area calculations to determine the percentage of materials present. With a draft method in place, and first-level method validation underway, we examine the transformation of the two dissimilar data sets into the single, common reference frame. We will also consider the six previously identified potential error factors versus the method process. This paper reports our ongoing work and discusses our findings to date.
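As a sketch of the surface-fitting step (synthetic points and a hypothetical polynomial order, not the custom software described above), a bivariate polynomial z = f(x, y) is fit to insole-like point cloud data by least squares and then evaluated beyond the data to mimic the extension toward the outer mesh.

```python
import numpy as np

def fit_poly_surface(points, order=2):
    """Least-squares fit of z = f(x, y) as a bivariate polynomial to an N x 3 point cloud."""
    x, y, z = points.T
    def design(xv, yv):
        cols = [xv ** i * yv ** j for i in range(order + 1) for j in range(order + 1 - i)]
        return np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design(x, y), z, rcond=None)
    return coeffs, design

# Hypothetical CMM insole points (synthetic): a gently curved surface plus measurement noise
rng = np.random.default_rng(0)
xy = rng.uniform(-50, 50, size=(500, 2))                     # mm
z = 0.002 * xy[:, 0] ** 2 + 0.001 * xy[:, 1] ** 2 + rng.normal(0, 0.05, 500)
coeffs, design = fit_poly_surface(np.column_stack([xy, z]))

# Extend (extrapolate) the fitted surface toward the region of the outer-surface mesh
query = np.array([[60.0, 10.0], [55.0, -20.0]])
z_extended = design(query[:, 0], query[:, 1]) @ coeffs
print("extrapolated surface heights [mm]:", z_extended)
```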
Performance limitations of label-free sensors in molecular diagnosis using complex samples
NASA Astrophysics Data System (ADS)
Varma, Manoj
2016-03-01
Label-free biosensors promised a paradigm involving direct detection of biomarkers from complex samples such as serum without requiring the multistep sample processing typical of labelled methods such as ELISA or immunofluorescence assays. Label-free sensors have witnessed decades of development, with a veritable zoo of techniques available today exploiting a multitude of physical effects. It is appropriate now to critically assess whether label-free technologies have succeeded in delivering their promise with respect to diagnostic applications, particularly ambitious goals such as early cancer detection using serum biomarkers, which require low limits of detection (LoD). Comparison of nearly 120 LoD values reported by labelled and label-free sensing approaches over a wide range of detection techniques and target molecules in serum revealed that labelled techniques achieve 2-3 orders of magnitude better LoDs. Data from experiments where labelled and label-free assays were performed simultaneously using the same assay parameters also confirm that the LoD achieved by labelled techniques is 2 to 3 orders of magnitude better than that by label-free techniques. Furthermore, label-free techniques required significant signal amplification, e.g. using nanoparticle-conjugated secondary antibodies, to achieve LoDs comparable to labelled methods, substantially deviating from the original "direct detection" paradigm. This finding has important implications on the practical limits of applying label-free detection methods for molecular diagnosis.
WALSH, TIMOTHY F.; JONES, ANDREA; BHARDWAJ, MANOJ; ...
2013-04-01
Finite element analysis of transient acoustic phenomena on unbounded exterior domains is very common in engineering analysis. In these problems there is a common need to compute the acoustic pressure at points outside of the acoustic mesh, since meshing to points of interest is impractical in many scenarios. In aeroacoustic calculations, for example, the acoustic pressure may be required at tens or hundreds of meters from the structure. In these cases, a method is needed for post-processing the acoustic results to compute the response at far-field points. In this paper, we compare two methods for computing far-field acoustic pressures, one derived directly from the infinite element solution, and the other from the transient version of the Kirchhoff integral. Here, we show that the infinite element approach alleviates the large storage requirements that are typical of Kirchhoff integral and related procedures, and also does not suffer from loss of accuracy that is an inherent part of computing numerical derivatives in the Kirchhoff integral. In order to further speed up and streamline the process of computing the acoustic response at points outside of the mesh, we also address the nonlinear iterative procedure needed for locating parametric coordinates within the host infinite element of far-field points, the parallelization of the overall process, linear solver requirements, and system stability considerations.
The design of a microscopic system for typical fluorescent in-situ hybridization applications
NASA Astrophysics Data System (ADS)
Yi, Dingrong; Xie, Shaochuan
2013-12-01
Fluorescence in situ hybridization (FISH) is a modern molecular biology technique used for the detection of genetic abnormalities in terms of the number and structure of chromosomes and genes. The FISH technique is typically employed for prenatal diagnosis of congenital dementia in obstetrics and gynecology departments. It is also routinely used to identify breast cancer patients who qualify for Her2-targeted therapy, a treatment known to be highly effective in this group. During the microscopic observation phase, the technician typically needs to count the green and red probe dots contained in a single nucleus and calculate their ratio. This procedure needs to be repeated for hundreds of nuclei. Successful implementation of FISH tests critically depends on a suitable fluorescence microscope, which is primarily imported from overseas because the complexity of such a system exceeds the maturity of the domestic optoelectronic industry. In this paper, the typical requirements of a fluorescence microscope suitable for FISH applications are first reviewed. The focus of this paper is on the system design and computational methods of an automated fluorescence microscope with high-magnification APO objectives, a fast-spinning automatic filter wheel, an automatic shutter, a cooled CCD camera used as a photo-detector, and a software platform for image acquisition, registration, pseudo-color generation, multi-channel fusing and multi-focus fusion. Preliminary results from FISH experiments indicate that this system satisfies routine FISH microscopic observation tasks.
NASA Technical Reports Server (NTRS)
Hart, Angela
2006-01-01
A description of internal cargo integration is presented. The topics include: 1) Typical Cargo for Launch/Disposal; 2) Cargo Delivery Requirements; 3) Cargo Return Requirements; and 4) Vehicle On-Orbit Stay Time.
Engineering Complex Embedded Systems with State Analysis and the Mission Data System
NASA Technical Reports Server (NTRS)
Ingham, Michel D.; Rasmussen, Robert D.; Bennett, Matthew B.; Moncada, Alex C.
2004-01-01
It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering methodology called State Analysis, which provides a process for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using State Analysis and how these requirements inform the design of the system software, using representative spacecraft examples.
ERIC Educational Resources Information Center
Graeber, Mary
The typical approach to the teaching of an elementary school science methods course for undergraduate students was compared with an experimental approach based upon activities appearing in the Conceptually Oriented Program in Elementary Science (COPES) teacher's guides. The typical approach was characterized by a coverage of many topics and a…
The Canonical Robot Command Language (CRCL).
Proctor, Frederick M; Balakirsky, Stephen B; Kootbally, Zeid; Kramer, Thomas R; Schlenoff, Craig I; Shackleford, William P
2016-01-01
Industrial robots can perform motion with sub-millimeter repeatability when programmed using the teach-and-playback method. While effective, this method requires significant up-front time, tying up the robot and a person during the teaching phase. Off-line programming can be used to generate robot programs, but the accuracy of this method is poor unless supplemented with good calibration to remove systematic errors, feed-forward models to anticipate robot response to loads, and sensing to compensate for unmodeled errors. These increase the complexity and up-front cost of the system, but the payback in the reduction of recurring teach programming time can be worth the effort. This payback especially benefits small-batch, short-turnaround applications typical of small-to-medium enterprises, who need the agility afforded by off-line application development to be competitive against low-cost manual labor. To fully benefit from this agile application tasking model, a common representation of tasks should be used that is understood by all of the resources required for the job: robots, tooling, sensors, and people. This paper describes an information model, the Canonical Robot Command Language (CRCL), which provides a high-level description of robot tasks and associated control and status information.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Typically, the yield calculation requires a large number of SPICE simulations, and these circuit simulations account for the largest proportion of the yield-calculation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model. The surrogate model is based on the design variables and process variables. The model is constructed from a set of sample points obtained by SPICE simulation; these points are used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model is able to calculate the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed an accelerated algorithm to further enhance the speed of the yield calculation. It is suitable for high-dimensional process variables and multi-performance applications.
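A simplified sketch of the surrogate idea (a plain linear lasso stands in for the paper's mixture surrogate, and a toy analytic function stands in for SPICE): train on a few hundred simulated samples, then run a cheap Monte Carlo on the surrogate to estimate the failure rate.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Stand-in for SPICE: a hypothetical read-margin response of an SRAM cell to
# 6 normalized process variables (e.g., transistor Vth shifts).
def spice_read_margin(x):
    return 0.25 - 0.08 * x[:, 0] + 0.05 * x[:, 1] - 0.03 * x[:, 2] + 0.02 * x[:, 3] * x[:, 1]

# Train the surrogate on a modest number of "simulated" sample points
X_train = rng.normal(size=(300, 6))
y_train = spice_read_margin(X_train)
surrogate = Lasso(alpha=1e-3).fit(X_train, y_train)

# Monte Carlo on the cheap surrogate to estimate the failure rate (margin < 0)
X_mc = rng.normal(size=(1_000_000, 6))
failure_rate = np.mean(surrogate.predict(X_mc) < 0.0)
print(f"estimated failure rate: {failure_rate:.2e}")
```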
The Canonical Robot Command Language (CRCL)
Proctor, Frederick M.; Balakirsky, Stephen B.; Kootbally, Zeid; Kramer, Thomas R.; Schlenoff, Craig I.; Shackleford, William P.
2017-01-01
Industrial robots can perform motion with sub-millimeter repeatability when programmed using the teach-and-playback method. While effective, this method requires significant up-front time, tying up the robot and a person during the teaching phase. Off-line programming can be used to generate robot programs, but the accuracy of this method is poor unless supplemented with good calibration to remove systematic errors, feed-forward models to anticipate robot response to loads, and sensing to compensate for unmodeled errors. These increase the complexity and up-front cost of the system, but the payback in the reduction of recurring teach programming time can be worth the effort. This payback especially benefits small-batch, short-turnaround applications typical of small-to-medium enterprises, who need the agility afforded by off-line application development to be competitive against low-cost manual labor. To fully benefit from this agile application tasking model, a common representation of tasks should be used that is understood by all of the resources required for the job: robots, tooling, sensors, and people. This paper describes an information model, the Canonical Robot Command Language (CRCL), which provides a high-level description of robot tasks and associated control and status information. PMID:28529393
NASA Astrophysics Data System (ADS)
An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.
2017-01-01
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
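To make the baseline scheme concrete, here is a sketch of standard nudging (not the time-delay generalization of Rey et al.) on the Lorenz-63 system, assimilating noisy observations of a single state variable; all parameter values are illustrative.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, n_steps, gain = 0.005, 10000, 20.0
rng = np.random.default_rng(0)

truth = np.array([1.0, 1.0, 1.0])          # "true" trajectory generating the observations
estimate = np.array([5.0, -5.0, 20.0])     # model state to be nudged toward the data
for step in range(n_steps):
    truth = truth + dt * lorenz(truth)
    obs_x = truth[0] + rng.normal(0, 0.1)  # noisy observation of x only
    nudge = np.array([gain * (obs_x - estimate[0]), 0.0, 0.0])
    estimate = estimate + dt * (lorenz(estimate) + nudge)

print("final state error:", np.linalg.norm(estimate - truth))
```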
Anomaly Detection Based on Local Nearest Neighbor Distance Descriptor in Crowded Scenes
Hu, Shiqiang; Zhang, Huanlong; Luo, Lingkun
2014-01-01
We propose a novel local nearest neighbor distance (LNND) descriptor for anomaly detection in crowded scenes. Compared with the commonly used low-level feature descriptors in previous works, the LNND descriptor has two major advantages. First, the LNND descriptor efficiently incorporates spatial and temporal contextual information around the video event, which is important for detecting anomalous interaction among multiple events, while most existing feature descriptors only contain the information of a single event. Second, the LNND descriptor is a compact representation and its dimensionality is typically much lower than that of low-level feature descriptors. Therefore, using the LNND descriptor in an anomaly detection method with offline training not only saves computation time and storage, but also avoids the drawbacks of high-dimensional feature descriptors. We validate the effectiveness of the LNND descriptor by conducting extensive experiments on different benchmark datasets. Experimental results show the promising performance of the LNND-based method against the state-of-the-art methods. It is worth noting that the LNND-based approach requires fewer intermediate processing steps, with no subsequent processing such as smoothing, yet achieves comparable or even better performance. PMID:25105164
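A generic k-nearest-neighbor distance anomaly score over feature vectors, shown as a simplified stand-in for the spatio-temporal LNND descriptor (the data, feature dimensionality and threshold are synthetic assumptions, not the authors' construction).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_feats = rng.normal(0.0, 1.0, size=(5000, 16))        # features from normal training video
test_feats = np.vstack([rng.normal(0.0, 1.0, size=(95, 16)),
                        rng.normal(4.0, 1.0, size=(5, 16))])  # last 5 rows simulate anomalies

nn = NearestNeighbors(n_neighbors=5).fit(train_feats)
dist, _ = nn.kneighbors(test_feats)
score = dist.mean(axis=1)                                   # anomaly score: mean distance to k neighbors
threshold = np.percentile(score[:95], 99)                   # hypothetical operating threshold
print("flagged as anomalous:", np.nonzero(score > threshold)[0])
```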
New Methods of Sample Preparation for Atom Probe Specimens
NASA Technical Reports Server (NTRS)
Kuhlman, Kimberly, R.; Kowalczyk, Robert S.; Ward, Jennifer R.; Wishard, James L.; Martens, Richard L.; Kelly, Thomas F.
2003-01-01
Magnetite is a common conductive mineral found on Earth and Mars. Disk-shaped precipitates approximately 40 nm in diameter have been shown to contain manganese and aluminum. Atom-probe field-ion microscopy (APFIM) is the only technique that can potentially quantify the composition of these precipitates. APFIM will be used to characterize geological and planetary materials, to analyze samples of interest for geomicrobiology, and for the metrology of nanoscale instrumentation. Previously, APFIM sample preparation was conducted by electropolishing, the method of sharp shards (MSS), or the Bosch process (deep reactive ion etching), with focused ion beam (FIB) milling as a final step. However, new methods are required for difficult samples: many materials are not easily fabricated using electropolishing, MSS, or the Bosch process; FIB milling is slow and expensive; and wet chemistry and reactive ion etching are typically limited to Si and other semiconductors. The dicing saw, commonly used to section semiconductor wafers into individual devices after manufacture, is a time-effective method for preparing high-aspect-ratio posts of poorly conducting materials. Femtosecond laser micromachining is also suitable for the preparation of posts. The FIB time required is reduced by about a factor of 10, and multi-tip specimens can easily be fabricated using the dicing saw.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W.; Stewart, James; Todd, Annika
Residential behavior-based (BB) programs use strategies grounded in the behavioral and social sciences to influence household energy use. These may include providing households with real-time or delayed feedback about their energy use; supplying energy efficiency education and tips; rewarding households for reducing their energy use; comparing households to their peers; and establishing games, tournaments, and competitions. BB programs often target multiple energy end uses and encourage energy savings, demand savings, or both. Savings from BB programs are usually a small percentage of energy use, typically less than 5 percent. Utilities will continue to implement residential BB programs as large-scale, randomized control trials (RCTs); however, some are now experimenting with alternative program designs that are smaller scale; involve new communication channels such as the web, social media, and text messaging; or that employ novel strategies for encouraging behavior change (for example, Facebook competitions). These programs will create new evaluation challenges and may require different evaluation methods than those currently employed to verify any savings they generate. Quasi-experimental methods, however, require stronger assumptions to yield valid savings estimates and may not measure savings with the same degree of validity and accuracy as randomized experiments.
Risk Evaluation in the Pre-Phase A Conceptual Design of Spacecraft
NASA Technical Reports Server (NTRS)
Fabisinski, Leo L., III; Maples, Charlotte Dauphne
2010-01-01
Typically, the most important decisions in the design of a spacecraft are made in the earliest stages of its conceptual design: the Pre-Phase A stages. It is in these stages that the greatest number of design alternatives is considered, and the greatest number of alternatives is rejected. The focus of Pre-Phase A conceptual development is on the evaluation and comparison of whole concepts and the larger-scale systems comprising those concepts. This comparison typically uses general Figures of Merit (FOMs) to quantify the comparative benefits of designs and alternative design features. Along with mass, performance, and cost, risk should be one of the major FOMs in evaluating design decisions during the conceptual design phases. However, risk is often given inadequate consideration in conceptual design practice. The reasons frequently given for this lack of attention to risk include: inadequate mission definition, lack of rigorous design requirements in early concept phases, lack of fidelity in risk assessment methods, and under-evaluation of risk as a viable FOM for design evaluation. In this paper, the role of risk evaluation in early conceptual design is discussed. The various requirements of a viable risk evaluation tool at the Pre-Phase A level are considered in light of the needs of a typical spacecraft design study. A technique for risk identification and evaluation is presented. The application of the risk identification and evaluation approach to the conceptual design process is discussed. Finally, a computational tool for risk profiling is presented and applied to assess the risk for an existing Pre-Phase A proposal. The resulting profile is compared to the risks identified for the proposal by other means.
Domestic Cats (Felis silvestris catus) Do Not Show Signs of Secure Attachment to Their Owners
Potter, Alice; Mills, Daniel Simon
2015-01-01
The Ainsworth Strange Situation Test (SST) has been widely used to demonstrate that the bond of both children and dogs to their primary carer typically meets the requirements of a secure attachment (i.e. the carer being perceived as a focus of safety and security in otherwise threatening environments), and has been adapted for cats with a similar claim made. However, methodological problems in this latter research make the claim that the cat-owner bond is typically a secure attachment, operationally definable by its behaviour in the SST, questionable. We therefore developed an adapted version of the SST with the necessary methodological controls, which include a full counterbalance of the procedure. A cross-over design experiment with 20 cat-owner pairs (10 each undertaking one of the two versions of the SST first) and continuous focal sampling was used to record the duration of a range of behavioural states expressed by the cats that might be useful for assessing secure attachment. Since data were not normally distributed, non-parametric analyses were used on those behaviours shown to be reliable across the two versions of the test (which excluded much cat behaviour). Although cats vocalised more when the owner rather than the stranger left the cat with the other individual, there was no other evidence consistent with the interpretation of the bond between a cat and its owner meeting the requirements of a secure attachment. These results are consistent with the view that adult cats are typically quite autonomous, even in their social relationships, and not necessarily dependent on others to provide a sense of security and safety. It is concluded that alternative methods need to be developed to characterise the normal psychological features of the cat-owner bond. PMID:26332470
Compressing Spin-Polarized 3He With a Modified Diaphragm Pump
Gentile, T. R.; Rich, D. R.; Thompson, A. K.; Snow, W. M.; Jones, G. L.
2001-01-01
Nuclear spin-polarized 3He gas at pressures on the order of 100 kPa (1 bar) is required for several applications, such as neutron spin filters and magnetic resonance imaging. The metastability-exchange optical pumping (MEOP) method for polarizing 3He gas can rapidly produce highly polarized gas, but the best results are obtained at much lower pressure (~0.1 kPa). We describe a compact compression apparatus for polarized gas that is based on a modified commercial diaphragm pump. The gas is polarized by MEOP at a typical pressure of 0.25 kPa (2.5 mbar), and compressed into a storage cell at a typical pressure of 100 kPa. In the storage cell, we have obtained 20 % to 35 % 3He polarization using pure 3He gas and 35 % to 50 % 3He polarization using 3He-4He mixtures. By maintaining the storage cell at liquid nitrogen temperature during compression, the density has been increased by a factor of four. PMID:27500044
Direct three-dimensional ultrasound-to-video registration using photoacoustic markers
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Kang, Jin U.; Taylor, Russell H.; Boctor, Emad M.
2013-06-01
Modern surgical procedures often fuse video with other imaging modalities to provide the surgeon with information support. This requires interventional guidance equipment and surgical navigation systems to register different tools and devices together, such as stereoscopic endoscopes and ultrasound (US) transducers. In this work, the focus is specifically on the registration between these two devices. Electromagnetic and optical trackers are typically used to acquire this registration, but they have various drawbacks, typically leading to target registration errors (TRE) of approximately 3 mm. We introduce photoacoustic markers for direct three-dimensional (3-D) US-to-video registration. The feasibility of this method was demonstrated on synthetic and ex vivo porcine liver, kidney, and fat phantoms with an air-coupled laser and a motorized 3-D US probe. The resulting TRE for each experiment ranged from 380 to 850 μm with standard deviations ranging from 150 to 450 μm. We also discuss a roadmap to bring this system into the surgical setting and possible challenges along the way.
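A minimal sketch of the point-based rigid registration underlying such a system: the classic least-squares (SVD) solution maps hypothetical photoacoustic marker positions from the ultrasound frame to the stereo-camera frame; the marker coordinates below are invented for illustration.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (N x 3 arrays)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical marker positions in the ultrasound volume [mm]
us_markers = np.array([[10.0, 5.0, 40.0], [25.0, 8.0, 42.0],
                       [18.0, 20.0, 38.0], [30.0, 25.0, 45.0]])

# The same markers as triangulated in the stereo-camera frame (simulated here)
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
cam_markers = us_markers @ R_true.T + np.array([5.0, -3.0, 100.0])

R, t = rigid_register(us_markers, cam_markers)
residual = cam_markers - (us_markers @ R.T + t)
print("fiducial registration error [mm]:", np.linalg.norm(residual, axis=1).max())
```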
Scaling up of renewable chemicals.
Sanford, Karl; Chotani, Gopal; Danielson, Nathan; Zahn, James A
2016-04-01
The transition of promising technologies for production of renewable chemicals from a laboratory scale to commercial scale is often difficult and expensive. As a result, the timeframe for commercialization is typically underestimated, resulting in much slower penetration of these promising new methods and products into the chemical industries. The theme of 'sugar is the next oil' connects biological, chemical, and thermochemical conversions of renewable feedstocks to products that are drop-in replacements for petroleum-derived chemicals or are new-to-market chemicals/materials. The latter typically offer a functionality advantage and can command higher prices that result in less severe scale-up challenges. However, for drop-in replacements, price is of paramount importance and competitive capital and operating expenditures are a prerequisite for success. Hence, scale-up of relevant technologies must be interfaced with effective and efficient management of both cell and steel factories. Details involved in all aspects of manufacturing, such as utilities, sterility, product recovery and purification, regulatory requirements, and emissions must be managed successfully. Copyright © 2016 Elsevier Ltd. All rights reserved.
Drag coefficients for loose reactor parts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, L.; Doster, J.M.; Mayo, C.W.
1997-12-01
Loose-part monitoring systems are capable of providing estimates of loose-part mass and energy as well as impact location. Additional information regarding potentially damaging loose parts can be obtained by estimating loose-part velocity on the basis of free motion dynamics within the flow. To estimate the loose-part velocity, the drag coefficient of the part must be known. Traditionally, drag coefficients of three-dimensional bodies are measured in wind tunnels, by towing in free air or liquids, and with drop tests. These methods have disadvantages with respect to measuring drag coefficients for loose parts in that they require a fixed orientation, or the flow field is inconsistent with the turbulent flow conditions found in reactor systems. Though drag coefficients for some regularly shaped objects can be found in the literature, many shapes representative of typical loose parts have not been investigated. In this work, drag coefficients are measured for typical loose-part shapes, including bolts, nuts, pins, and hand tools within the flow conditions expected in reactor coolant systems.
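As a worked illustration of how a measured drag coefficient feeds into a loose-part velocity estimate (all numbers hypothetical), the terminal slip velocity follows from balancing the buoyancy-corrected weight against drag.

```python
import numpy as np

# Hypothetical loose part: a steel hex nut in reactor coolant
mass = 0.030                 # kg
volume = 3.8e-6              # m^3
area = 3.2e-4                # m^2, projected frontal area
cd = 1.1                     # drag coefficient measured for this shape/orientation (assumed)
rho_fluid = 740.0            # kg/m^3, hot pressurized coolant (assumed)
g = 9.81                     # m/s^2

# Terminal (slip) velocity where the buoyancy-corrected weight balances the drag force
net_weight = (mass - rho_fluid * volume) * g
v_terminal = np.sqrt(2.0 * net_weight / (rho_fluid * area * cd))
print(f"terminal slip velocity: {v_terminal:.2f} m/s")
```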
Brummel, Sean S.; Gillen, Daniel L.
2014-01-01
Due to ethical and logistical concerns it is common for data monitoring committees to periodically monitor accruing clinical trial data to assess the safety, and possibly efficacy, of a new experimental treatment. When formalized, monitoring is typically implemented using group sequential methods. In some cases regulatory agencies have required that primary trial analyses should be based solely on the judgment of an independent review committee (IRC). The IRC assessments can produce difficulties for trial monitoring given the time lag typically associated with receiving assessments from the IRC. This results in a missing data problem wherein a surrogate measure of response may provide useful information for interim decisions and future monitoring strategies. In this paper, we present statistical tools that are helpful for monitoring a group sequential clinical trial with missing IRC data. We illustrate the proposed methodology in the case of binary endpoints under various missingness mechanisms including missing completely at random assessments and when missingness depends on the IRC’s measurement. PMID:25540717
A High-Resolution Measurement of Ball IR Black Paint's Low-Temperature Emissivity
NASA Technical Reports Server (NTRS)
Tuttle, Jim; Canavan, Ed; DiPirro, Mike; Li, Xiaoyi; Franck, Randy; Green, Dan
2011-01-01
High-emissivity paints are commonly used on thermal control system components. The total hemispheric emissivity values of such paints are typically high (nearly 1) at temperatures above about 100 Kelvin, but they drop off steeply at lower temperatures. A precise knowledge of this temperature-dependence is critical to designing passively-cooled components with low operating temperatures. Notable examples are the coatings on thermal radiators used to cool space-flight instruments to temperatures below 40 Kelvin. Past measurements of low-temperature paint emissivity have been challenging, often requiring large thermal chambers and typically producing data with high uncertainties below about 100 Kelvin. We describe a relatively inexpensive method of performing high-resolution emissivity measurements in a small cryostat. We present the results of such a measurement on Ball InfraRed Black™ (BIRB™), a proprietary surface coating produced by Ball Aerospace and Technologies Corp (BATC), which is used in spaceflight applications. We also describe a thermal model used in the error analysis.
Oxidative aliphatic C-H fluorination with manganese catalysts and fluoride ion
Liu, Wei; Huang, Xiongyi; Groves, John T
2014-01-01
Fluorination is a reaction that is useful in improving the chemical stability and changing the binding affinity of biologically active compounds. The protocol described here can be used to replace aliphatic, C(sp3)-H hydrogen in small molecules with fluorine. Notably, isolated methylene groups and unactivated benzylic sites are accessible. The method uses readily available manganese porphyrin and manganese salen catalysts and various fluoride ion reagents, including silver fluoride (AgF), tetrabutylammonium fluoride and triethylamine trihydrofluoride (TREAT·HF), as the source of fluorine. Typically, the reactions afford 50–70% yield of mono-fluorinated products in one step. Two representative examples, the fragrance component celestolide and the nonsteroidal anti-inflammatory drug ibuprofen, are described; they produced useful isolated quantities (250–300 mg, ~50% yield) of fluorinated material over periods of 1–8 h. The procedures are performed in a typical fume hood using ordinary laboratory glassware. No special precautions to rigorously exclude water are required. PMID:24177292
Saha, Krishanu; Mei, Ying; Reisterer, Colin M; Pyzocha, Neena Kenton; Yang, Jing; Muffat, Julien; Davies, Martyn C; Alexander, Morgan R; Langer, Robert; Anderson, Daniel G; Jaenisch, Rudolf
2011-11-15
The current gold standard for the culture of human pluripotent stem cells requires the use of a feeder layer of cells. Here, we develop a spatially defined culture system based on UV/ozone radiation modification of typical cell culture plastics to define a favorable surface environment for human pluripotent stem cell culture. Chemical and geometrical optimization of the surfaces enables control of early cell aggregation from fully dissociated cells, as predicted from a numerical model of cell migration, and results in significant increases in cell growth of undifferentiated cells. These chemically defined xeno-free substrates generate more than three times the number of cells than feeder-containing substrates per surface area. Further, reprogramming and typical gene-targeting protocols can be readily performed on these engineered surfaces. These substrates provide an attractive cell culture platform for the production of clinically relevant factor-free reprogrammed cells from patient tissue samples and facilitate the definition of standardized scale-up friendly methods for disease modeling and cell therapeutic applications.
Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data
NASA Astrophysics Data System (ADS)
Mobli, Mehdi
2015-07-01
The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (Nyquist rate). To achieve high-resolution requires acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum, compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. when two experiments are carried out where the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random seed dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
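A minimal sketch of the contrast between plain weighted-random NUS and a jittered (stratified) schedule drawn from the same decaying PDF; the weighting, grid size and the variability proxy are assumptions for illustration, not the authors' published schedule generator.

```python
import numpy as np

n_grid, n_samples, t2 = 256, 64, 80.0
t = np.arange(n_grid)
pdf = np.exp(-t / t2)
pdf /= pdf.sum()
cdf = np.cumsum(pdf)

def weighted_random_schedule(rng):
    """Plain NUS: independent draws (without replacement) from the decaying PDF."""
    return rng.choice(n_grid, size=n_samples, replace=False, p=pdf)

def jittered_schedule(rng):
    """Jittered NUS: one draw per equal-probability stratum of the same PDF
    (inverse-CDF stratified sampling; rare duplicate indices are simply kept here)."""
    u = (np.arange(n_samples) + rng.uniform(0, 1, n_samples)) / n_samples
    return np.searchsorted(cdf, u)

# Seed-to-seed spread of the summed signal envelope over the chosen points,
# a rough proxy for the "inherent sensitivity" of a schedule.
envelope = np.exp(-t / t2)
spread = lambda gen: np.std([envelope[gen(np.random.default_rng(s))].sum() for s in range(200)])
print("std across seeds, weighted random:", spread(weighted_random_schedule))
print("std across seeds, jittered       :", spread(jittered_schedule))
```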
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sherman, Max H.; Walker, Iain S.
Duct leakage has been identified as a major source of energy loss in residential buildings. Most duct leakage occurs at the connections to registers, plenums or branches in the duct system. At each of these connections a method of sealing the duct system is required. Typical sealing methods include tapes or mastics applied around the joints in the system. Field examinations of duct systems have typically shown that these seals tend to fail over extended periods of time. The Lawrence Berkeley National Laboratory has been testing sealant durability for several years. Typical duct tape (i.e. fabric-backed tapes with natural rubber adhesives) was found to fail more rapidly than all other duct sealants. This report summarizes the results of duct sealant durability testing of five UL 181B-FX listed duct tapes (three cloth tapes, a foil tape and an Oriented Polypropylene (OPP) tape). One of the cloth tapes was specifically developed in collaboration with a tape manufacturer to perform better in our durability testing. The first test involved the aging of common "core-to-collar joints" of flexible duct to sheet metal collars, and sheet metal "collar-to-plenum joints" pressurized with 200°F (93°C) air. The second test consisted of baking duct tape specimens in a constant 212°F (100°C) oven following the UL 181B-FX "Temperature Test" requirements. Additional tests were also performed on only two tapes using sheet metal collar-to-plenum joints. Since an unsealed flexible duct joint can have variable leakage depending on the positioning of the flexible duct core, the durability of the flexible duct joints could not be based on a criterion of 10% of the unsealed leakage. Nevertheless, the leakage of the sealed specimens prior to testing could be considered as a basis for a failure criterion. Visual inspection was also documented throughout the tests. The flexible duct core-to-collar joints were inspected monthly, while the sheet metal collar-to-plenum joints were inspected weekly. The baking test specimens were visually inspected weekly, and the durability was judged by the observed deterioration in terms of brittleness, cracking, flaking and blistering (the terminology used in the UL 181B-FX test procedure).
A convenient and accurate parallel Input/Output USB device for E-Prime.
Canto, Rosario; Bufalari, Ilaria; D'Ausilio, Alessandro
2011-03-01
Psychological and neurophysiological experiments require the accurate control of timing and synchrony for Input/Output signals. For instance, a typical Event-Related Potential (ERP) study requires an extremely accurate synchronization of stimulus delivery with recordings. This is typically done via computer software such as E-Prime, and fast communications are typically assured by the Parallel Port (PP). However, the PP is an old and disappearing technology that, for example, is no longer available on portable computers. Here we propose a convenient USB device enabling parallel I/O capabilities. We tested this device against the PP on both a desktop and a laptop machine in different stress tests. Our data demonstrate the accuracy of our system, which suggests that it may be a good substitute for the PP with E-Prime.
Vodovatov, A V; Balonov, M I; Golikov, V Yu; Shatsky, I G; Chipiga, L A; Bernhardsson, C
2017-04-01
In 2009-2014, dose surveys aimed at collecting adult patient data and parameters of the most common radiographic examinations were performed in six Russian regions. Typical patient doses were estimated for the selected examinations in terms of both entrance surface dose and effective dose. The 75th percentiles of the typical patient effective dose distributions were proposed as preliminary regional diagnostic reference levels (DRLs) for radiography. Differences between the 75th percentiles of the regional typical patient dose distributions did not exceed 30-50% for examinations with standardized clinical protocols (skull, chest and thoracic spine) and a factor of 1.5 for other examinations. Two different approaches for establishing national DRLs were evaluated: as the 75th percentile of a pooled regional sample of typical patient doses (pooled method) and as the median of the 75th percentiles of the regional typical patient dose distributions (median method). Differences between the pooled and median methods for effective dose did not exceed 20%. It was proposed to establish Russian national DRLs in effective dose using the pooled method. In addition, local authorities were granted an opportunity to establish regional DRLs if the local radiological practice and typical patient dose distributions are significantly different.
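The difference between the two aggregation approaches reduces to a one-line computation each; the sketch below uses synthetic regional dose samples (not the survey data) purely to show the arithmetic.

```python
import numpy as np

# Synthetic typical effective doses (mSv) for six regions, illustration only
rng = np.random.default_rng(1)
regions = [rng.lognormal(mean=np.log(0.15), sigma=0.4, size=n)
           for n in (40, 55, 32, 47, 60, 38)]

pooled_drl = np.percentile(np.concatenate(regions), 75)          # "pooled" method
median_drl = np.median([np.percentile(r, 75) for r in regions])  # "median" method

print(f"pooled: {pooled_drl:.3f} mSv, median-of-regional: {median_drl:.3f} mSv")
```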
Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes
NASA Astrophysics Data System (ADS)
Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.
2015-12-01
Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume-finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
The measurement of the transmission loss of single leaf walls and panels by an impulse method
NASA Astrophysics Data System (ADS)
Balilah, Y. A.; Gibbs, B. M.
1988-06-01
The standard methods of measurement and rating of sound insulation of panels and walls are generally time-consuming and require expensive and often bulky equipment. In addition, the methods establish only that there has been failure to comply with insulation requirements without indicating the mode of failure. An impulse technique is proposed for the measurement of walls and partitions in situ. The method requires the digital capture of a short duration signal generated by a loudspeaker, and the isolation of the direct component from other reflected and scattered components by time-of-flight methods and windowing. The signal, when transferred from the time to frequency domain by means of fast Fourier transforms, can yield the sound insulation of a partition expressed as a transfer function. Experimental problems in the use of this technique, including those resulting from sphericity of the incident wave front and concentric bending excitation of the partition, are identified and methods proposed for their elimination. Most of the results presented are of single leaf panels subjected to sound at normal incidence, although some measurements were undertaken at oblique incidence. The range of surface densities considered was 7-500 kg/m², the highest value corresponding to a brick and plaster wall of thickness 285 mm. Measurement is compared with theoretical prediction, at one-third octave intervals in a frequency range of 100-5000 Hz, or as a continuous function of frequency with a typical resolution of 12.5 Hz. The dynamic range of the measurement equipment sets an upper limit to the measurable transmission loss. For the equipment eventually employed this was represented by a random incidence value of 50 dB.
DOT National Transportation Integrated Search
2013-08-01
During late-night flash (LNF) mode (from late night to early morning hours), traffic signals flash yellow for one road (typically, the major road), requiring caution but no stopping, and flash red for the other road (typically, the minor road), requi...
Station-Keeping Requirements for Astronomical Imaging with Constellations of Free-Flying Collectors
NASA Technical Reports Server (NTRS)
Allen, Ronald J.
2004-01-01
The requirements on station-keeping for constellations of free-flying collectors coupled as (future) imaging arrays in space for astrophysics applications are discussed. The typical knowledge precision required in the plane of the array depends on the angular size of the targets of interest; it is generally at a level of tens of centimeters for typical stellar targets, becoming of order centimeters only for the widest attainable fields of view. In the "piston" direction, perpendicular to the array, the typical knowledge precision required depends on the bandwidth of the signal, and is at a level of tens of wavelengths for narrow approx. 1% signal bands, becoming of order one wavelength only for the broadest bandwidths expected to be useful. The significance of this result is that, at this level of precision, it may be possible to provide the necessary knowledge of array geometry without the use of signal photons, thereby allowing observations of faint targets. "Closure-phase" imaging is a technique which has been very successfully applied to surmount instabilities owing to equipment and to the atmosphere, and which appears to be directly applicable to space imaging arrays where station-keeping drifts play the same role as (slow) atmospheric and equipment instabilities.
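A back-of-envelope check of the bandwidth scaling (not taken from the paper; the wavelength is an assumed visible-band value) uses the coherence length L_c = lambda / (fractional bandwidth): for a roughly 1% band a piston-knowledge requirement of tens of wavelengths stays well inside L_c, while for very broad bands L_c shrinks to of order one wavelength.

```python
# Back-of-envelope check (not from the paper): piston-knowledge scale set by
# the coherence length L_c = lambda / fractional_bandwidth, assumed wavelength.
wavelength_um = 0.55                        # assumed visible-band observation
for frac_bw in (0.01, 0.05, 0.20, 0.50):    # 1% narrow band ... very broad band
    coherence_len_um = wavelength_um / frac_bw
    print(f"bandwidth {frac_bw:4.0%}: L_c ~ {coherence_len_um:6.1f} um "
          f"(~{coherence_len_um / wavelength_um:4.0f} wavelengths)")
```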
Soap-film coating: High-speed deposition of multilayer nanofilms
Zhang, Renyun; Andersson, Henrik A.; Andersson, Mattias; Andres, Britta; Edlund, Håkan; Edström, Per; Edvardsson, Sverker; Forsberg, Sven; Hummelgård, Magnus; Johansson, Niklas; Karlsson, Kristoffer; Nilsson, Hans-Erik; Norgren, Magnus; Olsen, Martin; Uesaka, Tetsu; Öhlund, Thomas; Olin, Håkan
2013-01-01
The coating of thin films is applied in numerous fields and many methods are employed for the deposition of these films. Some coating techniques may deposit films at high speed; for example, ordinary printing paper is coated with micrometre-thick layers of clay at a speed of tens of meters per second. However, to coat nanometre thin films at high speed, vacuum techniques are typically required, which increases the complexity of the process. Here, we report a simple wet chemical method for the high-speed coating of films with thicknesses at the nanometre level. This soap-film coating technique is based on forcing a substrate through a soap film that contains nanomaterials. Molecules and nanomaterials can be deposited at a thickness ranging from less than a monolayer to several layers at speeds up to meters per second. We believe that the soap-film coating method is potentially important for industrial-scale nanotechnology. PMID:23503102
Flexible and stackable terahertz metamaterials via silver-nanoparticle inkjet printing
NASA Astrophysics Data System (ADS)
Kashiwagi, K.; Xie, L.; Li, X.; Kageyama, T.; Miura, M.; Miyashita, H.; Kono, J.; Lee, S.-S.
2018-04-01
There is presently much interest in tunable, flexible, or reconfigurable metamaterial structures that work in the terahertz frequency range. They can be useful for a range of applications, including spectroscopy, sensing, imaging, and communications. Various methods based on microelectromechanical systems have been used for fabricating terahertz metamaterials, but they typically require high-cost facilities and involve a number of time-consuming and intricate processes. Here, we demonstrate a simple, robust, and cost-effective method for fabricating flexible and stackable multiresonant terahertz metamaterials, using silver nanoparticle inkjet printing. Using this method, we designed and fabricated two arrays of split-ring resonators (SRRs) having different resonant frequencies on separate sheets of paper and then combined the two arrays by stacking. Through terahertz time-domain spectroscopy, we observed resonances at the frequencies expected for the individual SRR arrays as well as at a new frequency due to coupling between the two SRR arrays.
A CLEAN-based method for mosaic deconvolution
NASA Astrophysics Data System (ADS)
Gueth, F.; Guilloteau, S.; Viallefond, F.
1995-03-01
Mosaicing may be used in aperture synthesis to map large fields of view. So far, only MEM techniques have been used to deconvolve mosaic images (Cornwell (1988)). A CLEAN-based method has been developed, in which the CLEAN components are located using a modified expression. This allows better utilization of the information and consequent noise reduction in the overlapping regions. Simulations show that this method gives correct clean maps and recovers most of the flux of the sources. Including the short-spacing visibilities in the data set is strongly required; their absence introduces an artificial lack of structure at the corresponding scales in the mosaic images. The formation of "stripes" in clean maps may also occur, but this phenomenon can be significantly reduced by using the Steer-Dewdney-Ito algorithm (Steer, Dewdney & Ito (1984)) to identify the CLEAN components. Typical IRAM interferometer pointing errors do not have a significant effect on the reconstructed images.
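For readers unfamiliar with CLEAN itself, a minimal single-field Hogbom-style loop is sketched below (this is the classical algorithm with a toy one-dimensional beam, not the mosaic-specific variant or the Steer-Dewdney-Ito component identification discussed above): each iteration subtracts a scaled, shifted copy of the point spread function at the current residual peak.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, n_iter=200, threshold=0.0):
    """Minimal 1-D Hogbom CLEAN sketch: subtract gain * peak * shifted PSF."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    half = len(psf) // 2                      # psf assumed odd-length, centred
    for _ in range(n_iter):
        peak = int(np.argmax(np.abs(residual)))
        if np.abs(residual[peak]) <= threshold:
            break
        comp = gain * residual[peak]
        model[peak] += comp
        lo, hi = max(0, peak - half), min(len(residual), peak + half + 1)
        residual[lo:hi] -= comp * psf[half - (peak - lo): half + (hi - peak)]
    return model, residual

# Toy usage: two point sources observed with a sinc-like beam
x = np.arange(257)
psf = np.sinc((x - 128) / 4.0)
dirty = 2.0 * np.roll(psf, -30) + 1.0 * np.roll(psf, 45)
model, residual = hogbom_clean(dirty, psf, gain=0.2, n_iter=500, threshold=0.02)
```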
TINS, target immobilized NMR screening: an efficient and sensitive method for ligand discovery.
Vanwetswinkel, Sophie; Heetebrij, Robert J; van Duynhoven, John; Hollander, Johan G; Filippov, Dmitri V; Hajduk, Philip J; Siegal, Gregg
2005-02-01
We propose a ligand screening method, called TINS (target immobilized NMR screening), which reduces the amount of target required for the fragment-based approach to drug discovery. Binding is detected by comparing 1D NMR spectra of compound mixtures in the presence of a target immobilized on a solid support with those of a control sample. The method has been validated by the detection of a variety of ligands for protein and nucleic acid targets (KD values from 60 to 5000 μM). The ligand binding capacity of a protein was undiminished after 2000 different compounds had been applied, indicating the potential to apply the assay for screening typical fragment libraries. TINS can be used in competition mode, allowing rapid characterization of the ligand binding site. TINS may allow screening of targets that are difficult to produce or that are insoluble, such as membrane proteins.
Grimes, David A
2009-12-01
The term "forgettable contraception" has received less attention in family planning than has "long-acting reversible contraception." Defined here as a method requiring attention no more often than every 3 years, forgettable contraception includes sterilization (female or male), intrauterine devices, and implants. Five principal factors determine contraceptive effectiveness: efficacy, compliance, continuation, fecundity, and the timing of coitus. Of these, compliance and continuation dominate; the key determinants of contraceptive effectiveness are human, not pharmacological. Human nature undermines methods with high theoretical efficacy, such as oral contraceptives and injectable contraceptives. By obviating the need to think about contraception for long intervals, forgettable contraception can help overcome our human fallibility. As a result, all forgettable contraception methods provide first-tier effectiveness (=2 pregnancies per 100 women per year) in typical use. Stated alternatively, the only class of contraceptives today with exclusively first-tier effectiveness is the one that can be started -- and then forgotten for years.
Garcia-Allende, P Beatriz; Mirapeix, Jesus; Conde, Olga M; Cobo, Adolfo; Lopez-Higuera, Jose M
2009-01-01
Plasma optical spectroscopy is widely employed in on-line welding diagnostics. The determination of the plasma electron temperature, which is typically selected as the output monitoring parameter, requires the identification of the atomic emission lines. As a consequence, additional processing stages are required, with a direct impact on the real-time performance of the technique. The line-to-continuum method is a feasible alternative spectroscopic approach and it is particularly interesting in terms of its computational efficiency. However, the monitoring signal depends strongly on the chosen emission line. In this paper, a feature selection methodology is proposed to resolve the uncertainty regarding the selection of the optimum spectral band, which allows the line-to-continuum method to be used for on-line welding diagnostics. Field tests have been conducted to demonstrate the feasibility of the solution.
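As a minimal illustration of the line-to-continuum signal itself (band positions and widths below are arbitrary placeholders; the paper's feature-selection procedure for choosing the optimum line is not reproduced):

```python
import numpy as np

def line_to_continuum(wavelength, intensity, line_nm, line_halfwidth=0.3, cont_offset=1.0):
    """Sketch: peak intensity inside the emission-line window divided by the
    mean continuum level sampled just outside it (window sizes illustrative)."""
    d = np.abs(wavelength - line_nm)
    in_line = d <= line_halfwidth
    in_cont = (d > line_halfwidth) & (d <= line_halfwidth + cont_offset)
    return intensity[in_line].max() / intensity[in_cont].mean()
```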
Identification of gamma-irradiated papaya, melon and watermelon
NASA Astrophysics Data System (ADS)
Marín-Huachaca, Nélida S.; Mancini-Filho, Jorge; Delincée, Henry; Villavicencio, Anna Lúcia C. H.
2004-09-01
Ionizing radiation can be used to control spoilage microorganisms and to increase the shelf life of fresh fruits and vegetables as a replacement for treatment with chemical fumigants. In order to enforce labelling regulations, methods for detecting the irradiation treatment directly in the produce are required. Recently, a number of detection methods for irradiated food have been adopted by the Codex Commission. A rapid screening method for qualitative detection of irradiation is the DNA Comet Assay. The applicability of the DNA Comet Assay for distinguishing irradiated papaya, melon, and watermelon was evaluated. The samples were treated in a 60Co facility at dose levels of 0.0, 0.5, 0.75, and 1.0 kGy. The irradiated samples showed typical DNA fragmentation whereas cells from non-irradiated ones appeared intact. In addition to the DNA Comet Assay, the half-embryo test was also applied to melon and watermelon to detect the irradiation treatment.
Shallow Reflection Method for Water-Filled Void Detection and Characterization
NASA Astrophysics Data System (ADS)
Zahari, M. N. H.; Madun, A.; Dahlan, S. H.; Joret, A.; Hazreek, Z. A. M.; Mohammad, A. H.; Izzaty, R. A.
2018-04-01
Shallow investigation is crucial for characterizing the subsurface voids commonly encountered in civil engineering, and one commonly used technique is the seismic-reflection method. An assessment of the effectiveness of such an approach is critical to determine whether the quality of the works meets the prescribed requirements. Conventional quality testing suffers from limitations, including limited coverage (in both area and depth) and problems with resolution quality. Traditionally, quality assurance measurements use laboratory and in-situ invasive and destructive tests. However, geophysical approaches, which are typically non-invasive and non-destructive, offer a method by which improvement of detection can be measured in a cost-effective way. Seismic reflection has proved useful for assessing void characteristics, and this paper evaluates the application of the shallow seismic-reflection method in characterizing the properties of a water-filled void at 0.34 m depth, specifically its detection and characterization using 2-dimensional tomography.
Image based method for aberration measurement of lithographic tools
NASA Astrophysics Data System (ADS)
Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa
2018-01-01
Information on the lens aberrations of lithographic tools is important as they directly affect the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Due to the advantage of lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not yield a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools, which only requires measuring two intensity-distribution images. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.
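Once a linear model of the form delta_I = A z relating image intensity changes to Zernike coefficients is available, the non-iterative retrieval step is a least-squares solve. The sketch below uses a random stand-in sensitivity matrix (the construction of the actual matrix from the imaging model is the substance of the paper and is not shown).

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_zernike = 500, 9
A = rng.normal(size=(n_pixels, n_zernike))       # assumed sensitivity matrix (stand-in)
z_true = rng.normal(scale=0.02, size=n_zernike)  # "true" aberration coefficients
delta_I = A @ z_true + rng.normal(scale=1e-3, size=n_pixels)  # simulated image change

z_est, *_ = np.linalg.lstsq(A, delta_I, rcond=None)  # non-iterative retrieval
print(np.round(z_est - z_true, 4))
```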
Discussion for possibility of some aerodynamic ground effect craft
NASA Astrophysics Data System (ADS)
Tanabe, Yoshikazu
1990-05-01
A pleasant, convenient, safe, and economical transportation method to supplement airplane transportation is currently required. This paper proposes an Aerodynamic Ground Effect Craft (AGEC) as this new transportation method, and studies its qualitative feasibility in comparison with present typical transportation methods such as transporter airplanes, flying boats, and linear motor cars, which share the characteristic of ultra-low-altitude cruising. Noteworthy points of the AGEC are its effective energy consumption relative to transportation capacity (exergy) and its ultra-low-altitude cruising, which is relatively safer in an emergency landing than a subsonic airplane's body landing. Though the AGEC has a shorter cruising range and smaller transportation capacity, its transportation efficiency is superior to that of airplanes and linear motor cars. There is no critical difficulty in building the AGEC at large size, and the AGEC is thought to be a very probable candidate to supplement airplane transportation in the near future.
Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation
NASA Technical Reports Server (NTRS)
Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.
2015-01-01
A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments is based upon a novel approach that relies on the global momentum conservation of the closed fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. A numerical example illustrates the method's application to prediction of bulk fluid behavior during a spacecraft ullage settling maneuver.
Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation
NASA Technical Reports Server (NTRS)
Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.
2015-01-01
A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments relies upon the global momentum conservation of the fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. Numerical examples illustrate the method's application to predicting bulk fluid motion including lateral propellant slosh in low-g conditions.
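To make the stream-and-collide structure concrete, the sketch below is a drastically simplified single-relaxation-time (BGK) D2Q9 kernel for a single-phase lattice fluid on a periodic domain; the MRT collision operator, non-ideal equation of state, wall-wetting model and force/moment bookkeeping described above are deliberately omitted.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Standard D2Q9 second-order equilibrium distribution."""
    cu = np.einsum("qd,xyd->xyq", c, u)
    usq = np.einsum("xyd,xyd->xy", u, u)
    return rho[..., None] * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq[..., None])

def lbm_step(f, tau=0.8):
    """One BGK collide-and-stream step with periodic boundaries."""
    rho = f.sum(axis=-1)
    u = np.einsum("xyq,qd->xyd", f, c) / rho[..., None]
    f += -(f - equilibrium(rho, u)) / tau           # collide
    for q, (cx, cy) in enumerate(c):                # stream
        f[..., q] = np.roll(f[..., q], (cx, cy), axis=(0, 1))
    return f

f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))   # quiescent start
for _ in range(100):
    f = lbm_step(f)
```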
A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices.
Ravi, Daniele; Wong, Charence; Lo, Benny; Yang, Guang-Zhong
2017-01-01
The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis where deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology, which combines features learned from inertial sensor data together with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral domain preprocessing is used before the data are passed onto the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
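A toy sketch of the spectral-domain preprocessing idea follows; window length, sampling layout and bin count are illustrative, and a small scikit-learn classifier stands in for the combined deep/shallow pipeline described above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def spectral_features(windows, n_bins=16):
    """Per-axis FFT magnitude of fixed-length inertial windows, truncated to
    the lowest n_bins bins (illustrative parameters, not the paper's)."""
    spec = np.abs(np.fft.rfft(windows, axis=-1))[..., :n_bins]
    return spec.reshape(len(windows), -1)

# Synthetic stand-in data: 200 windows x 3 axes x 128 samples, two activity labels
rng = np.random.default_rng(0)
X_time = rng.normal(size=(200, 3, 128))
y = rng.integers(0, 2, size=200)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
clf.fit(spectral_features(X_time), y)
```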
Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms
NASA Astrophysics Data System (ADS)
Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.
2016-10-01
The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.
Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms
Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.
2016-01-01
The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification. PMID:27762292
Chan, Ho Yin; Lankevich, Vladimir; Vekilov, Peter G.; Lubchenko, Vassiliy
2012-01-01
Toward a quantitative description of protein aggregation, we develop a computationally efficient method to evaluate the potential of mean force between two folded protein molecules that allows for complete sampling of their mutual orientation. Our model is valid at moderate ionic strengths and accounts for the actual charge distribution on the surface of the molecules, the dielectric discontinuity at the protein-solvent interface, and the possibility of protonation or deprotonation of surface residues induced by the electric field due to the other protein molecule. We apply the model to the protein lysozyme, whose solutions exhibit both mesoscopic clusters of protein-rich liquid and liquid-liquid separation; the former requires that the protein form complexes with typical lifetimes of the order of milliseconds. We find the electrostatic repulsion is typically lower than the prediction of the Derjaguin-Landau-Verwey-Overbeek theory. The Coulomb interaction in the lowest-energy docking configuration is nonrepulsive, despite the high positive charge on the molecules. Typical docking configurations barely involve protonation or deprotonation of surface residues. The obtained potential of mean force between folded lysozyme molecules is consistent with the location of the liquid-liquid coexistence, but produces dimers that are too short-lived for clusters to exist, suggesting lysozyme undergoes conformational changes during cluster formation. PMID:22768950
Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms.
Mirkovic, Djordje; Stepanian, Phillip M; Kelly, Jeffrey F; Chilson, Phillip B
2016-10-20
The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.
Universal and idiosyncratic characteristic lengths in bacterial genomes
NASA Astrophysics Data System (ADS)
Junier, Ivan; Frémont, Paul; Rivoire, Olivier
2018-05-01
In condensed matter physics, simplified descriptions are obtained by coarse-graining the features of a system at a certain characteristic length, defined as the typical length beyond which some properties are no longer correlated. From a physics standpoint, in vitro DNA thus has a characteristic length of 300 base pairs (bp), the Kuhn length of the molecule beyond which correlations in its orientations are typically lost. From a biology standpoint, in vivo DNA has a characteristic length of 1000 bp, the typical length of genes. Since bacteria live in very different physico-chemical conditions and since their genomes lack translational invariance, whether larger, universal characteristic lengths exist is a non-trivial question. Here, we examine this problem by leveraging the large number of fully sequenced genomes available in public databases. By analyzing GC content correlations and the evolutionary conservation of gene contexts (synteny) in hundreds of bacterial chromosomes, we conclude that a fundamental characteristic length around 10–20 kb can be defined. This characteristic length reflects elementary structures involved in the coordination of gene expression, which are present all along the genome of nearly all bacteria. Technically, reaching this conclusion required us to implement methods that are insensitive to the presence of large idiosyncratic genomic features, which may co-exist alongside these fundamental universal structures.
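One simple way to probe such a characteristic length from a raw sequence is an autocorrelation of windowed GC content; the sketch below is a crude illustration only (the window size and the 1/e criterion are arbitrary choices, and the synteny-based and robustness-enhanced analyses of the paper are not reproduced).

```python
import numpy as np

def gc_correlation_length(seq, window=1000, max_lag=100):
    """Crude sketch: lag (in bp) at which the autocorrelation of windowed GC
    content first drops below 1/e; assumes a sequence much longer than
    window * max_lag."""
    codes = np.frombuffer(seq.upper().encode(), dtype=np.uint8)
    is_gc = np.isin(codes, list(b"GC")).astype(float)
    n_win = len(is_gc) // window
    profile = is_gc[: n_win * window].reshape(n_win, window).mean(axis=1)
    profile -= profile.mean()
    acf = np.array([np.dot(profile[: len(profile) - k], profile[k:])
                    for k in range(max_lag)])
    acf /= acf[0]
    below = np.flatnonzero(acf < 1.0 / np.e)
    return int((below[0] if below.size else max_lag) * window)
```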
ERIC Educational Resources Information Center
Gissel, Richard L.
2010-01-01
Information system implementations require developers to first know what they must create and then determine how best to create it. The requirements determination phase of the system development life cycle typically determines what functions a system must perform and how well it must accomplish required functions. Implementation success depends on…
Cao, J; Zhang, X; Little, J C; Zhang, Y
2017-03-01
Semivolatile organic compounds (SVOCs) are present in many indoor materials. SVOC emissions can be characterized with a critical parameter, y0, the gas-phase SVOC concentration in equilibrium with the source material. To reduce the required time and improve the accuracy of existing methods for measuring y0, we developed a new method which uses solid-phase microextraction (SPME) to measure the concentration of an SVOC emitted by source material placed in a sealed chamber. Taking one typical indoor SVOC, di-(2-ethylhexyl) phthalate (DEHP), as the example, the experimental time was shortened from several days (even several months) to about 1 day, with relative errors of less than 5%. The measured y0 values agree well with the results obtained by independent methods. The saturated gas-phase concentration (ysat) of DEHP was also measured. Based on the Clausius-Clapeyron equation, a correlation that reveals the effects of temperature, the mass fraction of DEHP in the source material, and ysat on y0 was established. The proposed method together with the correlation should be useful in estimating and controlling human exposure to indoor DEHP. The applicability of the present approach for other SVOCs and other SVOC source materials requires further study.
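The temperature dependence underlying such a correlation can be illustrated with a Clausius-Clapeyron-type fit; the numbers below are synthetic placeholders, not the measured DEHP values.

```python
import numpy as np

# Illustrative only (synthetic numbers): fit ln(y0) = a + b * (1/T) and
# interpolate the equilibrium gas-phase concentration at another temperature.
T = np.array([288.0, 298.0, 308.0, 318.0])           # K
y0 = np.array([0.4, 1.0, 2.3, 5.0])                  # ug/m^3, synthetic
slope, intercept = np.polyfit(1.0 / T, np.log(y0), 1)
y0_303K = np.exp(intercept + slope / 303.0)
print(f"interpolated y0 at 303 K: {y0_303K:.2f} ug/m^3")
```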
Decision curve analysis: a novel method for evaluating prediction models.
Vickers, Andrew J; Elkin, Elena B
2006-01-01
Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve." The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.
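The core computation is compact; the sketch below evaluates the published net-benefit formula for a model and for the "treat all" strategy on synthetic data (the prostate-cancer dataset itself is not reproduced).

```python
import numpy as np

def net_benefit(y_true, y_prob, pt):
    """Net benefit at threshold probability pt: TP/n - FP/n * pt / (1 - pt)."""
    y_true = np.asarray(y_true, dtype=bool)
    treat = np.asarray(y_prob) >= pt
    n = y_true.size
    tp = np.sum(treat & y_true)
    fp = np.sum(treat & ~y_true)
    return tp / n - fp / n * pt / (1.0 - pt)

rng = np.random.default_rng(0)
y_prob = rng.uniform(size=2000)                 # synthetic model predictions
y_true = rng.uniform(size=2000) < y_prob        # outcomes consistent with them
prevalence = y_true.mean()
for pt in (0.1, 0.2, 0.3, 0.4):
    nb_model = net_benefit(y_true, y_prob, pt)
    nb_all = prevalence - (1 - prevalence) * pt / (1 - pt)   # "treat all" strategy
    print(f"pt={pt:.1f}  model={nb_model:+.3f}  treat-all={nb_all:+.3f}")
```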
Li, Xiaofang; Parks, Elizabeth J; McLaren, David G; Lambert, Jennifer E; Cardasis, Helene L; Chappell, Derek L; McAvoy, Thomas; Salituro, Gino; Alon, Achilles; Dennie, Justin; Chakravarthy, Manu; Shankar, Sudha S; Laterza, Omar F; Lassman, Michael E
2016-06-01
A traditional oral fatty acid challenge assesses absorption of triacylglycerol (TG) into the periphery through the intestines, but cannot distinguish the composition or source of fatty acid in the TG. Stable isotope-labeled tracers combined with LC-MRM can be used to identify and distinguish TG synthesized with dietary and stored fatty acids. Concentrations of three abundant TGs (52:2, 54:3 and 54:4) were monitored for incorporation of one or two ²H₁₁-oleate molecules per TG. This method was subjected to routine assay validation and meets typical requirements for an assay to be used to support clinical studies. Calculations for the fractional appearance rate of TG in plasma are presented along with the intracellular enterocyte precursor pool for 12 study participants.
Gene delivery by microfluidic flow-through electroporation based on constant DC and AC field.
Geng, Tao; Zhan, Yihong; Lu, Chang
2012-01-01
Electroporation is one of the most widely used physical methods to deliver exogenous nucleic acids into cells with high efficiency and low toxicity. Conventional electroporation systems typically require expensive pulse generators to provide short electrical pulses at high voltage. In this work, we demonstrate a flow-through electroporation method for continuous transfection of cells based on disposable chips, a syringe pump, and a low-cost power supply that provides a constant voltage. We successfully transfect cells using either DC or AC voltage with high flow rates (ranging from 40 µl/min to 20 ml/min) and high efficiency (up to 75%). We also enable the entire cell membrane to be uniformly permeabilized and dramatically improve gene delivery by inducing complex migrations of cells during the flow.
Systems identification using a modified Newton-Raphson method: A FORTRAN program
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.; Iliff, K. W.
1972-01-01
A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization, which typically requires several iterations. A starting technique is used which ensures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
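The iteration at the heart of such a program can be sketched in a few lines; the version below is a generic output-error Gauss-Newton (modified Newton-Raphson) step with finite-difference sensitivities and a toy model, not a translation of the FORTRAN code (which also includes the a priori-parameter penalty and the convergence-safeguarding start-up technique).

```python
import numpy as np

def gauss_newton(theta0, simulate, y_meas, weights, n_iter=15, eps=1e-6):
    """Weighted output-error fit by Gauss-Newton: finite-difference sensitivity
    matrix S, update from the normal equations (S'WS) d = S'W r."""
    theta = np.array(theta0, dtype=float)
    W = np.diag(weights)
    for _ in range(n_iter):
        y_model = simulate(theta)
        r = y_meas - y_model
        S = np.column_stack([
            (simulate(theta + eps * np.eye(theta.size)[j]) - y_model) / eps
            for j in range(theta.size)
        ])
        theta += np.linalg.solve(S.T @ W @ S, S.T @ W @ r)
    return theta

# Toy usage: recover amplitude and decay rate of an exponential response
t = np.linspace(0.0, 5.0, 100)
simulate = lambda p: p[0] * np.exp(-p[1] * t)
y_meas = simulate(np.array([1.5, 0.5]))
theta_hat = gauss_newton([1.0, 0.4], simulate, y_meas, np.ones_like(t))
print(np.round(theta_hat, 3))   # should approach [1.5, 0.5]
```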
Determination of the spin and recovery characteristics of a typical low-wing general aviation design
NASA Technical Reports Server (NTRS)
Tischler, M. B.; Barlow, J. B.
1980-01-01
The equilibrium spin technique implemented in a graphical form for obtaining spin and recovery characteristics from rotary balance data is outlined. Results of its application to recent rotary balance tests of the NASA Low-Wing General Aviation Aircraft are discussed. The present results, which are an extension of previously published findings, indicate the ability of the equilibrium method to accurately evaluate spin modes and recovery control effectiveness. A comparison of the calculated results with available spin tunnel and full scale findings is presented. The technique is suitable for preliminary design applications as determined from the available results and data base requirements. A full discussion of implementation considerations and a summary of the results obtained from this method to date are presented.
Displacement sensing system and method
VunKannon, Jr., Robert S
2006-08-08
A displacement sensing system and method addresses demanding requirements for high precision sensing of displacement of a shaft, for use typically in a linear electro-dynamic machine, having low failure rates over multi-year unattended operation in hostile environments. Applications include outer space travel by spacecraft having high-temperature, sealed environments without opportunity for servicing over many years of operation. The displacement sensing system uses a three-coil sensor configuration, including reference and sense coils, to provide a pair of ratio-metric signals, which are input to a synchronous comparison circuit and processed to determine the resultant displacement. The pair of ratio-metric signals are similarly affected by environmental conditions so that the comparison circuit is able to subtract or nullify environmental effects that would otherwise degrade accuracy.
Syed, Zeeshan; Saeed, Mohammed; Rubinfeld, Ilan
2010-01-01
For many clinical conditions, only a small number of patients experience adverse outcomes. Developing risk stratification algorithms for these conditions typically requires collecting large volumes of data to capture enough positive and negative examples for training. This process is slow, expensive, and may not be appropriate for new phenomena. In this paper, we explore different anomaly detection approaches to identify high-risk patients as cases that lie in sparse regions of the feature space. We study three broad categories of anomaly detection methods: classification-based, nearest neighbor-based, and clustering-based techniques. When evaluated on data from the National Surgical Quality Improvement Program (NSQIP), these methods were able to successfully identify patients at an elevated risk of mortality and rare morbidities following inpatient surgical procedures. PMID:21347083
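A sketch of the three detector families on a synthetic feature matrix follows (the NSQIP features and outcomes are not reproduced; scikit-learn estimators stand in for the paper's specific implementations, and higher scores mean more anomalous).

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                 # synthetic stand-in feature matrix

# 1) classification-based: one-class SVM decision function (negated)
svm_score = -OneClassSVM(nu=0.05, gamma="scale").fit(X).decision_function(X)

# 2) nearest-neighbour-based: mean distance to the k nearest neighbours
dist, _ = NearestNeighbors(n_neighbors=6).fit(X).kneighbors(X)
knn_score = dist[:, 1:].mean(axis=1)           # skip self-distance in column 0

# 3) clustering-based: distance to the closest K-means centroid
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)
kmeans_score = np.min(km.transform(X), axis=1)
```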
Navier-Stokes solution on the CYBER-203 by a pseudospectral technique
NASA Technical Reports Server (NTRS)
Lambiotte, J. J.; Hussaini, M. Y.; Bokhari, S.; Orszag, S. A.
1983-01-01
A three-level, time-split, mixed spectral/finite difference method for the numerical solution of the three-dimensional, compressible Navier-Stokes equations has been developed and implemented on the Control Data Corporation (CDC) CYBER-203. This method uses a spectral representation for the flow variables in the streamwise and spanwise coordinates, and central differences in the normal direction. The five dependent variables are interleaved one horizontal plane at a time and the array of their values at the grid points of each horizontal plane is a typical vector in the computation. The code is organized so as to require, per time step, a single forward-backward pass through the entire database. The one- and two-dimensional Fast Fourier Transforms are performed using software especially developed for the CYBER-203.
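The mixed spectral/finite-difference discretization can be illustrated with a small derivative kernel (a generic sketch, not the CYBER-203 code): Fourier differentiation in the two periodic directions and central differences in the normal direction.

```python
import numpy as np

def mixed_derivatives(f, lx, lz, dy):
    """Sketch: spectral (FFT) derivatives in periodic streamwise x and spanwise z,
    second-order central differences in the wall-normal y; f has shape (nx, ny, nz)."""
    nx, ny, nz = f.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=lx / nx)
    kz = 2j * np.pi * np.fft.fftfreq(nz, d=lz / nz)
    dfdx = np.real(np.fft.ifft(kx[:, None, None] * np.fft.fft(f, axis=0), axis=0))
    dfdz = np.real(np.fft.ifft(kz[None, None, :] * np.fft.fft(f, axis=2), axis=2))
    dfdy = np.gradient(f, dy, axis=1)   # central inside, one-sided at the walls
    return dfdx, dfdy, dfdz
```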
Natural Language Processing Methods and Systems for Biomedical Ontology Learning
Liu, Kaihong; Hogan, William R.; Crowley, Rebecca S.
2010-01-01
While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they must achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships as well as difficulty in updating the ontology as knowledge changes. Methodologies developed in the fields of natural language processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents. In this article, we review existing methodologies and developed systems, and discuss how existing methods can benefit the development of biomedical ontologies. PMID:20647054
Trading efficiency for effectiveness in similarity-based indexing for image databases
NASA Astrophysics Data System (ADS)
Barros, Julio E.; French, James C.; Martin, Worthy N.; Kelly, Patrick M.
1995-11-01
Image databases typically manage feature data that can be viewed as points in a feature space. Some features, however, can be better expressed as a collection of points or described by a probability distribution function (PDF) rather than as a single point. In earlier work we introduced a similarity measure and a method for indexing and searching the PDF descriptions of these items that guarantees an answer equivalent to sequential search. Unfortunately, certain properties of the data can restrict the efficiency of that method. In this paper we extend that work and examine trade-offs between efficiency and answer quality or effectiveness. These trade-offs reduce the amount of work required during a search by reducing the number of undesired items fetched without excluding an excessive number of the desired ones.
Numerical solutions of the complete Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1993-01-01
The objective of this study is to compare the use of assumed pdf (probability density function) approaches for modeling supersonic turbulent reacting flowfields with the more elaborate approach where the pdf evolution equation is solved. Assumed pdf approaches for averaging the chemical source terms require modest increases in CPU time, typically of the order of 20 percent, above treating the source terms as 'laminar.' However, it is difficult to assume a form for these pdf's a priori that correctly mimics the behavior of the actual pdf governing the flow. Solving the evolution equation for the pdf is a theoretically sound approach, but because of the large dimensionality of this function, its solution requires a Monte Carlo method which is computationally expensive and slow to converge. Preliminary results show both pdf approaches to yield similar solutions for the mean flow variables.
A synthetic polymer system with repeatable chemical recyclability
NASA Astrophysics Data System (ADS)
Zhu, Jian-Bo; Watson, Eli M.; Tang, Jing; Chen, Eugene Y.-X.
2018-04-01
The development of chemically recyclable polymers offers a solution to the end-of-use issue of polymeric materials and provides a closed-loop approach toward a circular materials economy. However, polymers that can be easily and selectively depolymerized back to monomers typically require low-temperature polymerization methods and also lack the physical properties and mechanical strength required for practical use. We introduce a polymer system based on γ-butyrolactone (GBL) with a trans-ring fusion at the α and β positions. Such trans-ring fusion renders the GBL ring, commonly considered nonpolymerizable, readily polymerizable at room temperature under solvent-free conditions to yield a high-molecular-weight polymer. The polymer has enhanced thermostability and can be repeatedly and quantitatively recycled back to its monomer by thermolysis or chemolysis. Mixing of the two enantiomers of the polymer generates a highly crystalline supramolecular stereocomplex.
Liu, Xiaocheng; Zhou, Yaoyu; Zhang, Jiachao; Tang, Lin; Luo, Lin; Zeng, Guangming
2017-06-21
Iron-containing metal-organic frameworks (MOFs) are gradually developing into an independent branch of environmental remediation, which requires economical, effective, low-toxicity strategies throughout the complete procedure. In this review, recent advances in structure, synthesis, and environmental applications are presented with a focus on mechanisms. Novel structural designs confer specific characteristics on different iron-containing MOFs with potential for innovation. Syntheses of typical MILs, NH2-MILs and MIL-based materials reveal the strengths and shortcomings of current methods, indicating the optimal approaches for practical requirements. The adsorption of various contaminants through multiple interactions, as well as catalytic degradation via radicals or electron-hole pairs, is reviewed. This review implies considerable prospects for iron-containing MOFs in the environmental field and offers a more comprehensive view of the challenges and potential improvements.
African Primary Care Research: Writing a research report
Mash, Bob
2014-01-01
Presenting a research report is an important way of demonstrating one's ability to conduct research and is a requirement of most research-based degrees. Although known by various names across academic institutions, the structure required is mostly very similar, being based on the Introduction, Methods, Results, Discussion format of scientific articles. This article offers some guidance on the process of writing, aimed at helping readers to start and to continue their writing, and to assist them in presenting a report that is received positively by their readers, including examiners. It also details the typical components of the research report, providing some guidelines for each, as well as the pitfalls to avoid. This article is part of a series on African Primary Care Research that aims to build capacity for research, particularly at a Master's level. PMID:26245441
Design of a steganographic virtual operating system
NASA Astrophysics Data System (ADS)
Ashendorf, Elan; Craver, Scott
2015-03-01
A steganographic file system is a secure file system whose very existence on a disk is concealed. Customarily, these systems hide an encrypted volume within unused disk blocks, slack space, or atop conventional encrypted volumes. These file systems are far from undetectable, however: aside from their ciphertext footprint, they require a software or driver installation whose presence can attract attention and invite targeted surveillance. We describe a new steganographic operating environment that requires no visible software installation, launching instead from a concealed bootstrap program that can be extracted and invoked with a chain of common Unix commands. Our system conceals its payload within innocuous files that typically contain high-entropy data, producing a footprint that is far less conspicuous than that of existing methods. The system uses a local web server to provide a file system, user interface and applications through a web architecture.
HGML: a hypertext guideline markup language.
Hagerty, C. G.; Pickens, D.; Kulikowski, C.; Sonnenberg, F.
2000-01-01
Existing text-based clinical practice guidelines can be difficult to put into practice. While a growing number of such documents have gained acceptance in the medical community and contain a wealth of valuable information, the time required to digest them is substantial. Yet the expressive power, subtlety and flexibility of natural language pose challenges when designing computer tools that will help in their application. At the same time, formal computer languages typically lack such expressiveness and the effort required to translate existing documents into these languages may be costly. We propose a method based on the mark-up concept for converting text-based clinical guidelines into a machine-operable form. This allows existing guidelines to be manipulated by machine, and viewed in different formats at various levels of detail according to the needs of the practitioner, while preserving their originally published form. PMID:11079898
NASA Astrophysics Data System (ADS)
Gliss, Jonas; Stebel, Kerstin; Kylling, Arve; Solvejg Dinger, Anna; Sihler, Holger; Sudbø, Aasmund
2017-04-01
UV SO2 cameras have become a common method for monitoring SO2 emission rates from volcanoes. Scattered solar UV radiation is measured in two wavelength windows, typically around 310 nm and 330 nm (distinct/weak SO2 absorption), using interference filters. The data analysis comprises the retrieval of plume background intensities (to calculate plume optical densities), the camera calibration (to convert optical densities into SO2 column densities) and the retrieval of gas velocities within the plume as well as the retrieval of plume distances. SO2 emission rates are then typically retrieved along a projected plume cross section, for instance a straight line perpendicular to the plume propagation direction. Today, for most of the required analysis steps, several alternatives exist due to ongoing developments and improvements related to the measurement technique. We present piscope, a cross-platform, open-source software toolbox for the analysis of UV SO2 camera data. The code is written in the Python programming language and emerged from the idea of a common analysis platform incorporating a selection of the most prevalent methods found in the literature. piscope includes several routines for plume background retrievals and routines for cell- and DOAS-based camera calibration, including two individual methods to identify the DOAS field of view (shape and position) within the camera images. Gas velocities can be retrieved either based on an optical flow analysis or using signal cross-correlation. A correction for signal dilution (due to atmospheric scattering) can be performed based on topographic features in the images. The latter requires distance retrievals to the topographic features used for the correction. These distances can be retrieved automatically on a pixel basis using intersections of individual pixel viewing directions with the local topography. The main features of piscope are presented based on a dataset recorded at Mt. Etna, Italy, in September 2015.
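For orientation, the optical-density step at the core of such an analysis reduces to a few lines (a generic numpy sketch with assumed inputs; it is not the piscope API, and the background-modelling, calibration, velocity and dilution-correction routines listed above are not shown).

```python
import numpy as np

def apparent_absorbance(img_on, img_off, bg_on, bg_off):
    """Sketch of the plume optical-density step: tau = -ln(I_plume / I_background)
    in the on- and off-band images, combined as tau_on - tau_off, which is
    proportional to the SO2 column density after calibration (not shown)."""
    tau_on = -np.log(img_on / bg_on)
    tau_off = -np.log(img_off / bg_off)
    return tau_on - tau_off
```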
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, Matthew J.; Addleman, R. Shane
Radioactive contamination in the environment, be it from accidental or intentional release, can create an urgent need to assess water and food supplies, the environment, and monitor human health. Alpha-emitting radionuclides represent the most ionizing, and therefore the most damaging, form of radiation when internalized. Additionally, because of its ease of energy attenuation in solids or liquids, alpha emissions cannot be reliably monitored using non-destructive means. In the event of such an emergency, rapid and efficient methods will be needed to screen scores of samples (food, water, and human excreta) within a short time window. Unfortunately, the assay of alpha-emitting radionuclides using traditional radioanalytical methods is typically labor intensive and time consuming. The creation of analytical counting sources typically requires a series of chemical treatment steps to achieve well-performing counting sources. In an effort to devise radioanalytical methods that are fast, require little labor, and minimize the use of toxic or corrosive agents, researchers at PNNL have evaluated magnetite (Fe3O4) nanoparticles as extracting agents for alpha-emitting radionuclides from chemically unmodified aqueous systems. It is demonstrated that bare magnetic nanoparticles exhibit high affinity for representative α-emitting radionuclides (241Am and 210Po) from representative aqueous matrices: river and ground water. Furthermore, use of the magnetic properties of these materials to concentrate the sorbed analyte from the bulk aqueous solution has been demonstrated. The nanoparticle concentrate can be either directly dispensed into scintillation cocktail, or first dissolved and then added to scintillation cocktail as a solution for alpha emission assay by liquid scintillation analysis. Despite the extreme quench caused by the metal oxide suspensions, the authors have demonstrated that quench correction features available on modern liquid scintillation analyzers can be employed to assure that quench-induced analytical biases can be avoided.
NASA Astrophysics Data System (ADS)
Guyot, A.; Ostergaard, K.; Lenkopane, M.; Fan, J.; Lockington, D. A.
2011-12-01
Estimating whole-plant water use in trees requires reliable and accurate methods. Measuring sap velocity and extrapolating to tree water use is the most commonly used approach. However, deducing tree water use from sap velocity requires an estimate of the sapwood area. This estimate is the largest source of uncertainty and can account for more than 50% of the uncertainty in the estimate of daily water use. Here, we investigate the possibility of using electrical resistivity tomography to evaluate the sapwood area distribution in a plantation of Pinus elliottii. Electrical resistivity tomographs of Pinus elliottii show a very typical pattern of electrical resistivity, which is highly correlated with the sapwood and heartwood distribution. To identify the key factors controlling the variation of electrical resistivity, cross sections at breast height for ten trees were monitored with electrical resistivity tomography. The trees were cut down after the experiment to identify the heartwood/sapwood boundaries and to extract wood and sap samples. pH, electrolyte concentration, and wood moisture content were then analysed for these samples. The results show that the heartwood/sapwood patterns are highly correlated with electrical resistivity, and that wood moisture content is the most influential factor controlling the variability of the patterns. These results show that electrical resistivity tomography could be used as a powerful tool to identify the sapwood area, and thus be used in combination with sap flow sensors to map tree water use at the stand scale. However, while Pinus elliottii shows typical patterns, further work is needed to determine whether there are species-specific characteristics, as shown in previous works.
Ayyub, Bilal M
2014-02-01
The United Nations Office for Disaster Risk Reduction reported that the 2011 natural disasters, including the earthquake and tsunami that struck Japan, resulted in $366 billion in direct damages and 29,782 fatalities worldwide. Storms and floods accounted for up to 70% of the 302 natural disasters worldwide in 2011, with earthquakes producing the greatest number of fatalities. Average annual losses in the United States amount to about $55 billion. Enhancing community and system resilience could lead to massive savings through risk reduction and expeditious recovery. The rational management of such reduction and recovery is facilitated by an appropriate definition of resilience and associated metrics. In this article, a resilience definition is provided that meets a set of requirements with clear relationships to the metrics of the relevant abstract notions of reliability and risk. Those metrics also meet logically consistent requirements drawn from measure theory, and provide a sound basis for the development of effective decision-making tools for multihazard environments. Improving the resiliency of a system to meet target levels requires the examination of system enhancement alternatives in economic terms, within a decision-making framework. Relevant decision analysis methods would typically require the examination of resilience based on its valuation by society at large. The article provides methods for valuation and benefit-cost analysis based on concepts from risk analysis and management. © 2013 Society for Risk Analysis.
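As a rough illustration of the kind of benefit-cost reasoning referred to above, the sketch below discounts an assumed reduction in expected annual losses over a planning horizon and divides by the enhancement cost. The figures and the simple present-value treatment are invented for the example and are not the article's valuation framework.

```python
def benefit_cost_ratio(annual_loss_baseline, annual_loss_enhanced,
                       enhancement_cost, discount_rate, horizon_years):
    """Benefit-cost ratio of a resilience enhancement: present value of the
    expected annual loss reduction over the planning horizon, divided by the
    upfront cost. Illustrative only; a full analysis would also value recovery
    time, indirect losses, and multiple hazards."""
    annual_benefit = annual_loss_baseline - annual_loss_enhanced
    pv_factor = sum(1.0 / (1.0 + discount_rate) ** t
                    for t in range(1, horizon_years + 1))
    return annual_benefit * pv_factor / enhancement_cost

# e.g. a retrofit reducing expected annual losses from $10M to $6M
print(benefit_cost_ratio(10e6, 6e6, 30e6, 0.03, 50))
```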
18 CFR 707.8 - Typical classes of action requiring similar treatment under NEPA.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 2 2013-04-01 2012-04-01 true Typical classes of... Resources WATER RESOURCES COUNCIL COMPLIANCE WITH THE NATIONAL ENVIRONMENTAL POLICY ACT (NEPA) Water... submittal of regional water resources management plans (comprehensive, coordinated, joint plans or elements...
This image, looking south, shows a typical corridor in the ...
This image, looking south, shows a typical corridor in the laboratory area of the building, where numerous pipes were required to carry the various utilities needed for procedure and safety equipment - Department of Energy, Mound Facility, Electronics Laboratory Building (E Building), One Mound Road, Miamisburg, Montgomery County, OH
Preconditioned conjugate gradient wave-front reconstructors for multiconjugate adaptive optics.
Gilles, Luc; Ellerbroek, Brent L; Vogel, Curtis R
2003-09-10
Multiconjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wave-front control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers, very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10^-2 Hz, i.e., 4-5 orders of magnitude lower than the typical 10^3 Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborate multiresolution preconditioner.
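For readers unfamiliar with the core solver, the following sketch shows a generic preconditioned conjugate gradient iteration in Python. The diagonal (Jacobi) preconditioner in the toy example merely stands in for the multigrid or block symmetric Gauss-Seidel preconditioners described above; the matrix and dimensions are invented for illustration.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for A x = b (A symmetric positive
    definite). M_inv applies the preconditioner, e.g. a block Gauss-Seidel
    sweep or a multigrid cycle in the MCAO setting."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# toy example with a Jacobi (diagonal) preconditioner as a stand-in
n = 500
rng = np.random.default_rng(0)
R = 0.01 * rng.standard_normal((n, n))
A = np.diag(np.linspace(1.0, 100.0, n)) + R @ R.T   # symmetric positive definite
b = np.ones(n)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
```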
MOCCS: Clarifying DNA-binding motif ambiguity using ChIP-Seq data.
Ozaki, Haruka; Iwasaki, Wataru
2016-08-01
As a key mechanism of gene regulation, transcription factors (TFs) bind to DNA by recognizing specific short sequence patterns that are called DNA-binding motifs. A single TF can accept ambiguity within its DNA-binding motifs, which comprise both canonical (typical) and non-canonical motifs. Clarification of such DNA-binding motif ambiguity is crucial for revealing gene regulatory networks and evaluating mutations in cis-regulatory elements. Although chromatin immunoprecipitation sequencing (ChIP-Seq) now provides abundant data on the genomic sequences to which a given TF binds, existing motif discovery methods are unable to directly answer whether a given TF can bind to a specific DNA-binding motif. Here, we report MOCCS, a method for clarifying DNA-binding motif ambiguity. Given ChIP-Seq data for any TF, MOCCS comprehensively analyzes and describes every k-mer to which that TF binds. Analysis of simulated datasets revealed that MOCCS is applicable to various ChIP-Seq datasets, requiring only a few minutes per dataset. Application to the ENCODE ChIP-Seq datasets proved that MOCCS directly evaluates whether a given TF binds to each DNA-binding motif, even if known position weight matrix models do not provide sufficient information on DNA-binding motif ambiguity. Furthermore, users are not required to provide numerous parameters or background genomic sequence models that are typically unavailable. MOCCS is implemented in Perl and R and is freely available via https://github.com/yuifu/moccs. By complementing existing motif-discovery software, MOCCS will contribute to the basic understanding of how the genome controls diverse cellular processes via DNA-protein interactions. Copyright © 2016 Elsevier Ltd. All rights reserved.
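The sketch below shows the kind of k-mer bookkeeping such an analysis starts from: enumerating and counting every k-mer in sequences around binding sites. It is not MOCCS itself, which additionally scores how sharply each k-mer's occurrences concentrate around peak centres; the sequences and the choice of k are placeholders.

```python
from collections import Counter

def count_kmers(sequences, k=6):
    """Count every k-mer occurring in a set of sequences (e.g. fixed-length
    windows centred on ChIP-Seq peaks)."""
    counts = Counter()
    for seq in sequences:
        seq = seq.upper()
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return counts

# toy usage: rank k-mers by count as a crude proxy for binding preference
peaks = ["ACGTGACGTGTT", "TTACGTGACGTA", "GGGACGTGATTT"]
print(count_kmers(peaks, k=6).most_common(3))
```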
NASA Astrophysics Data System (ADS)
Gottwald, Georg A.; Wormell, J. P.; Wouters, Jeroen
2016-09-01
Using a sensitive statistical test we determine whether or not one can detect the breakdown of linear response given observations of deterministic dynamical systems. A goodness-of-fit statistic is developed for a linear statistical model of the observations, based on results for central limit theorems for deterministic dynamical systems, and used to detect linear response breakdown. We apply the method to discrete maps which do not obey linear response and show that the successful detection of breakdown depends on the length of the time series, the magnitude of the perturbation, and the choice of the observable. We find that in order to reliably reject the assumption of linear response for typical observables, sufficiently large data sets are needed. Even for simple systems such as the logistic map, one needs of the order of 10^6 observations to reliably detect the breakdown with a confidence level of 95%; if fewer observations are available one may be falsely led to conclude that linear response theory is valid. The smaller the applied perturbation, the larger the amount of data required. For judiciously chosen observables the necessary amount of data can be drastically reduced, but this requires detailed a priori knowledge about the invariant measure, which is typically not available for complex dynamical systems. Furthermore we explore the use of the fluctuation-dissipation theorem (FDT) in cases with limited data length or coarse-graining of observations. The FDT, if applied naively to a system without linear response, is shown to be very sensitive to the details of the sampling method, resulting in erroneous predictions of the response.
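The following sketch illustrates the naive finite-difference estimate of the response of a time-averaged observable for the logistic map; its strong dependence on the data length and perturbation size is what motivates the statistical test described above. It does not implement the goodness-of-fit statistic itself, and the parameter values are illustrative.

```python
import numpy as np

def logistic_orbit(r, x0=0.3, n=10**6, burn=1000):
    """Time series of the logistic map x_{k+1} = r x_k (1 - x_k)."""
    x = x0
    out = np.empty(n)
    for k in range(burn + n):
        x = r * x * (1.0 - x)
        if k >= burn:
            out[k - burn] = x
    return out

def mean_response(r0, eps, observable=lambda x: x, n=10**6):
    """Finite-difference estimate of the response of the time-averaged
    observable to a parameter perturbation of size eps."""
    m0 = observable(logistic_orbit(r0, n=n)).mean()
    m1 = observable(logistic_orbit(r0 + eps, n=n)).mean()
    return (m1 - m0) / eps

# the estimate fluctuates strongly with n and eps when linear response fails
for eps in (1e-2, 1e-3, 1e-4):
    print(eps, mean_response(3.8, eps, n=10**5))
```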
Low-dimensional, morphologically accurate models of subthreshold membrane potential
Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.
2009-01-01
The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. For, in order to understand how a cell distinguishes between input patterns, we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate-and-fire model. PMID:19172386
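As an illustration of one of the reduction tools mentioned (Balanced Truncation), the sketch below applies the generic square-root algorithm to a random stable linear system. It is not the quasi-active neuron pipeline of the paper; the toy system and the small jitter added before the Cholesky factorizations are assumptions of the example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce the stable LTI system (A, B, C) to order r, keeping the states
    with the largest Hankel singular values (square-root method)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    jitter = 1e-12 * np.eye(A.shape[0])             # guards against round-off
    Lc = cholesky(Wc + jitter, lower=True)
    Lo = cholesky(Wo + jitter, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                       # s = Hankel singular values
    S = np.diag(1.0 / np.sqrt(s[:r]))
    T = Lc @ Vt[:r].T @ S                           # reduction basis
    Tinv = S @ U[:, :r].T @ Lo.T
    return Tinv @ A @ T, Tinv @ B, C @ T, s

# toy usage: reduce a random stable 30-state system to 6 states
n, rng = 30, np.random.default_rng(1)
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # make stable
B, C = rng.standard_normal((n, 2)), rng.standard_normal((2, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=6)
```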
NASA Astrophysics Data System (ADS)
Stewart, James M. P.; Ansell, Steve; Lindsay, Patricia E.; Jaffray, David A.
2015-12-01
Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre-scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high-performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy, with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm.
System analysis of vehicle active safety problem
NASA Astrophysics Data System (ADS)
Buznikov, S. E.
2018-02-01
The problem of road transport safety affects the vital interests of most of the population and is of global significance. A system analysis of the problem of creating competitive active vehicle safety systems is presented as an interrelated complex of tasks of multi-criterion optimization and dynamic stabilization of the state variables of a controlled object. Solving them requires the generation of all possible variants of technical solutions within the software and hardware domains and the synthesis of a control that is close to optimal. To implement the system analysis task, the Zwicky “morphological box” method is used. Creation of comprehensive active safety systems involves solving the problem of preventing typical collisions. To solve it, a structured set of collisions is introduced, with its elements also generated using the Zwicky “morphological box” method. The obstacle speed, the longitudinal acceleration of the controlled object, unpredictable changes in its movement direction due to certain faults, the road surface condition, and the control errors are taken as structure variables that characterize the conditions of collisions. The conditions for preventing typical collisions are presented as inequalities for physical variables that define the state vector of the object and its dynamic limits.
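A minimal sketch of how a Zwicky morphological box can be enumerated in code is given below. The structure variables and their values are illustrative stand-ins, not the exact set used in the study.

```python
from itertools import product

# Illustrative morphological box: every combination of the structure
# variables yields one collision scenario to analyse.
morphological_box = {
    "obstacle_speed": ["stationary", "slower", "oncoming"],
    "longitudinal_acceleration": ["braking", "constant", "accelerating"],
    "direction_change": ["none", "drift_due_to_fault"],
    "road_surface": ["dry", "wet", "icy"],
    "control_error": ["none", "delayed", "excessive"],
}

variants = [dict(zip(morphological_box, combo))
            for combo in product(*morphological_box.values())]
print(len(variants), "collision scenarios to analyse")  # 3*3*2*3*3 = 162
```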
QUICR-learning for Multi-Agent Coordination
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2006-01-01
Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t' > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second credit assignment problem is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards learning" (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a tenfold increase in performance over existing methods.
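The sketch below pairs a standard tabular Q-learning update with a single-time-step counterfactual (difference) reward, which conveys the idea of addressing both credit assignment problems; it is not the QUICR-learning formulation itself, and all quantities are placeholders.

```python
import numpy as np

def difference_reward(global_reward, counterfactual_reward):
    """Single-time-step counterfactual (difference) reward for agent i:
    the system reward minus the reward the system would have received had
    agent i's contribution been removed or replaced by a default action."""
    return global_reward - counterfactual_reward

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning update, here driven by the agent-specific
    counterfactual reward rather than the raw global reward."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

# toy usage: 5 states, 2 actions
Q = np.zeros((5, 2))
r_i = difference_reward(global_reward=10.0, counterfactual_reward=7.5)
q_update(Q, state=0, action=1, reward=r_i, next_state=2)
```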
How much medicine do spine surgeons need to know to better select and care for patients?
Epstein, Nancy E.
2012-01-01
Background: Although we routinely utilize medical consultants for preoperative clearance and postoperative patient follow-up, we as spine surgeons need to know more medicine to better select and care for our patients. Methods: This study provides additional medical knowledge to facilitate surgeons’ “cross-talk” with medical colleagues who are concerned about how multiple comorbid risk factors affect their preoperative clearance, and impact patients’ postoperative outcomes. Results: Within 6 months of an acute myocardial infarction (MI), patients undergoing urological surgery encountered a 40% mortality rate; similar rates likely apply to patients undergoing spinal surgery. Within 6 weeks to 2 months of placing uncoated cardiac, carotid, or other stents, endothelialization is typically complete; as anti-platelet therapy may often be discontinued, spinal surgery can then be more safely performed. Coated stents, however, usually require 6 months to 1 year for endothelialization to occur; thus spinal surgery is often delayed, as anti-platelet therapy must typically be continued to avoid thrombotic complications (e.g., stroke/MI). Diabetes and morbid obesity both increase the risk of postoperative infection and poor wound healing, while the latter increases the risk of phlebitis/pulmonary embolism. Both hypercoagulation and hypocoagulation syndromes may require special preoperative testing/medications and/or transfusions of specific hematological factors. Pulmonary disease, neurological disorders, and major psychiatric pathology may also require further evaluations/therapy, and may even preclude successful surgical intervention. Conclusions: Although we as spinal surgeons utilize medical consultants for preoperative clearance and postoperative care, we need to know more medicine to better select and care for our patients. PMID:23248752
NASA Technical Reports Server (NTRS)
Cornelius, Michael; Smartt, Ziba; Henrie, Vaughn; Johnson, Mont
2003-01-01
The recent developments in Fabry-Perot fiber optic instruments have resulted in accurate transducers with some of the physical characteristics required for use in obtaining internal data from solid rocket motors. These characteristics include small size, non-electrical excitation, and immunity to electro-magnetic interference. These transducers have not previously been utilized in this environment due to the high temperatures typically encountered. A series of tests were conducted using an 11-inch hybrid test bed to develop installation techniques that will allow the fiber optic instruments to survive and obtain data for a short period of time following motor ignition. The installation methods developed during this test series have the potential to allow data to be acquired in the motor chamber, propellant bore, and nozzle during the ignition transient. These measurements would prove to be very useful in the characterization of current motor designs and provide insight into the requirements for further refinements. The process of developing these protective methods and the installation techniques used to apply them is summarized.
NASA Technical Reports Server (NTRS)
Hozman, Aron D.; Hughes, William O.
2014-01-01
The exposure of a customer's aerospace test-article to a simulated acoustic launch environment is typically performed in a reverberant acoustic test chamber. The acoustic pre-test runs that will ensure that the sound pressure levels of this environment can indeed be met by a test facility are normally performed without a test-article dynamic simulator of representative acoustic absorption and size. If an acoustic test facility's available acoustic power capability becomes maximized with the test-article installed during the actual test then the customer's environment requirement may become compromised. In order to understand the risk of not achieving the customer's in-tolerance spectrum requirement with the test-article installed, an acoustic power margin evaluation as a function of frequency may be performed by the test facility. The method for this evaluation of acoustic power will be discussed in this paper. This method was recently applied at the NASA Glenn Research Center Plum Brook Station's Reverberant Acoustic Test Facility for the SpaceX Falcon 9 Payload Fairing acoustic test program.
Analyzing neural responses with vector fields.
Buneo, Christopher A
2011-04-15
Analyzing changes in the shape and scale of single cell response fields is a key component of many neurophysiological studies. Typical analyses of shape change involve correlating firing rates between experimental conditions or "cross-correlating" single cell tuning curves by shifting them with respect to one another and correlating the overlapping data. Such shifting results in a loss of data, making interpretation of the resulting correlation coefficients problematic. The problem is particularly acute for two dimensional response fields, which require shifting along two axes. Here, an alternative method for quantifying response field shape and scale based on correlation of vector field representations is introduced. The merits and limitations of the methods are illustrated using both simulated and experimental data. It is shown that vector correlation provides more information on response field changes than scalar correlation without requiring field shifting and concomitant data loss. An extension of this vector field approach is also demonstrated which can be used to identify the manner in which experimental variables are encoded in studies of neural reference frames. Copyright © 2011 Elsevier B.V. All rights reserved.
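One simple way to correlate two vector fields is sketched below, treating each vector as a complex number and taking the magnitude of the complex correlation. This particular index and the synthetic response maps are assumptions of the example, not necessarily the measure used in the study.

```python
import numpy as np

def vector_field_correlation(U1, V1, U2, V2):
    """Correlation between two 2-D vector fields, with each vector written as
    u + iv. The magnitude of the complex correlation reflects agreement in
    both direction and relative length; no field shifting is required."""
    z1 = (U1 + 1j * V1).ravel()
    z2 = (U2 + 1j * V2).ravel()
    z1 = z1 - z1.mean()
    z2 = z2 - z2.mean()
    num = np.abs(np.vdot(z1, z2))
    den = np.sqrt(np.vdot(z1, z1).real * np.vdot(z2, z2).real)
    return num / den

# toy usage: compare the gradient fields of two simulated response maps
y, x = np.mgrid[0:20, 0:20]
f1 = np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 20.0)
f2 = np.exp(-((x - 12) ** 2 + (y - 9) ** 2) / 20.0)   # shifted response field
print(vector_field_correlation(*np.gradient(f1), *np.gradient(f2)))
```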
Processes involved in the development of latent fingerprints using the cyanoacrylate fuming method.
Lewis, L A; Smithwick, R W; Devault, G L; Bolinger, B; Lewis, S A
2001-03-01
Chemical processes involved in the development of latent fingerprints using the cyanoacrylate fuming method have been studied. Two major types of latent prints have been investigated: clean and oily prints. Scanning electron microscopy (SEM) has been used as a tool for determining the morphology of the polymer developed separately on clean and oily prints after cyanoacrylate fuming. A correlation between the chemical composition of an aged latent fingerprint, prior to development, and the quality of a developed fingerprint has been observed in the morphology. The moisture in the print prior to fuming has been found to be more important than the moisture in the air during fuming for the development of a useful latent print. In addition, the amount of time required to develop a high-quality latent print has been found to be within 2 min. The cyanoacrylate polymerization process is extremely rapid. When heat is used to accelerate the fuming process, typically a period of 2 min is required to develop the print. The optimum development time depends upon the concentration of cyanoacrylate vapors within the enclosure.
The risks associated with falling parts of glazed facades in case of fire
NASA Astrophysics Data System (ADS)
Sędłak, Bartłomiej; Kinowski, Jacek; Sulik, Paweł; Kimbar, Grzegorz
2018-05-01
Arguably, one of the most important requirements a building has to meet in case of fire is to ensure the safe evacuation of its users and the work of rescue teams. Consequently, issues related to the risks associated with falling parts of facades are fairly well known around Europe. Even though these risks are not as well defined as other fire safety requirements concerning glazed facades, there are plenty of test methods for assessing facades with regard to falling parts, mostly based on an approach related to fire spread. In this paper a selection of test methods for assessing facades with regard to falling parts is briefly presented. However, the focus of this work is on a fire test of a typical glazed façade segment performed in the ITB Laboratory. The results of the test positively verify the conjecture that solutions with glass units configured with thin, tempered glass panes on the outer side should pose no threat. However, the question has been raised whether the behaviour of other glass unit solutions (with additional coatings or laminated) would be similar.
Feature-based pairwise retinal image registration by radial distortion correction
NASA Astrophysics Data System (ADS)
Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.
2007-03-01
Fundus camera imaging is widely used to document disorders such as diabetic retinopathy and macular degeneration. Multiple retinal images can be combined together through a procedure known as mosaicing to form an image with a larger field of view. Mosaicing typically requires multiple pairwise registrations of partially overlapped images. We describe a new method for pairwise retinal image registration. The proposed method is unique in that the radial distortion due to image acquisition is corrected prior to the geometric transformation. Vessel lines are detected using the Hessian operator and are used as input features to the registration. Since the overlapping region is typically small in a retinal image pair, only a few correspondences are available, thus limiting the applicable model to an affine transform at best. To recover the distortion due to the curved surface of the retina and the lens optics, a combined approach of an affine model with a radial distortion correction is proposed. The parameters of the image acquisition and radial distortion models are estimated during an optimization step that uses Powell's method driven by the vessel line distance. Experimental results using 20 pairs of green channel images acquired from three subjects with a fundus camera confirmed that the affine model with distortion correction could register retinal image pairs to within 1.88 ± 0.35 pixels accuracy (mean ± standard deviation) assessed by vessel line error, which is 17% better than the affine-only approach. Because the proposed method needs only two correspondences, it can be applied to obtain good registration accuracy even in the case of small overlap between retinal image pairs.
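A hedged sketch of the combined model is given below: a first-order radial distortion correction followed by a 2-D affine transform, with parameters estimated by Powell's method on point correspondences. The distortion model, the point-distance cost (standing in for the vessel-line distance), and the synthetic data are simplifications of the approach described above.

```python
import numpy as np
from scipy.optimize import minimize

def undistort(points, k1, centre):
    """First-order radial distortion correction about an image centre
    (one common model; the cited work estimates its own acquisition model)."""
    d = points - centre
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return centre + d * (1.0 + k1 * r2)

def affine(points, params):
    """2-D affine transform with parameters [a, b, c, d, tx, ty]."""
    a, b, c, d, tx, ty = params
    return points @ np.array([[a, b], [c, d]]).T + np.array([tx, ty])

def registration_error(theta, fixed_pts, moving_pts, centre):
    """Mean distance between correspondences after radial correction followed
    by the affine transform (a stand-in for the vessel-line distance)."""
    mapped = affine(undistort(moving_pts, theta[0], centre), theta[1:])
    return np.mean(np.linalg.norm(mapped - fixed_pts, axis=1))

# synthetic correspondences (in practice these come from detected vessel lines)
rng = np.random.default_rng(0)
centre = np.array([256.0, 256.0])
moving = rng.uniform(64, 448, (40, 2))
true_theta = np.array([2e-7, 1.01, 0.02, -0.02, 1.0, 3.0, -2.0])
fixed = affine(undistort(moving, true_theta[0], centre), true_theta[1:])
theta0 = np.array([0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0])
res = minimize(registration_error, theta0,
               args=(fixed, moving, centre), method="Powell")
```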
Zhang, Hongshen; Chen, Ming
2013-11-01
In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and the sustainable development of China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and challenges of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The recycling industry was found to respond well to all the factors and to face good development opportunities. Then, a cross-link strategy analysis for typical exterior parts in China's passenger car industry was conducted based on the SWOT analysis strategies and the established SWOT matrix. Finally, based on the aforementioned research, the recycling industry model led by automobile manufacturers was promoted. Copyright © 2013 Elsevier Ltd. All rights reserved.
Camerini, Serena; Montepeloso, Emanuela; Casella, Marialuisa; Crescenzi, Marco; Marianella, Rosa Maria; Fuselli, Fabio
2016-04-15
Ricotta cheese is a typical Italian product, made with whey from various species, including cow, buffalo, sheep, and goat. Ricotta cheese nominally manufactured from the last three species may be fraudulently produced using the comparatively cheaper cow whey. Exposing such food frauds requires a reliable analytical method. Despite the extensive similarities shared by whey proteins of the four species, a mass spectrometry-based analytical method was developed that exploits three species-specific peptides derived from β-lactoglobulin and α-lactalbumin. This method can detect as little as 0.5% bovine whey in ricotta cheese from the other three species. Furthermore, a tight correlation was found (R^2 > 0.99) between cow whey percentages and mass spectrometry measurements throughout the 1-50% range. Thus, this method can be used for forensic detection of ricotta cheese adulteration and, if properly validated, to provide quantitative evaluations. Copyright © 2015 Elsevier Ltd. All rights reserved.
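The reported tight linear relation suggests a straightforward calibration-curve quantitation, sketched below with invented numbers; the published calibration data are not reproduced here.

```python
import numpy as np

# Hypothetical calibration data: % cow whey spiked into sheep ricotta vs. the
# measured marker-peptide signal ratio (values invented purely for illustration).
cow_whey_pct = np.array([1, 2, 5, 10, 20, 35, 50], dtype=float)
signal_ratio = np.array([0.012, 0.023, 0.055, 0.11, 0.21, 0.37, 0.52])

slope, intercept = np.polyfit(cow_whey_pct, signal_ratio, 1)
pred = slope * cow_whey_pct + intercept
r2 = 1 - np.sum((signal_ratio - pred) ** 2) / np.sum((signal_ratio - signal_ratio.mean()) ** 2)

def estimate_cow_whey(measured_ratio):
    """Invert the linear calibration to estimate % cow whey in a sample."""
    return (measured_ratio - intercept) / slope

print(f"R^2 = {r2:.4f}, estimate for ratio 0.15: {estimate_cow_whey(0.15):.1f}%")
```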