Project W-314 specific test and evaluation plan for transfer line SN-633 (241-AX-B to 241-AY-02A)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hays, W.H.
1998-03-20
The purpose of this Specific Test and Evaluation Plan (STEP) is to provide a detailed written plan for the systematic testing of modifications made by the addition of the SN-633 transfer line by the W-314 Project. The STEP develops the outline for test procedures that verify the system's performance to the established Project design criteria. The STEP is a lower tier document based on the W-314 Test and Evaluation Plan (TEP). This STEP encompasses all testing activities required to demonstrate compliance to the project design criteria as it relates to the addition of transfer line SN-633. The Project Design Specifications (PDS) identify the specific testing activities required for the Project. Testing includes Validations and Verifications (e.g., Commercial Grade Item Dedication activities), Factory Acceptance Tests (FATs), installation tests and inspections, Construction Acceptance Tests (CATs), Acceptance Test Procedures (ATPs), Pre-Operational Test Procedures (POTPs), and Operational Test Procedures (OTPs). It should be noted that POTPs are not required for testing of the transfer line addition. The STEP will be utilized in conjunction with the TEP for verification and validation.
46 CFR 72.01-25 - Additional structural requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CONSTRUCTION AND ARRANGEMENT Hull Structure § 72.01-25 Additional structural requirements. (a) Vessels required.... The construction of the bulkheads shall be to the satisfaction of the Commandant. (2) Steps and... deck, such bulkhead or deck shall be made structurally watertight without the use of wood, cement, or...
Code of Federal Regulations, 2010 CFR
2010-04-01
... members are required to apportion the amount of the additional tax using the proportionate method... proportionate method, the additional tax is allocated to each component member in the same proportion as the... steps for applying the proportionate method of allocation are as follows: (1) Step 1. The regular tax...
Monteiro, Kristina A; George, Paul; Dollase, Richard; Dumenco, Luba
2017-01-01
The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensure Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in decision-making by residency directors because of increasing undergraduate medical enrollment and limited available residency vacancies. We created and validated a regression equation to predict students' Step 2 CK scores from previous academic indicators to identify students at risk, with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam score, National Board of Medical Examination subject examinations, and USMLE Step 1 and Step 2 CK between 2011 and 2013 were used in analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%-69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK score within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from additional support before taking USMLE Step 2 CK.
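The first prediction model described above (Step 2 CK regressed on preclinical mean course exam score and Step 1 score) can be illustrated with an ordinary least-squares sketch. The data below are synthetic and the coefficients, score scales, and at-risk cutoff are arbitrary assumptions for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic academic indicators (NOT the study's data): preclinical mean
# course exam score and USMLE Step 1 score for 200 hypothetical students.
n = 200
preclinical = rng.normal(80, 6, n)   # percent-scale course mean (assumed)
step1 = rng.normal(230, 18, n)       # Step 1 score (assumed scale)

# Assume a linear data-generating process plus noise, for illustration only.
step2_ck = 20 + 0.8 * preclinical + 0.6 * step1 + rng.normal(0, 8, n)

# Fit Step 2 CK ~ preclinical + Step 1 by ordinary least squares.
X = np.column_stack([np.ones(n), preclinical, step1])
coef, *_ = np.linalg.lstsq(X, step2_ck, rcond=None)

pred = X @ coef
residuals = step2_ck - pred
r_squared = 1 - residuals.var() / step2_ck.var()

# Flag "at-risk" students whose predicted score falls in the bottom decile
# (a hypothetical screening cutoff, not the authors' criterion).
at_risk = pred < np.percentile(pred, 10)
```

In practice the fitted equation would be validated on a held-out cohort, as the authors did, by comparing predicted and observed Step 2 CK scores.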
Connecticut Department of Transportation safety techniques enhancement plan.
DOT National Transportation Integrated Search
2015-03-15
The Highway Safety Manual (HSM) defines a six-step cycle of safety management processes. This report evaluates how well the Connecticut Department of Transportation conforms to the six safety management steps. The methods recommended in the HSM require additional...
Trace mineral feeding and assessment.
Swecker, William S
2014-11-01
This article gives practitioners an overview of trace mineral requirements, supplementation, and assessment in dairy herds. In addition, a step-by-step guideline for liver biopsy in cows is provided with interpretive results from a sample herd. Copyright © 2014 Elsevier Inc. All rights reserved.
Full-waveform data for building roof step edge localization
NASA Astrophysics Data System (ADS)
Słota, Małgorzata
2015-08-01
Airborne laser scanning data perfectly represent flat or gently sloped areas; to date, however, accurate breakline detection is the main drawback of this technique. This issue becomes particularly important in the case of modeling buildings, where accuracy higher than the footprint size is often required. This article covers several issues related to full-waveform data registered on building step edges. First, the full-waveform data simulator was developed and presented in this paper. Second, this article provides a full description of the changes in echo amplitude, echo width and returned power caused by the presence of edges within the laser footprint. Additionally, two important properties of step edge echoes, peak shift and echo asymmetry, were noted and described. It was shown that these properties lead to incorrect echo positioning along the laser center line and can significantly reduce the edge points' accuracy. For these reasons and because all points are aligned with the center of the beam, regardless of the actual target position within the beam footprint, we can state that step edge points require geometric corrections. This article presents a novel algorithm for the refinement of step edge points. The main distinguishing advantage of the developed algorithm is the fact that none of the additional data, such as emitted signal parameters, beam divergence, approximate edge geometry or scanning settings, are required. The proposed algorithm works only on georeferenced profiles of reflected laser energy. Another major advantage is the simplicity of the calculation, allowing for very efficient data processing. Additionally, the developed method of point correction allows for the accurate determination of points lying on edges and edge point densification. For this reason, fully automatic localization of building roof step edges based on LiDAR full-waveform data with higher accuracy than the size of the lidar footprint is feasible.
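The peak-shift effect described above can be reproduced with a toy full-waveform model: when a step edge splits the footprint, the received waveform is the superposition of two surface returns, and its maximum falls between them. The Gaussian pulse shape, amplitudes, and widths below are arbitrary assumptions, not the paper's simulator parameters.

```python
import numpy as np

# Time axis for one received waveform (arbitrary units).
t = np.linspace(0, 20, 4001)

def gaussian_echo(t, center, amplitude, width):
    """Single-surface return modeled as a Gaussian pulse (an assumption)."""
    return amplitude * np.exp(-0.5 * ((t - center) / width) ** 2)

# A step edge inside the footprint splits the return into two echoes:
# part of the pulse reflects from the roof, part from the ground below.
roof = gaussian_echo(t, center=8.0, amplitude=0.6, width=1.0)
ground = gaussian_echo(t, center=10.0, amplitude=0.4, width=1.0)
combined = roof + ground

# The merged echo peaks between the two surface returns, so a detector
# that places the point at the waveform maximum (along the beam center
# line) mislocates the edge -- the geometric error the refinement
# algorithm is meant to correct.
peak_time = t[np.argmax(combined)]
```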
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false What are the additional requirements... knowledge and belief after I have taken reasonable and appropriate steps to verify the accuracy thereof. I... States Code, section 1001, the penalty for furnishing false, incomplete or misleading information in this...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What are the additional requirements... knowledge and belief after I have taken reasonable and appropriate steps to verify the accuracy thereof. I... States Code, section 1001, the penalty for furnishing false, incomplete or misleading information in this...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false What are the additional requirements... knowledge and belief after I have taken reasonable and appropriate steps to verify the accuracy thereof. I... States Code, section 1001, the penalty for furnishing false, incomplete or misleading information in this...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false What are the additional requirements... knowledge and belief after I have taken reasonable and appropriate steps to verify the accuracy thereof. I... States Code, section 1001, the penalty for furnishing false, incomplete or misleading information in this...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false What are the additional requirements... knowledge and belief after I have taken reasonable and appropriate steps to verify the accuracy thereof. I... States Code, section 1001, the penalty for furnishing false, incomplete or misleading information in this...
Step 1: C3 Flight Demo Data Analysis Plan
NASA Technical Reports Server (NTRS)
2005-01-01
The Data Analysis Plan (DAP) describes the data analysis that the C3 Work Package (WP) will perform in support of the Access 5 Step 1 C3 flight demonstration objectives, as well as the processes that will be used by the Flight IPT to gather and distribute the data collected to satisfy those objectives. In addition to C3 requirements, this document will encompass some Human Systems Interface (HSI) requirements in performing the C3 flight demonstrations. The C3 DAP will be used as the primary interface requirements document between the C3 Work Package and Flight Test organizations (Flight IPT and Non-Access 5 Flight Programs). In addition to providing data requirements for Access 5 flight test (piggyback technology demonstration flights, dedicated C3 technology demonstration flights, and Airspace Operations Demonstration flights), the C3 DAP will be used to request flight data from Non-Access 5 flight programs for C3-related data products.
46 CFR 163.003-13 - Construction.
Code of Federal Regulations, 2010 CFR
2010-10-01
... that can be used for attaching additional ladder sections. (c) Steps. Pilot ladder steps must meet the... instead of the orange color required under paragraph (c)(8) of this section, and must have the special.... A pilot ladder must not have splinters, burrs, sharp edges, corners, projections, or other defects...
Immobilization techniques to avoid enzyme loss from oxidase-based biosensors: a one-year study.
House, Jody L; Anderson, Ellen M; Ward, W Kenneth
2007-01-01
Continuous amperometric sensors that measure glucose or lactate require a stable sensitivity, and glutaraldehyde crosslinking has been used widely to avoid enzyme loss. Nonetheless, little data is published on the effectiveness of enzyme immobilization with glutaraldehyde. A combination of electrochemical testing and spectrophotometric assays was used to study the relationship between enzyme shedding and the fabrication procedure. In addition, we studied the relationship between the glutaraldehyde concentration and sensor performance over a period of one year. The enzyme immobilization process by glutaraldehyde crosslinking to glucose oxidase appears to require at least 24-hours at room temperature to reach completion. In addition, excess free glucose oxidase can be removed by soaking sensors in purified water for 20 minutes. Even with the addition of these steps, however, it appears that there is some free glucose oxidase entrapped within the enzyme layer which contributes to a decline in sensitivity over time. Although it reduces the ultimate sensitivity (probably via a change in the enzyme's natural conformation), glutaraldehyde concentration in the enzyme layer can be increased in order to minimize this instability. After exposure of oxidase enzymes to glutaraldehyde, effective crosslinking requires a rinse step and a 24-hour incubation step. In order to minimize the loss of sensor sensitivity over time, the glutaraldehyde concentration can be increased.
Observational study of treatment space in individual neonatal cot spaces.
Hignett, Sue; Lu, Jun; Fray, Mike
2010-01-01
Technology developments in neonatal intensive care units have increased the spatial requirements for clinical activities. Because the effectiveness of healthcare delivery is determined in part by the design of the physical environment and the spatial organization of work, it is appropriate to apply an evidence-based approach to architectural design. This study aimed to provide empirical evidence of the spatial requirements for an individual cot or incubator space. Observational data from 2 simulation exercises were combined with an expert review to produce a final recommendation. A validated 5-step protocol was used to collect data. Step 1 defined the clinical specialty and space. In step 2, data were collected with 28 staff members and 15 neonates to produce a simulation scenario representing the frequent and safety-critical activities. In step 3, 21 staff members participated in functional space experiments to determine the average spatial requirements. Step 4 incorporated additional data (eg, storage and circulation) to produce a spatial recommendation. Finally, the recommendation was reviewed in step 5 by a national expert clinical panel to consider alternative layouts and technology. The average space requirement for an individual neonatal intensive care unit cot (incubator) space was 13.5 m² (145.3 ft²). The circulation and storage space requirements added in step 4 increased this to 18.46 m² (198.7 ft²). The expert panel reviewed the recommendation and agreed that the average individual cot space (13.5 m² [145.3 ft²]) would accommodate variance in working practices. Care needs to be taken when extrapolating this recommendation to multiple cot areas to maintain the minimum spatial requirement.
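The reported figures convert consistently between square meters and square feet; a quick arithmetic check using the standard conversion factor (1 m² = 10.7639 ft²):

```python
# Verify the m2-to-ft2 conversions reported in the study.
M2_TO_FT2 = 10.7639  # square feet per square meter

cot_space_m2 = 13.5      # average individual cot/incubator space
total_space_m2 = 18.46   # after adding circulation and storage (step 4)

cot_space_ft2 = cot_space_m2 * M2_TO_FT2      # ~145.3 ft2, as reported
total_space_ft2 = total_space_m2 * M2_TO_FT2  # ~198.7 ft2, as reported
```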
User Instructions for the Policy Analysis Modeling System (PAMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNeil, Michael A.; Letschert, Virginie E.; Van Buskirk, Robert D.
PAMS uses country-specific and product-specific data to calculate estimates of impacts of a Minimum Efficiency Performance Standard (MEPS) program. The analysis tool is self-contained in a Microsoft Excel spreadsheet, and requires no links to external data, or special code additions to run. The analysis can be customized to a particular program without additional user input, through the use of the pull-down menus located on the Summary page. In addition, the spreadsheet contains many areas into which user-generated input data can be entered for increased accuracy of projection. The following is a step-by-step guide for using and customizing the tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edgar, Thomas W.; Hadley, Mark D.; Manz, David O.
This document provides the methods to secure routable control system communication in the electric sector. The approach of this document yields a long-term vision for a future of secure communication, while also providing near-term steps and a roadmap. The requirements for the future secure control system environment were spelled out to provide a final target. Additionally, a survey and evaluation of current protocols was used to determine whether any existing technology could achieve this goal. In the end, a four-step path was described that brings about increasing requirement completion and culminates in the realization of the long-term vision.
Short bowel mucosal morphology, proliferation and inflammation at first and repeat STEP procedures.
Mutanen, Annika; Barrett, Meredith; Feng, Yongjia; Lohi, Jouko; Rabah, Raja; Teitelbaum, Daniel H; Pakarinen, Mikko P
2018-04-17
Although serial transverse enteroplasty (STEP) improves function of dilated short bowel, a significant proportion of patients require repeat surgery. To address underlying reasons for unsuccessful STEP, we compared small intestinal mucosal characteristics between initial and repeat STEP procedures in children with short bowel syndrome (SBS). Fifteen SBS children, who underwent 13 first and 7 repeat STEP procedures with full thickness small bowel samples at median age 1.5 years (IQR 0.7-3.7) were included. The specimens were analyzed histologically for mucosal morphology, inflammation and muscular thickness. Mucosal proliferation and apoptosis was analyzed with MIB1 and Tunel immunohistochemistry. Median small bowel length increased 42% by initial STEP and 13% by repeat STEP (p=0.05), while enteral caloric intake increased from 6% to 36% (p=0.07) during 14 (12-42) months between the procedures. Abnormal mucosal inflammation was frequently observed both at initial (69%) and additional STEP (86%, p=0.52) surgery. Villus height, crypt depth, enterocyte proliferation and apoptosis as well as muscular thickness were comparable at first and repeat STEP (p>0.05 for all). Patients, who required repeat STEP tended to be younger (p=0.057) with less apoptotic crypt cells (p=0.031) at first STEP. Absence of ileocecal valve associated with increased intraepithelial leukocyte count and reduced crypt cell proliferation index (p<0.05 for both). No adaptive mucosal hyperplasia or muscular alterations occurred between first and repeat STEP. Persistent inflammation and lacking mucosal growth may contribute to continuing bowel dysfunction in SBS children, who require repeat STEP procedure, especially after removal of the ileocecal valve. Level IV, retrospective study. Copyright © 2018 Elsevier Inc. All rights reserved.
Consideration of drainage ditches and sediment rating curve on SWAT model performance
USDA-ARS?s Scientific Manuscript database
Water quality models most often require a considerable amount of data to be properly configured and in some cases this requires additional procedural steps prior to model applications. We examined two different scenarios of such input issues in a small watershed using the Soil and Water Assessment ...
Borsari, Brian; Hustad, John T.P.; Mastroleo, Nadine R.; Tevyaw, Tracy O’Leary; Barnett, Nancy P.; Kahler, Christopher W.; Short, Erica Eaton; Monti, Peter M.
2012-01-01
Objective Over the past two decades, colleges and universities have seen a large increase in the number of students referred to the administration for alcohol policies violations. However, a substantial portion of mandated students may not require extensive treatment. Stepped care may maximize treatment efficiency and greatly reduce the demands on campus alcohol programs. Method Participants in the study (N = 598) were college students mandated to attend an alcohol program following a campus-based alcohol citation. All participants received Step 1: a 15-minute Brief Advice session that included the provision of a booklet containing advice to reduce drinking. Participants were assessed six weeks after receiving the Brief Advice, and those who continued to exhibit risky alcohol use (n = 405) were randomized to Step 2, a 60–90 minute brief motivational intervention (BMI) (n = 211) or an assessment-only control (n = 194). Follow-up assessments were conducted 3, 6, and 9 months after Step 2. Results Results indicated that the participants who received a BMI significantly reduced the number of alcohol-related problems compared to those who received assessment-only, despite no significant group differences in alcohol use. In addition, low risk drinkers (n = 102; who reported low alcohol use and related harms at 6-week follow-up and were not randomized to stepped care) showed a stable alcohol use pattern throughout the follow-up period, indicating they required no additional intervention. Conclusion Stepped care is an efficient and cost-effective method to reduce harms associated with alcohol use by mandated students. PMID:22924334
Stepping strategies for regulating gait adaptability and stability.
Hak, Laura; Houdijk, Han; Steenbrink, Frans; Mert, Agali; van der Wurff, Peter; Beek, Peter J; van Dieën, Jaap H
2013-03-15
Besides a stable gait pattern, gait in daily life requires the capability to adapt this pattern in response to environmental conditions. The purpose of this study was to elucidate the anticipatory strategies used by able-bodied people to attain an adaptive gait pattern, and how these strategies interact with strategies used to maintain gait stability. Ten healthy subjects walked in a Computer Assisted Rehabilitation ENvironment (CAREN). To provoke an adaptive gait pattern, subjects had to hit virtual targets, with markers guided by their knees, while walking on a self-paced treadmill. The effects of walking with and without this task on walking speed, step length, step frequency, step width and the margins of stability (MoS) were assessed. Furthermore, these trials were performed with and without additional continuous ML platform translations. When an adaptive gait pattern was required, subjects decreased step length (p<0.01), tended to increase step width (p=0.074), and decreased walking speed while maintaining similar step frequency compared to unconstrained walking. These adaptations resulted in the preservation of equal MoS between trials, despite the disturbing influence of the gait adaptability task. When the gait adaptability task was combined with the balance perturbation subjects further decreased step length, as evidenced by a significant interaction between both manipulations (p=0.012). In conclusion, able-bodied people reduce step length and increase step width during walking conditions requiring a high level of both stability and adaptability. Although an increase in step frequency has previously been found to enhance stability, a faster movement, which would coincide with a higher step frequency, hampers accuracy and may consequently limit gait adaptability. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Bourkland, Kristin L.; Liu, Kuo-Chia
2011-01-01
The Solar Dynamics Observatory (SDO), launched in 2010, is a NASA-designed spacecraft built to study the Sun. SDO has tight pointing requirements and instruments that are sensitive to spacecraft jitter. Two High Gain Antennas (HGAs) are used to continuously send science data to a dedicated ground station. Preflight analysis showed that jitter resulting from motion of the HGAs was a cause for concern. Three jitter mitigation techniques were developed and implemented to overcome effects of jitter from different sources. These mitigation techniques include: the random step delay, stagger stepping, and the No Step Request (NSR). During the commissioning phase of the mission, a jitter test was performed onboard the spacecraft, in which various sources of jitter were examined to determine their level of effect on the instruments. During the HGA portion of the test, the jitter amplitudes from the single step of a gimbal were examined, as well as the amplitudes due to the execution of various gimbal rates. The jitter levels were compared with the gimbal jitter allocations for each instrument. The decision was made to consider implementing two of the jitter mitigating techniques on board the spacecraft: stagger stepping and the NSR. Flight data with and without jitter mitigation enabled was examined, and it is shown in this paper that HGA tracking is not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. The HGA-induced jitter on the instruments is well within the jitter requirement when the stagger step and NSR mitigation options are enabled.
Space Station tethered waste disposal
NASA Technical Reports Server (NTRS)
Rupp, Charles C.
1988-01-01
The Shuttle Transportation System (STS) launches more payload to the Space Station than can be returned, creating an accumulation of waste. Several methods of deorbiting the waste are compared, including an OMV, solid rocket motors, and a tether system. The use of tethers is shown to offer the unique potential of a net savings in STS launch requirements. Tether technology is being developed which can satisfy the deorbit requirements, but additional effort is required in waste processing, packaging, and container design. The first step in developing this capability is already underway in the Small Expendable Deployer System program. A developmental flight test of a tether-initiated recovery system is seen as the second step in the evolution of this capability.
Phase-step retrieval for tunable phase-shifting algorithms
NASA Astrophysics Data System (ADS)
Ayubi, Gastón A.; Duarte, Ignacio; Perciante, César D.; Flores, Jorge L.; Ferrari, José A.
2017-12-01
Phase-shifting (PS) is a well-known technique for phase retrieval in interferometry, with applications in deflectometry and 3D-profiling, which requires a series of intensity measurements with certain phase-steps. Usually the phase-steps are evenly spaced, and its knowledge is crucial for the phase retrieval. In this work we present a method to extract the phase-step between consecutive interferograms. We test the proposed technique with images corrupted by additive noise. The results were compared with other known methods. We also present experimental results showing the performance of the method when spatial filters are applied to the interferograms and the effect that they have on their relative phase-steps.
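One simple way to recover the phase step between two consecutive interferograms (a classical statistics-based estimator, not necessarily the authors' method) follows from writing each frame as I_k = A + B·cos(φ + k·δ): the difference and sum of two frames have standard deviations in the ratio tan(δ/2) when φ spans many whole fringes.

```python
import numpy as np

# Synthetic 1-D interferograms spanning many fringes; the background A,
# modulation B, and true step delta are arbitrary illustration values.
x = np.linspace(0, 20 * np.pi, 10000)
A, B = 0.5, 0.4
delta = 1.0  # true phase step in radians (to be recovered)

i1 = A + B * np.cos(x)
i2 = A + B * np.cos(x + delta)

# I1 - I2 = 2B sin(delta/2) sin(phi + delta/2)
# I1 + I2 = 2A + 2B cos(delta/2) cos(phi + delta/2)
# With phi uniformly covering whole fringes, sin and cos terms have equal
# standard deviation, so the std ratio isolates tan(delta/2).
delta_est = 2 * np.arctan(np.std(i1 - i2) / np.std(i1 + i2))
```

Noise and spatial filtering perturb both standard deviations, which is one way filter choice can bias the recovered relative phase-steps.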
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-06
... final product requirements) Proposed Food Additive Provisions in Standards for Fish and Fishery Products... (held at Step 7) Section 4 Food Additives Draft Standard for Quick Frozen Scallop Adductor Muscle Meat... DEPARTMENT OF AGRICULTURE Food Safety and Inspection Service [Docket No. FSIS-2012-0035] Codex...
Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.
Schoene, Daniel; Delbaere, Kim; Lord, Stephen R
2017-08-01
Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls in this time with the relative risk per standard deviation for the reaction time (iCSRT-RT) = 1.23 (95%CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go test stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Automatic sequencing and control of Space Station airlock operations
NASA Technical Reports Server (NTRS)
Himel, Victor; Abeles, Fred J.; Auman, James; Tqi, Terry O.
1989-01-01
Procedures that have been developed as part of the NASA JSC-sponsored pre-prototype Checkout, Servicing and Maintenance (COSM) program for pre- and post-EVA airlock operations are described. This paper addresses the accompanying pressure changes in the airlock and in the Advanced Extravehicular Mobility Unit (EMU). Additionally, the paper focuses on the components that are checked out, and includes the step-by-step sequences to be followed by the crew, the required screen displays and prompts that accompany each step, and a description of the automated processes that occur.
Zhang, Bin; Seong, Baekhoon; Lee, Jaehyun; Nguyen, VuDat; Cho, Daehyun; Byun, Doyoung
2017-09-06
A one-step, sub-micrometer-scale electrohydrodynamic (EHD) inkjet three-dimensional (3D) printing technique based on drop-on-demand (DOD) operation is proposed, for which no additional post-sintering process is required. Both the numerical simulation and the experimental observations proved that nanoscale Joule heating occurs at the interface between the charged silver nanoparticles (Ag-NPs) because of the high electrical contact resistance during the printing process; this is the reason why an additional post-sintering process is not required. Sub-micrometer-scale 3D structures with aspect ratios above 35 were printed via the proposed technique; furthermore, it is evident that designed 3D structures such as a bridge-like shape can be printed with it, allowing for the cost-effective fabrication of a 3D touch sensor and an ultrasensitive air-flow-rate sensor. It is believed that the proposed one-step printing technique may replace conventional 3D conductive-structure printing techniques that require a post-sintering process, because of its economic efficiency.
Running DNA Mini-Gels in 20 Minutes or Less Using Sodium Boric Acid Buffer
ERIC Educational Resources Information Center
Jenkins, Kristin P.; Bielec, Barbara
2006-01-01
Providing a biotechnology experience for students can be challenging on several levels, and time is a real constraint for many experiments. Many DNA based methods require a gel electrophoresis step, and although some biotechnology procedures have convenient break points, gel electrophoresis does not. In addition to the time required for loading…
NASA Astrophysics Data System (ADS)
Kopielski, Andreas; Schneider, Anne; Csáki, Andrea; Fritzsche, Wolfgang
2015-01-01
The DNA origami technique offers great potential for nanotechnology. Using biomolecular self-assembly, defined 2D and 3D nanoscale DNA structures can be realized. DNA origami allows the positioning of proteins, fluorophores or nanoparticles with an accuracy of a few nanometers and thereby enables novel nanoscale devices. Origami assembly usually includes a thermal denaturation step at 90 °C. Additional components used for nanoscale assembly (such as proteins) are often thermosensitive and may be damaged by such harsh conditions; they therefore have to be attached in an extra second step to avoid defects. To enable a streamlined one-step nanoscale synthesis, a so-called one-pot folding, an adaptation of the folding procedures is required. Here we present a thermal optimization of this process for a 2D rectangle-shaped DNA origami, resulting in an isothermal assembly protocol below 60 °C without thermal denaturation. Moreover, a room-temperature protocol is presented using the chemical additive betaine, which is biocompatible in contrast to the chemical denaturing approaches reported previously. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr04176c
Guiding gate-etch process development using 3D surface reaction modeling for 7nm and beyond
NASA Astrophysics Data System (ADS)
Dunn, Derren; Sporre, John R.; Deshpande, Vaibhav; Oulmane, Mohamed; Gull, Ronald; Ventzek, Peter; Ranjan, Alok
2017-03-01
Increasingly, advanced process nodes such as 7nm (N7) are fundamentally 3D and require stringent control of critical dimensions over high aspect ratio features. Process integration in these nodes requires a deep understanding of complex physical mechanisms to control critical dimensions from lithography through final etch. Polysilicon gate etch processes are critical steps in several device architectures for advanced nodes that rely on self-aligned patterning approaches to gate definition. These processes are required to meet several key metrics: (a) vertical etch profiles over high aspect ratios; (b) clean gate sidewalls free of etch process residue; (c) minimal erosion of liner oxide films protecting key architectural elements such as fins; and (d) residue-free corners at gate interfaces with critical device elements. In this study, we explore how hybrid modeling approaches can be used to model a multi-step finFET polysilicon gate etch process. Initial parts of the patterning process through hardmask assembly are modeled using process emulation. Important aspects of gate definition are then modeled using a particle Monte Carlo (PMC) feature scale model that incorporates surface chemical reactions. When necessary, species and energy flux inputs to the PMC model are derived from simulations of the etch chamber. The modeled polysilicon gate etch process consists of several steps, including a hard mask breakthrough step (BT), main feature etch steps (ME), and over-etch steps (OE) that control gate profiles at the gate-fin interface. An additional constraint on this etch flow is that fin spacer oxides are left intact after the final profile tuning steps. A natural optimization required of these processes is to maximize vertical gate profiles while minimizing erosion of fin spacer films.
Control of Structure in Turbulent Flows: Bifurcating and Blooming Jets.
1987-10-10
injected through computational boundaries, (2) to satisfy no-slip boundary conditions, or (3) during grid refinement when one element may be split... With the use of fast Poisson solvers on a mesh of M grid points, the operation count for this step can approach O(M log M). Additional required steps are (1)... We consider three-dimensional perturbations to the Stuart vortices. The linear stability calculations of Pierrehumbert & Widnall [10] are available for
Selective catalytic two-step process for ethylene glycol from carbon monoxide
Dong, Kaiwu; Elangovan, Saravanakumar; Sang, Rui; Spannenberg, Anke; Jackstell, Ralf; Junge, Kathrin; Li, Yuehui; Beller, Matthias
2016-01-01
Upgrading C1 chemicals (for example, CO, CO/H2, MeOH and CO2) with C–C bond formation is essential for the synthesis of bulk chemicals. In general, these industrially important processes (for example, Fischer-Tropsch) proceed under drastic reaction conditions (>250 °C; high pressure) and suffer from low selectivity, which makes high capital investment necessary and requires additional purifications. Here, a different strategy for the preparation of ethylene glycol (EG) via initial oxidative coupling and subsequent reduction is presented. Separating the coupling and reduction steps allows for completely selective formation of EG (99%) from CO. This two-step catalytic procedure makes use of a Pd-catalysed oxycarbonylation of amines to oxamides at room temperature (RT) and a subsequent Ru- or Fe-catalysed hydrogenation to EG. Notably, the amines required in the first step can be efficiently reused. The presented stepwise oxamide-mediated coupling provides the basis for a new strategy for the selective upgrading of C1 chemicals. PMID:27377550
Hot working behavior of selective laser melted and laser metal deposited Inconel 718
NASA Astrophysics Data System (ADS)
Bambach, Markus; Sizova, Irina
2018-05-01
The production of nickel-based high-temperature components is of great importance for the transport and energy sector. Forging of high-temperature alloys often requires expensive dies and multiple forming steps, and leads to forged parts with tolerances that require machining to create the final shape, along with a large amount of scrap. Additive manufacturing offers the possibility to print the desired shapes directly as net-shape components, requiring only little additional machining effort. Especially for high-temperature alloys, which carry a large amount of energy per unit mass, additive manufacturing could be more energy-efficient than forging if the energy contained in the machining scrap exceeds the energy needed for powder production and laser processing. However, the microstructure and performance of 3D-printed parts will not reach the level of forged material unless further expensive processes such as hot isostatic pressing are used. Using the design freedom and the possibility to locally engineer material, additive manufacturing could be combined with forging operations into novel process chains, offering the possibility to reduce the number of forging steps and to create near-net-shape forgings with desired local properties. Some innovative process chains combining additive manufacturing and forging have been patented recently, but almost no scientific knowledge on the workability of 3D-printed preforms exists. The present study investigates the flow stress and microstructure evolution during hot working of preforms produced by laser powder deposition and selective laser melting (Figure 1) and puts forward a model for the flow stress.
Li, Hui; Li, Kang-shuai; Su, Jing; Chen, Lai-Zhong; Xu, Yun-Fei; Wang, Hong-Mei; Gong, Zheng; Cui, Guo-Ying; Yu, Xiao; Wang, Kai; Yao, Wei; Xin, Tao; Li, Min-Yong; Xiao, Kun-Hong; An, Xiao-fei; Huo, Yuqing; Xu, Zhi-gang; Sun, Jin-Peng; Pang, Qi
2013-01-01
Striatal-enriched tyrosine phosphatase (STEP) is an important regulator of neuronal synaptic plasticity, and its abnormal level or activity contributes to cognitive disorders. One crucial downstream effector and direct substrate of STEP is extracellular signal-regulated protein kinase (ERK), which has important functions in spine stabilisation and action potential transmission. The inhibition of STEP activity toward phospho-ERK has the potential to treat neuronal diseases, but the detailed mechanism underlying the dephosphorylation of phospho-ERK by STEP is not known. Therefore, we examined STEP activity toward pNPP, phospho-tyrosine-containing peptides, and the full-length phospho-ERK protein using STEP mutants with different structural features. STEP was found to be a highly efficient ERK tyrosine phosphatase that required both its N-terminal regulatory region and key residues in its active site. Specifically, both KIM and KIS of STEP were required for ERK interaction. In addition to the N-terminal KIS region, S245, hydrophobic residues L249/L251, and basic residues R242/R243 located in the KIM region were important in controlling STEP activity toward phospho-ERK. Further kinetic experiments revealed subtle structural differences between STEP and HePTP that affected the interactions of their KIMs with ERK. Moreover, STEP recognised specific positions of a phospho-ERK peptide sequence through its active site, and the contact of STEP F311 with phospho-ERK V205 and T207 were crucial interactions. Taken together, our results not only provide the information for interactions between ERK and STEP, but will also help in the development of specific strategies to target STEP-ERK recognition, which could serve as a potential therapy for neurological disorders. PMID:24117863
Parra-Millán, Raquel; Guerrero-Gómez, David; Ayerbe-Algaba, Rafael; Pachón-Ibáñez, Maria Eugenia; Miranda-Vizuete, Antonio
2018-01-01
ABSTRACT Acinetobacter baumannii is a significant human pathogen associated with hospital-acquired infections. While adhesion, an initial and important step in A. baumannii infection, is well characterized, the intracellular trafficking of this pathogen inside host cells remains poorly studied. Here, we demonstrate that transcription factor EB (TFEB) is activated after A. baumannii infection of human lung epithelial cells (A549). We also show that TFEB is required for invasion and persistence inside A549 cells. Consequently, lysosomal biogenesis and autophagy activation were observed after TFEB activation, which could increase the death of A549 cells. In addition, using the Caenorhabditis elegans model of A. baumannii infection, the TFEB orthologue HLH-30 was required for survival of the nematode during infection, although nuclear translocation of HLH-30 was not. These results identify TFEB as a conserved key factor in the pathogenesis of A. baumannii. IMPORTANCE Adhesion is an initial and important step in Acinetobacter baumannii infections. However, the mechanism of entry and persistence inside host cells is unclear and remains to be understood. In this study, we report that, in addition to its known role in host defense against Gram-positive bacterial infection, TFEB also plays an important role in the intracellular trafficking of A. baumannii in host cells. TFEB was activated shortly after A. baumannii infection and is required for its persistence within host cells. Additionally, using the C. elegans model of A. baumannii infection, the TFEB orthologue HLH-30 was required for survival of the nematode during infection, although nuclear translocation of HLH-30 was not. PMID:29600279
Masuya, Yoshihiro; Baba, Katsuaki
2016-01-01
A new process has been developed for the palladium(II)-catalyzed synthesis of dibenzothiophene derivatives via the cleavage of C–H and C–S bonds. In contrast to the existing methods for the synthesis of this scaffold by C–H functionalization, this new catalytic C–H/C–S coupling method does not require an external stoichiometric oxidant or reactive functionalities such as C–X or S–H, allowing its application to the synthesis of elaborate π-systems. Notably, the product-forming step of this reaction is an oxidative addition rather than a reductive elimination, making the reaction mechanistically uncommon. PMID:28660030
Seidner, Douglas L; Fujioka, Ken; Boullata, Joseph I; Iyer, Kishore; Lee, Hak-Myung; Ziegler, Thomas R
2018-05-15
Patients with intestinal failure associated with short bowel syndrome (SBS-IF) require parenteral support (PS) to maintain fluid balance or nutrition. Teduglutide (TED) reduced PS requirements in patients with SBS-IF in the randomized, placebo (PBO)-controlled STEPS study (NCT00798967) and its 2-year, open-label extension, STEPS-2 (NCT00930644). STEPS-3 (NCT01560403), a 1-year, open-label extension study in patients with SBS-IF who completed STEPS-2, further monitored the safety and efficacy of TED (0.05 mg/kg/day). Baseline was the start of TED treatment, in either STEPS or STEPS-2. At the end of STEPS-3, patients treated with TED in both STEPS and STEPS-2 (TED-TED) received TED for ≤42 months, and patients treated with TED only in STEPS-2 (no TED treatment [NT]/PBO-TED) received TED for ≤36 months. Fourteen patients enrolled (TED-TED, n = 5; NT/PBO-TED, n = 9) and 13 completed STEPS-3. At the last dosing visit, mean (SD) PS was reduced from baseline by 9.8 (14.4 [50%]) and 3.9 (2.8 [48%]) L/week in TED-TED and NT/PBO-TED, respectively. Mean (SD) PS infusions decreased by 3.0 (4.6) and 2.1 (2.2) days per week from baseline in TED-TED and NT/PBO-TED, respectively. Two patients achieved PS independence; 2 additional patients who achieved independence in STEPS-2 maintained enteral autonomy throughout STEPS-3. All patients reported ≥1 treatment-emergent adverse event (TEAE); 3 patients had TEAEs that were reported as treatment related. No patient had a treatment-related treatment-emergent serious AE. Long-term TED treatment yielded a safety profile consistent with previous studies, sustained efficacy, and a further decline in PS requirements. © 2018 The Authors. Nutrition in Clinical Practice published by Wiley Periodicals, Inc. on behalf of American Society for Parenteral and Enteral Nutrition.
USE OF TAQMAN TO ENUMERATE ENTEROCOCCUS FAECALIS IN WATER
The Polymerase Chain Reaction (PCR) has become a useful tool in the detection of microorganisms. However, conventional PCR is somewhat time-consuming considering that additional steps (e.g., gel electrophoresis and gene sequencing) are required to confirm the presence of the tar...
Proposed Conceptual Requirements for the CTBT Knowledge Base,
1995-08-14
knowledge available to automated processing routines and human analysts are significant, and solving these problems is an essential step in ensuring...knowledge storage in a CTBT system. In addition to providing regional knowledge to automated processing routines, the knowledge base will also address
Interactive real time flow simulations
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1990-01-01
An interactive real time flow simulation technique is developed for an unsteady channel flow. A finite-volume algorithm in conjunction with a Runge-Kutta time stepping scheme was developed for two-dimensional Euler equations. A global time step was used to accelerate convergence of steady-state calculations. A raster image generation routine was developed for high speed image transmission which allows the user to have direct interaction with the solution development. In addition to theory and results, the hardware and software requirements are discussed.
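The time-stepping scheme described above can be sketched in miniature. The four-stage Runge-Kutta update and the use of a uniform global time step to accelerate convergence to steady state are taken from the abstract; the toy residual function and all numerical values are invented for illustration:

```python
import numpy as np

def rk4_step(u, dt, residual):
    # Classical four-stage Runge-Kutta step for du/dt = residual(u);
    # at steady state, residual(u) -> 0 and u stops changing.
    k1 = residual(u)
    k2 = residual(u + 0.5 * dt * k1)
    k3 = residual(u + 0.5 * dt * k2)
    k4 = residual(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def solve_steady(u0, residual, dt, tol=1e-10, max_iter=10000):
    # Global time step: every cell advances with the same dt, sacrificing
    # time accuracy to reach the steady-state solution faster.
    u = u0.copy()
    for _ in range(max_iter):
        u_new = rk4_step(u, dt, residual)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# Toy problem: linear relaxation toward a target state (stands in for the
# finite-volume residual of the 2D Euler equations in the actual solver).
target = np.array([1.0, -2.0, 0.5])
residual = lambda u: target - u   # steady state is u == target
u = solve_steady(np.zeros(3), residual, dt=0.5)
```

In the real solver the residual would come from finite-volume flux balances on each cell; the marching loop is otherwise the same.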
Marckmann, G; In der Schmitten, J
2014-05-01
Under the current conditions in the health care system, physicians inevitably have to take responsibility for the cost dimension of their decisions at the level of single cases. This article therefore discusses the question of how physicians can integrate cost considerations into their clinical decisions at the microlevel in a medically rational and ethically justified way. We propose a four-step model for "ethical cost-consciousness": (1) forego ineffective interventions, as required by good evidence-based medicine; (2) respect individual patient preferences; (3) minimize the diagnostic and therapeutic effort needed to achieve a given treatment goal; and (4) forego expensive interventions that have only a small or unlikely (net) benefit for the patient. Steps 1-3 are ethically justified by the principles of beneficence, nonmaleficence, and respect for autonomy; step 4 by the principle of justice. For decisions at step 4, explicit cost-conscious guidelines should be developed locally or regionally. Following the four-step model can contribute to ethically defensible, cost-conscious decision-making at the microlevel. In addition, physicians' rationing decisions should meet basic standards of procedural fairness. Regular cost-case discussions and clinical ethics consultation should be available as decision support. Implementing step 4, however, first of all requires clear political legitimation with a corresponding legal framework.
Non-rigid CT/CBCT to CBCT registration for online external beam radiotherapy guidance
NASA Astrophysics Data System (ADS)
Zachiu, Cornel; de Senneville, Baudouin Denis; Tijssen, Rob H. N.; Kotte, Alexis N. T. J.; Houweling, Antonetta C.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; Moonen, Chrit T. W.; Ries, Mario
2018-01-01
Image-guided external beam radiotherapy (EBRT) allows radiation dose deposition with a high degree of accuracy and precision. Guidance is usually achieved by estimating the displacements, via image registration, between cone beam computed tomography (CBCT) and computed tomography (CT) images acquired at different stages of the therapy. The resulting displacements are then used to reposition the patient such that the location of the tumor at the time of treatment matches its position during planning. Moreover, ongoing research aims to use CBCT-CT image registration for online plan adaptation. However, CBCT images are usually acquired using a small number of x-ray projections and/or low beam intensities. This often leaves the images subject to low contrast, low signal-to-noise ratio and artifacts, which ends up hampering the image registration process. Previous studies addressed this by integrating additional image processing steps into the registration procedure. However, these steps are usually designed for particular image acquisition schemes, limiting their use to a case-by-case basis. In the current study we address CT to CBCT and CBCT to CBCT registration by means of the recently proposed EVolution registration algorithm. Contrary to previous approaches, EVolution does not require the integration of additional image processing steps into the registration scheme. Moreover, the algorithm requires a low number of input parameters, is easily parallelizable and provides an elastic deformation on a point-by-point basis. Results have shown that, relative to a pure CT-based registration, the intrinsic artifacts present in typical CBCT images have only a sub-millimeter impact on the accuracy and precision of the estimated deformation. In addition, the algorithm has low computational requirements, which are compatible with online image-based guidance of EBRT treatments.
3D freeform printing of silk fibroin.
Rodriguez, Maria J; Dixon, Thomas A; Cohen, Eliad; Huang, Wenwen; Omenetto, Fiorenzo G; Kaplan, David L
2018-04-15
Freeform fabrication has emerged as a key direction in printing biologically-relevant materials and structures. With this emerging technology, complex structures with microscale resolution can be created in arbitrary geometries, without the limitations found in traditional bottom-up or top-down additive manufacturing methods. Recent advances in freeform printing have used the physical properties of microparticle-based granular gels as a medium for the submerged extrusion of bioinks. However, most of these techniques require post-processing or crosslinking for the removal of the printed structures (Miller et al., 2015; Jin et al., 2016) [1,2]. In this communication, we introduce a novel method for the one-step gelation of silk fibroin within a suspension of synthetic nanoclay (Laponite) and polyethylene glycol (PEG). Silk fibroin has been used as a biopolymer for bioprinting in several contexts, but chemical or enzymatic additives or bulking agents are needed to stabilize 3D structures. Our method requires no post-processing of printed structures and allows for in situ physical crosslinking of pure aqueous silk fibroin into arbitrary geometries produced through freeform 3D printing. 3D bioprinting has emerged as a technology that can produce biologically relevant structures in defined geometries with microscale resolution. Techniques for the fabrication of free-standing structures by printing into granular gel media have been demonstrated previously; however, these methods require crosslinking agents and post-processing steps on printed structures. Our method utilizes one-step gelation of silk fibroin within a suspension of synthetic nanoclay (Laponite), with no need for additional crosslinking compounds or post-processing of the material. This new method allows for in situ physical crosslinking of pure aqueous silk fibroin into defined geometries produced through freeform 3D printing. Copyright © 2018 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Planning and setting objectives in field studies: Chapter 2
Fisher, Robert N.; Dodd, C. Kenneth
2016-01-01
This chapter enumerates the steps required in designing and planning field studies on the ecology and conservation of reptiles, as these involve a high level of uncertainty and risk. To this end, the chapter differentiates between goals (descriptions of what one intends to accomplish) and objectives (the measurable steps required to achieve the established goals). Thus, meeting a specific goal may require many objectives. It may not be possible to define some of them until certain experiments have been conducted; often evaluations of sampling protocols are needed to increase certainty in the biological results. And if sampling locations are fixed and sampling events are repeated over time, then both study-specific covariates and sampling-specific covariates should exist. Additionally, other critical design considerations for field study include obtaining permits, as well as researching ethics and biosecurity issues.
Distributed parameter modeling of repeated truss structures
NASA Technical Reports Server (NTRS)
Wang, Han-Ching
1994-01-01
A new approach to finding homogeneous models for beam-like repeated flexible structures is proposed, which conceptually involves two steps. The first step is the approximation of the 3-D non-homogeneous model by a 1-D periodic beam model. The structure is modeled as a 3-D non-homogeneous continuum, and the displacement field is approximated by a Taylor series expansion. The cross-sectional mass and stiffness matrices are then obtained by energy equivalence using their additive properties. Due to the repeated nature of the flexible bodies, the mass and stiffness matrices are also periodic. This procedure is systematic and requires less dynamics detail. The second step is the homogenization of the 1-D periodic beam model into a 1-D homogeneous beam model. The periodic beam model is homogenized into an equivalent homogeneous beam model using the additive property of compliance along the generic axis. The major departure from previous approaches in the literature is the use of compliance instead of stiffness in homogenization. An obvious justification is that stiffness is additive at each cross section but not along the generic axis. The homogenized model preserves many properties of the original periodic model.
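The compliance-versus-stiffness point can be illustrated with a small numerical sketch. The cell stiffness values below are invented, but the calculation shows why compliance, not stiffness, is additive for cells joined end-to-end along the beam axis:

```python
def homogenized_axial_stiffness(cell_stiffnesses):
    # Cells coupled in series along the generic axis: their compliances
    # (1/k) add, so the equivalent per-cell stiffness is the harmonic
    # mean of the cell stiffnesses, not the arithmetic mean.
    n = len(cell_stiffnesses)
    total_compliance = sum(1.0 / k for k in cell_stiffnesses)
    return n / total_compliance

# Hypothetical periodic cell alternating stiff and soft segments
# (arbitrary units of axial stiffness).
cells = [200.0, 50.0]
k_eff = homogenized_axial_stiffness(cells)   # harmonic mean: 80.0
k_wrong = sum(cells) / len(cells)            # arithmetic mean: 125.0
```

Averaging stiffness directly (the arithmetic mean) would overestimate the equivalent stiffness, because the soft segment dominates the end-to-end flexibility; this is the justification the abstract gives for homogenizing with compliance.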
Although early Life Cycle Assessment (LCA) methodology researchers focused on the modeling of impacts from chemical emissions, it has become obvious that resource depletion categories such as land use, water use, and fossil fuel depletion require additional attention to appropria...
Song, Hun-Suk; Jeon, Jong-Min; Choi, Yong Keun; Kim, Jun-Young; Kim, Wooseong; Yoon, Jeong-Jun; Park, Kyungmoon; Ahn, Jungoh; Lee, Hongweon; Yang, Yung-Hun
2017-12-28
Lignocellulose is now a promising raw material for biofuel production. However, its lignin complex and crystalline cellulose require pretreatment steps to break down the crystalline structure of the cellulose and generate fermentable sugars. Moreover, several fermentation inhibitors, chiefly furfural, are generated along with the sugar compounds, and these inhibitors must be mitigated before the subsequent fermentation steps can proceed. Amino acids were investigated for their ability to relieve furfural-induced growth inhibition in E. coli producing isobutanol. Glycine and serine were the most effective compounds against furfural. In minimal media, glycine conferred tolerance against furfural. Judging from the IC₅₀ values for inhibitors in the production media, only glycine could alleviate the growth arrest caused by furfural: addition of 6 mM glycine led to a slight increase in growth rate and raised isobutanol production from 2.6 to 2.8 g/l under furfural stress. Overexpression of glycine-pathway genes did not lead to alleviation; however, addition of glycine to the engineered strains prevented the growth arrest and increased isobutanol production about 2.3-fold.
JPRS Report Science & Technology, Europe
1991-10-31
the solar system, the earth, and the conditions for life on earth,
• To contribute to the solution of environmental problems through satellite... requiring considerable additional R&D is to be stepped up.
• Wind plants require about 10 years' more R&D work.
• Photovoltaics (PV) and solar ... Funding for active and passive solar energy exploitation.
5. Transport Sector
• Optimizing means of transport (in manufacture and operation
Human Capital: Further Actions Needed to Enhance DOD’s Civilian Strategic Workforce Plan
2010-09-27
requirement to identify any incentives needed to attract and retain qualified senior leaders, including offering benefits to senior leaders that are... comparable to the benefits provided to general officers. Additionally, DOD's workforce plan addresses the requirement to identify steps that the... including compensation and benefit enhancements, such as restoration of locality pay and guaranteed cost-of-living increases, which are necessary
Spatial Data Integration Using Ontology-Based Approach
NASA Astrophysics Data System (ADS)
Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.
2015-12-01
In today's world, the need for spatial data has become so crucial that many organizations have begun producing it themselves. In some circumstances, the need for real-time integrated data requires a sustainable mechanism for real-time integration; disaster management situations, for example, require obtaining real-time data from various sources of information. One of the problematic challenges in such situations is the high degree of heterogeneity between different organizations' data. To solve this issue, we introduce an ontology-based method that provides sharing and integration capabilities for the existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. The first step is the identification of the objects in a relational database; the semantic relationships between them are then modelled and, subsequently, the ontology of each database is created. In the second step, the relative ontology is inserted into the database and the relationship of each ontology class is inserted into the newly created column in the database tables. The last step consists of a platform based on service-oriented architecture, which allows integration of data using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy; the data remain unchanged, so existing legacy applications can still be used.
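The first step described above (identifying objects in a relational database and modelling relationships) can be sketched as a mapping from rows to subject-predicate-object triples that an ontology layer could then align. The table name, column names, and data below are all invented for illustration:

```python
def rows_to_triples(table_name, rows, key_column):
    # Treat each row as an object (subject) identified by its key column;
    # every other column becomes a predicate-object pair. This is the raw
    # material an ontology-mapping layer would align across organizations.
    triples = []
    for row in rows:
        subject = f"{table_name}/{row[key_column]}"
        for column, value in row.items():
            if column != key_column:
                triples.append((subject, column, value))
    return triples

# One organization's (hypothetical) table describing a facility.
org_a = [{"id": 1, "name": "City Hospital", "lat": 35.7}]
triples = rows_to_triples("hospital", org_a, "id")
# [('hospital/1', 'name', 'City Hospital'), ('hospital/1', 'lat', 35.7)]
```

A second organization's table with different column names would produce triples with different predicates; resolving that heterogeneity is exactly what the ontology-mapping step in the proposed platform is for.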
Rep. Young, Don [R-AK-At Large
2014-07-31
House - 08/19/2014: Referred to the Subcommittee on Fisheries, Wildlife, Oceans, and Insular Affairs. Bill status: Introduced.
NASA Technical Reports Server (NTRS)
2007-01-01
This cover sheet is for version 2 of the weather requirements document along with Appendix A. The purpose of the requirements document was to identify and to list the weather functional requirements needed to achieve the Access 5 vision of "operating High Altitude, Long Endurance (HALE) Unmanned Aircraft Systems (UAS) routinely, safely, and reliably in the National Airspace System (NAS) for Step 1." A discussion of the Federal Aviation Administration (FAA) references and related policies, procedures, and standards is provided as basis for the recommendations supported within this document. Additional procedures and reference documentation related to weather functional requirements is also provided for background. The functional requirements and related information are to be proposed to the FAA and various standards organizations for consideration and approval. The appendix was designed to show that sources of flight weather information are readily available to UAS pilots conducting missions in the NAS. All weather information for this presentation was obtained from the public internet.
Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M
2014-05-01
Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
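In its simplest form, the instrumental-variables estimator mentioned above reduces to the Wald ratio: the intention-to-treat effect divided by the difference in compliance proportions between arms. The numbers below are hypothetical, for illustration only, and are not taken from the cited trial:

```python
def cace_wald(itt_effect, compliance_treated, compliance_control=0.0):
    # Wald/IV estimator for the complier average causal effect, under the
    # usual assumptions (randomized assignment, no defiers, and the
    # exclusion restriction). The denominator is the "first stage":
    # the compliance difference induced by random assignment.
    first_stage = compliance_treated - compliance_control
    if first_stage == 0:
        raise ValueError("no compliance difference: CACE is not identified")
    return itt_effect / first_stage

# Hypothetical example: an ITT risk difference of -19 percentage points
# with 85% compliance in the intervention condition and none in control.
itt_rd = -0.19
cace_rd = cace_wald(itt_rd, compliance_treated=0.85)   # about -0.224
```

Because compliers are a subset of those assigned to treatment, the CACE estimate is larger in magnitude than the ITT estimate, which matches the direction of the differences reported in the abstract.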
Recent NIMH Clinical Trials and Implications for Practice
ERIC Educational Resources Information Center
Vitiello, Benedetto; Kratochvil, Christopher J.
2008-01-01
Optimal treatment of adolescent depression requires the use of antidepressants such as fluoxetine, and the addition of cognitive-behavioral therapy (CBT) offers further potential benefit. Second-step pharmacological treatment of the disorder offers a success rate of around 50%. Clinical trial for the use of sertraline and CBT in treating…
Lencioni, Tiziana; Piscosquito, Giuseppe; Rabuffetti, Marco; Sipio, Enrica Di; Diverio, Manuela; Moroni, Isabella; Padua, Luca; Pagliano, Emanuela; Schenone, Angelo; Pareyson, Davide; Ferrarin, Maurizio
2018-05-01
Charcot-Marie-Tooth (CMT) is a slowly progressive disease characterized by muscular weakness and wasting with a length-dependent pattern. Mildly affected CMT subjects showed slight alterations in walking compared to healthy subjects (HS). The aim was to investigate the biomechanics of step negotiation, a task that requires greater muscle strength and balance control than level walking, in CMT subjects without primary locomotor deficits (foot drop and push-off deficit) during walking. We collected data (kinematic, kinetic, and surface electromyographic) during walking on level ground and step negotiation from 98 CMT subjects with mild-to-moderate impairment. Twenty-one CMT subjects (CMT-NLW, normal-like walkers) were selected for analysis, as they showed values of normalized ROM during swing and work produced at push-off at the ankle joint comparable to those of 31 HS. Step negotiation tasks consisted of climbing and descending a two-step stair. Only the first step provided ground reaction force data. To assess muscle activity, each EMG profile was integrated over 100% of task duration and the activation percentage was computed in the four phases that constitute the step negotiation tasks. In both tasks, CMT-NLW showed distal muscle hypoactivation. In addition, during step ascent CMT-NLW subjects showed markedly lower activity of vastus medialis and rectus femoris than HS in weight-acceptance and, conversely, greater activation than HS in forward-continuance. During step descent, CMT-NLW showed reduced activity of tibialis anterior during the controlled-lowering phase. Step negotiation revealed adaptive motor strategies related to muscle weakness due to disease in CMT subjects without any clinically apparent locomotor deficit during level walking. In addition, this study provided results useful for tailored rehabilitation of CMT patients. Copyright © 2018 Elsevier B.V. All rights reserved.
Continuous Energy Improvement in Motor Driven Systems - A Guidebook for Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert A. McCoy and John G. Douglass
2014-02-01
This guidebook provides a step-by-step approach to developing a motor system energy-improvement action plan. An action plan includes which motors should be repaired or replaced with higher efficiency models, recommendations on maintaining a spares inventory, and discussion of improvements in maintenance practices. The guidebook is the successor to DOE’s 1997 Energy Management for Motor Driven Systems. It builds on its predecessor publication by including topics such as power transmission systems and matching driven equipment to process requirements in addition to motors.
2015-09-01
be strengthened in both areas. • DOD has a decentralized structure to administer and oversee its existing, required compliance-based ethics program...and attributes. "Ethics" relates to DOD's required rules-based program, which ensures compliance with standards of conduct. The White House...ethical content in professional military education, developing character development initiatives for general and flag officers, and establishing
Human milk inactivates pathogens individually, additively, and synergistically.
Isaacs, Charles E
2005-05-01
Breast-feeding can reduce the incidence and the severity of gastrointestinal and respiratory infections in the suckling neonate by providing additional protective factors to the infant's mucosal surfaces. Human milk provides protection against a broad array of infectious agents through redundancy. Protective factors in milk can target multiple early steps in pathogen replication and target each step with more than one antimicrobial compound. The antimicrobial activity in human milk results from protective factors working not only individually but also additively and synergistically. Lipid-dependent antimicrobial activity in milk results from the additive activity of all antimicrobial lipids and not necessarily the concentration of one particular lipid. Antimicrobial milk lipids and peptides can work synergistically to decrease both the concentrations of individual compounds required for protection and, as importantly, greatly reduce the time needed for pathogen inactivation. The more rapidly pathogens are inactivated the less likely they are to establish an infection. The total antimicrobial protection provided by human milk appears to be far more than can be elucidated by examining protective factors individually.
Tan, Swee Jin; Phan, Huan; Gerry, Benjamin Michael; Kuhn, Alexandre; Hong, Lewis Zuocheng; Min Ong, Yao; Poon, Polly Suk Yean; Unger, Marc Alexander; Jones, Robert C; Quake, Stephen R; Burkholder, William F
2013-01-01
Library preparation for next-generation DNA sequencing (NGS) remains a key bottleneck in the sequencing process which can be relieved through improved automation and miniaturization. We describe a microfluidic device for automating laboratory protocols that require one or more column chromatography steps and demonstrate its utility for preparing Next Generation sequencing libraries for the Illumina and Ion Torrent platforms. Sixteen different libraries can be generated simultaneously with significantly reduced reagent cost and hands-on time compared to manual library preparation. Using an appropriate column matrix and buffers, size selection can be performed on-chip following end-repair, dA tailing, and linker ligation, so that the libraries eluted from the chip are ready for sequencing. The core architecture of the device ensures uniform, reproducible column packing without user supervision and accommodates multiple routine protocol steps in any sequence, such as reagent mixing and incubation; column packing, loading, washing, elution, and regeneration; capture of eluted material for use as a substrate in a later step of the protocol; and removal of one column matrix so that two or more column matrices with different functional properties can be used in the same protocol. The microfluidic device is mounted on a plastic carrier so that reagents and products can be aliquoted and recovered using standard pipettors and liquid handling robots. The carrier-mounted device is operated using a benchtop controller that seals and operates the device with programmable temperature control, eliminating any requirement for the user to manually attach tubing or connectors. In addition to NGS library preparation, the device and controller are suitable for automating other time-consuming and error-prone laboratory protocols requiring column chromatography steps, such as chromatin immunoprecipitation.
Rational reduction of periodic propagators for off-period observations.
Blanton, Wyndham B; Logan, John W; Pines, Alexander
2004-02-01
Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
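The reduction rests on the identity U(kT + s) = U(s)·U(T)^k for a period-T Hamiltonian: when the observation step is (p/q)T, only q cached fractional propagators plus powers of the one-period propagator are needed, rather than a fresh product of slice propagators at every observation. The sketch below uses an invented piecewise-constant Hamiltonian and is an illustration of the identity, not the paper's optimized pattern-matching or time-shifting routines.

```python
import numpy as np
from numpy.linalg import matrix_power

rng = np.random.default_rng(0)
q, p, dim = 5, 3, 4          # observation step = (p/q) * period, T = 1

# Piecewise-constant periodic Hamiltonian: q Hermitian slices per period.
hs = [(lambda a: (a + a.conj().T) / 2)(rng.normal(size=(dim, dim))
      + 1j * rng.normal(size=(dim, dim))) for _ in range(q)]

def expm_h(h, dt):
    """exp(-i h dt) via eigendecomposition of a Hermitian slice."""
    w, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T

dt = 1.0 / q                                  # slice duration
slices = [expm_h(h, dt) for h in hs]

# Precompute the q fractional propagators U(r/q * T), r = 0..q-1,
# and the one-period propagator U(T).
u_frac = [np.eye(dim, dtype=complex)]
for s in slices:
    u_frac.append(s @ u_frac[-1])
u_period = u_frac.pop()                       # U(T)

def u_fast(k):
    """Propagator at observation k, using only cached matrices."""
    m, r = divmod(k * p, q)
    return u_frac[r] @ matrix_power(u_period, m)

def u_direct(k):
    """Brute-force product of all k*p slice propagators."""
    u = np.eye(dim, dtype=complex)
    for j in range(k * p):
        u = slices[j % q] @ u
    return u

print(np.allclose(u_fast(7), u_direct(7)))
```

For q observation points per period, the cached approach replaces O(k·p) matrix multiplications per observation with a handful, which is the kind of saving the abstract describes for large spin systems.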
Linear-scaling generation of potential energy surfaces using a double incremental expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
König, Carolin, E-mail: carolink@kth.se; Christiansen, Ove, E-mail: ove@chem.au.dk
We present a combination of the incremental expansion of potential energy surfaces (PESs), known as n-mode expansion, with the incremental evaluation of the electronic energy in a many-body approach. The application of semi-local coordinates in this context allows the generation of PESs in a very cost-efficient way. For this, we employ the recently introduced flexible adaptation of local coordinates of nuclei (FALCON) coordinates. By introducing an additional transformation step, concerning only a fraction of the vibrational degrees of freedom, we can achieve linear scaling of the accumulated cost of the single point calculations required in the PES generation. Numerical examples of these double incremental approaches for oligo-phenyl examples show fast convergence with respect to the maximum number of simultaneously treated fragments and only a modest error introduced by the additional transformation step. The approach, presented here, represents a major step towards the applicability of vibrational wave function methods to sizable, covalently bound systems.
Specific test and evaluation plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hays, W.H.
1998-03-20
The purpose of this Specific Test and Evaluation Plan (STEP) is to provide a detailed written plan for the systematic testing of modifications made to the 241-AX-B Valve Pit by the W-314 Project. The STEP develops the outline for test procedures that verify the system's performance to the established Project design criteria. The STEP is a lower tier document based on the W-314 Test and Evaluation Plan (TEP). Testing includes Validations and Verifications (e.g., Commercial Grade Item Dedication activities), Factory Acceptance Tests (FATs), installation tests and inspections, Construction Acceptance Tests (CATs), Acceptance Test Procedures (ATPs), Pre-Operational Test Procedures (POTPs), and Operational Test Procedures (OTPs). It should be noted that POTPs are not required for testing of the transfer line addition. The STEP will be utilized in conjunction with the TEP for verification and validation.
Step 1: Human System Interface (HSI) Functional Requirements Document (FRD). Version 2
NASA Technical Reports Server (NTRS)
2006-01-01
This Functional Requirements Document (FRD) establishes a minimum set of Human System Interface (HSI) functional requirements to achieve the Access 5 Vision of "operating High Altitude, Long Endurance (HALE) Unmanned Aircraft Systems (UAS) routinely, safely, and reliably in the National Airspace System (NAS)". In essence, it identifies the functions necessary to fly UAS in the NAS. The framework used to identify the appropriate functions was the "Aviate, Navigate, Communicate, and Avoid Hazards" structure identified in the Access 5 FRD. As a result, fifteen high-level functional requirements were developed. In addition, several of them have been decomposed into low-level functional requirements to provide more detail.
Measure Guideline: Buried and/or Encapsulated Ducts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, C.; Zoeller, W.; Mantha, P.
2013-08-01
Buried and/or encapsulated ducts (BEDs) are a class of advanced, energy-efficiency strategies intended to address the significant ductwork thermal losses associated with ducts installed in unconditioned attics. BEDs are ducts installed in unconditioned attics that are covered in loose-fill insulation and/or encapsulated in closed cell polyurethane spray foam insulation. This Measure Guideline covers the technical aspects of BEDs as well as the advantages, disadvantages, and risks of BEDs compared to other alternative strategies. This guideline also provides detailed guidance on installation of BEDs strategies in new and existing homes through step-by-step installation procedures. Some of the procedures presented here, however, require specialized equipment or expertise. In addition, some alterations to duct systems may require a specialized license.
Pilot Plant Testing of Hot Gas Building Decontamination Process
1987-10-30
last hours of the cooldown (after water traps in the line were installed) showed no detectable contamination from this station. ...Since we will not require refrigeration, additional generators probably will not be required. Water is trucked to the site. Agent contaminated water ...surface. The gauze was handled by forceps during all of the sampling steps to prevent contamination after the solvent extraction clean-up of the gauze pads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melius, C
2007-12-05
The epidemiological and economic modeling of poultry diseases requires knowing the size, location, and operational type of each poultry type operation within the US. At the present time, the only national database of poultry operations that is available to the general public is the USDA's 2002 Agricultural Census data, published by the National Agricultural Statistics Service, herein referred to as the 'NASS data'. The NASS data provides census data at the county level on poultry operations for various operation types (i.e., layers, broilers, turkeys, ducks, geese). However, the number of farms and sizes of farms for the various types are not independent since some facilities have more than one type of operation. Furthermore, some data on the number of birds represents the number sold, which does not represent the number of birds present at any given time. In addition, any data tabulated by NASS that could identify numbers of birds or other data reported by an individual respondent is suppressed by NASS and coded with a 'D'. To be useful for epidemiological and economic modeling, the NASS data must be converted into a unique set of facility types (farms having similar operational characteristics). The unique set must not double count facilities or birds. At the same time, it must account for all the birds, including those for which the data has been suppressed. Therefore, several data processing steps are required to work back from the published NASS data to obtain a consistent database for individual poultry operations. This technical report documents data processing steps that were used to convert the NASS data into a national poultry facility database with twenty-six facility types (7 egg-laying, 6 broiler, 1 backyard, 3 turkey, and 9 others, representing ducks, geese, ostriches, emus, pigeons, pheasants, quail, game fowl breeders and 'other'). The process involves two major steps.
The first step defines the rules used to estimate the data that is suppressed within the NASS database. The first step is similar to the first step used to estimate suppressed data for livestock [Melius et al (2006)]. The second step converts the NASS poultry types into the operational facility types used by the epidemiological and economic model. We also define two additional facility types for high and low risk poultry backyards, and an additional two facility types for live bird markets and swap meets. The distribution of these additional facility types among counties is based on US population census data. The algorithm defining the number of premises and the corresponding distribution among counties and the resulting premises density plots for the continental US are provided.
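The first step, estimating suppressed county counts, can be illustrated with a simple proportional rule: the residual between a known state total and the sum of disclosed county counts is spread over the suppressed ('D') counties. The function, counties, and numbers below are invented for illustration and are not the report's actual allocation algorithm.

```python
def fill_suppressed(county_counts, state_total):
    """Replace 'D' (suppressed) entries by an equal share of the
    residual between the state total and the disclosed counts."""
    disclosed = sum(v for v in county_counts.values() if v != "D")
    hidden = [c for c, v in county_counts.items() if v == "D"]
    if not hidden:
        return dict(county_counts)
    share = (state_total - disclosed) / len(hidden)
    return {c: (share if v == "D" else v) for c, v in county_counts.items()}

# Hypothetical layer-operation counts for four counties.
counts = {"Adams": 120, "Benton": "D", "Clark": 60, "Dixon": "D"}
print(fill_suppressed(counts, state_total=300))
# {'Adams': 120, 'Benton': 60.0, 'Clark': 60, 'Dixon': 60.0}
```

A real implementation would weight the shares (e.g., by farm counts per county) rather than splitting equally, but the accounting constraint is the same: the filled values must reconcile with the published total.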
Optimization of Interior Permanent Magnet Motor by Quality Engineering and Multivariate Analysis
NASA Astrophysics Data System (ADS)
Okada, Yukihiro; Kawase, Yoshihiro
This paper describes a method of optimization based on the finite element method, using quality engineering and multivariate analysis as the optimization techniques. The method consists of two steps: at Step 1, the influence of each design parameter on the output is quantified; at Step 2, the number of FEM calculations is reduced. That is, the optimal combination of design parameters that satisfies the required characteristics can be found efficiently. In addition, the method is applied to the design of an IPM motor to reduce torque ripple. The final shape maintains average torque while cutting torque ripple by 65%. Furthermore, the amount of permanent magnet material can be reduced.
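The Step 1 idea, quantifying each parameter's influence from a small orthogonal array of runs, can be sketched as below. The torque-ripple function, parameter names, and levels are entirely made up to stand in for FEM evaluations; this is a generic Taguchi-style main-effects calculation, not the authors' procedure.

```python
# L4 orthogonal array: 3 two-level factors covered in only 4 runs.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def torque_ripple(magnet_depth, slot_open, rib_width):
    """Stand-in for an FEM evaluation (entirely fictitious response)."""
    return 10 - 3 * magnet_depth + 2 * slot_open - 1 * rib_width

responses = [torque_ripple(*run) for run in L4]

# Main effect of each factor: mean response at level 1 minus level 0.
effects = []
for f in range(3):
    lvl1 = [r for run, r in zip(L4, responses) if run[f] == 1]
    lvl0 = [r for run, r in zip(L4, responses) if run[f] == 0]
    effects.append(sum(lvl1) / 2 - sum(lvl0) / 2)

# Choose the level of each factor that lowers the ripple.
best = [1 if e < 0 else 0 for e in effects]
print(effects, best)   # [-3.0, 2.0, -1.0] [1, 0, 1]
```

Because the array is orthogonal, four runs suffice to rank the three factors, which is how the second step cuts down the number of FEM calculations.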
Parental Perceptions and Recommendations of Computing Majors: A Technology Acceptance Model Approach
ERIC Educational Resources Information Center
Powell, Loreen; Wimmer, Hayden
2017-01-01
Currently, there are more technology-related jobs than there are graduates to fill them. Understanding user acceptance of computing degrees is the first step in increasing enrollment in computing fields. Additionally, valid measurement scales for predicting user acceptance of Information Technology degree programs are required. The majority…
Exploring Teacher Beliefs in Teaching EAP at Low Proficiency Levels
ERIC Educational Resources Information Center
Alexander, Olwyn
2012-01-01
Teaching English for Academic Purposes (EAP) requires teachers experienced in Communicative Language Teaching (CLT) to acquire additional skills, abilities and approaches. Beliefs about CLT teaching may not be appropriate for teaching EAP, especially to low level learners. Making teachers aware of their beliefs is the first step in helping them to…
Farrugia, Kevin J; Deacon, Paul; Fraser, Joanna
2014-03-01
There are a number of studies discussing recent developments of a one-step fluorescent cyanoacrylate process. This study is a pseudo-operational trial comparing an example of a one-step fluorescent cyanoacrylate product, Lumicyano™, with the two recommended techniques for plastic carrier bags: cyanoacrylate fuming followed by basic yellow 40 (BY40) dyeing, and powder suspensions. One hundred plastic carrier bags were collected from the place of work and the items were treated as found without any additional fingermark deposition. The bags were split into three groups, and after treatment with the three techniques a comparable number of fingermarks was detected by each technique (average of 300 fingermarks). The items treated with Lumicyano™ were sequentially processed with BY40 and an additional 43 new fingermarks were detected. Lumicyano™ appears to be a suitable technique for the development of fingermarks on plastic carrier bags, and it can help save lab space and time as it does not require dyeing or drying procedures. Furthermore, contrary to other one-step cyanoacrylate products, existing cyanoacrylate cabinets do not require any modification for the treatment of articles with Lumicyano™. To date, there are few peer-reviewed articles in the literature on trials related to Lumicyano™, and this study aims to help fill this gap. © 2013.
Application of Additive Manufacturing in Oral and Maxillofacial Surgery.
Farré-Guasch, Elisabet; Wolff, Jan; Helder, Marco N; Schulten, Engelbert A J M; Forouzanfar, Tim; Klein-Nulend, Jenneke
2015-12-01
Additive manufacturing is the process of joining materials to create objects from digital 3-dimensional (3D) model data, which is a promising technology in oral and maxillofacial surgery. The management of lost craniofacial tissues owing to congenital abnormalities, trauma, or cancer treatment poses a challenge to oral and maxillofacial surgeons. Many strategies have been proposed for the management of such defects, but autogenous bone grafts remain the gold standard for reconstructive bone surgery. Nevertheless, cell-based treatments using adipose stem cells combined with osteoconductive biomaterials or scaffolds have become a promising alternative to autogenous bone grafts. Such treatment protocols often require customized 3D scaffolds that fulfill functional and esthetic requirements, provide adequate blood supply, and meet the load-bearing requirements of the head. Currently, such customized 3D scaffolds are being manufactured using additive manufacturing technology. In this review, 2 of the current and emerging modalities for reconstruction of oral and maxillofacial bone defects are highlighted and discussed, namely human maxillary sinus floor elevation as a valid model to test bone tissue-engineering approaches enabling the application of 1-step surgical procedures and seeding of Good Manufacturing Practice-level adipose stem cells on computer-aided manufactured scaffolds to reconstruct large bone defects in a 2-step surgical procedure, in which cells are expanded ex vivo and seeded on resorbable scaffolds before implantation. Furthermore, imaging-guided tissue-engineering technologies to predetermine the surgical location and to facilitate the manufacturing of custom-made implants that meet the specific patient's demands are discussed. 
The potential of tissue-engineered constructs designed for the repair of large oral and maxillofacial bone defects in load-bearing situations in a 1-step surgical procedure combining these 2 innovative approaches is particularly emphasized. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Process Development for Automated Solar Cell and Module Production. Task 4: Automated Array Assembly
NASA Technical Reports Server (NTRS)
1979-01-01
A baseline sequence for the manufacture of solar cell modules was specified. Starting with silicon wafers, the process goes through damage etching, texture etching, junction formation, plasma edge etch, aluminum back surface field formation, and screen printed metallization to produce finished solar cells. The cells were then series connected on a ribbon and bonded into a finished glass tedlar module. A number of steps required additional developmental effort to verify technical and economic feasibility. These steps include texture etching, plasma edge etch, aluminum back surface field formation, array layup and interconnect, and module edge sealing and framing.
NASA Astrophysics Data System (ADS)
Cherri, Abdallah K.; Alam, Mohammed S.
1998-07-01
Highly efficient two-step recoded and one-step nonrecoded trinary signed-digit (TSD) carry-free adder-subtracters are presented on the basis of redundant-bit representation for the operands' digits. It has been shown that only 24 (30) minterms are needed to implement the two-step recoded (the one-step nonrecoded) TSD addition for any operand length. Optical implementation of the proposed arithmetic can be carried out by use of correlation- or matrix-multiplication-based schemes, saving 50% of the system memory. Furthermore, we present four different multiplication designs based on our proposed recoded and nonrecoded TSD adders. Our multiplication designs require a small number of reduced minterms to generate the multiplication partial products. Finally, a recently proposed pipelined iterative-tree algorithm can be used in the TSD adder-multipliers; consequently, efficient use of all available adders can be made.
Saito, Maiko; Kurosawa, Yae; Okuyama, Tsuneo
2012-02-01
Antibody purification using proteins A and G has been a standard method for research and industrial processes. The conventional method, however, is a three-step process that includes buffer exchange before chromatography. In addition, proteins A and G require low-pH elution, which causes antibody aggregation and reduces the antibody's immunoreactivity. This report proposes a two-step method using hydroxyapatite chromatography and membrane filtration, without proteins A and G. This novel method shortens the running time per cycle to one-third that of the conventional method. Using our two-step method, 90.2% of the monoclonal antibodies purified were recovered in the elution fraction, the purity achieved was >90%, and most of the antigen-specific activity was retained. This report suggests that the two-step method using hydroxyapatite chromatography and membrane filtration should be considered as an alternative to purification using proteins A and G.
5 CFR 531.504 - Level of performance required for quality step increase.
Code of Federal Regulations, 2010 CFR
2010-01-01
... step increase. 531.504 Section 531.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER THE GENERAL SCHEDULE Quality Step Increases § 531.504 Level of performance required for quality step increase. A quality step increase shall not be required but may be granted only...
Ötes, Ozan; Flato, Hendrik; Winderl, Johannes; Hubbuch, Jürgen; Capito, Florian
2017-10-10
The protein A capture step is the main cost driver in downstream processing, with high attrition costs, especially when protein A resin is not used through the end of its lifetime. Here we describe a feasibility study transferring a batch downstream process to a hybrid process, aimed at replacing batch protein A capture chromatography with a continuous capture step while leaving the polishing steps unchanged to minimize the process adaptations required compared to a batch process. 35 g of antibody were purified using the hybrid approach, resulting in product quality and step yield comparable to the batch process. Productivity for the protein A step could be increased by up to 420%, reducing buffer amounts by 30-40% and showing robustness for at least 48 h of continuous run time. Additionally, to enable its potential application in a clinical trial manufacturing environment, cost of goods was compared for the protein A step between the hybrid process and the batch process, showing a 300% cost reduction, depending on processed volumes and batch cycles. Copyright © 2017 Elsevier B.V. All rights reserved.
49 CFR 231.2 - Hopper cars and high-side gondolas with fixed ends.
Code of Federal Regulations, 2011 CFR
2011-10-01
... car, except buffer block, brake shaft, brake wheel, brake step, or uncoupling lever shall extend to... knuckle when closed with coupler horn against the buffer block or end sill, and no other part of end of... outer face of buffer block. (2) Carriers are not required to make changes to secure additional end...
Rep. Quigley, Mike [D-IL-5
2009-06-18
House - 07/23/2009 Referred to the Subcommittee on Immigration, Citizenship, Refugees, Border Security, and International Law. Tracker: This bill has the status Introduced.
State Standard-Setting Processes in Brief. State Academic Standards: Standard-Setting Processes
ERIC Educational Resources Information Center
Thomsen, Jennifer
2014-01-01
Concerns about academic standards, whether created by states from scratch or adopted by states under the Common Core State Standards (CCSS) banner, have drawn widespread media attention and are at the top of many state policymakers' priority lists. Recently, a number of legislatures have required additional steps, such as waiting periods for…
Hemann, Brian A; Durning, Steven J; Kelly, William F; Dong, Ting; Pangaro, Louis N; Hemmer, Paul A
2015-04-01
To determine how students who are referred to a competency committee for concern over performance, and ultimately judged not to require remediation, perform during internship. Uniformed Services University of the Health Sciences' students who graduated between 2007 and 2011 were included in this study. We compared the performance during internship of three groups: students who were referred to the internal medicine competency committee for review who met passing criterion, students who were reviewed by the internal medicine competency committee who were determined not to have passed the clerkship and were prescribed remediation, and students who were never reviewed by this competency committee. Program Director survey results and United States Medical Licensing Examination (USMLE) Step 3 examination results were used as the outcomes of interest. The overall survey response rate for this 5-year cohort was 81% (689/853). 102 students were referred to this competency committee for review. 63/102 students were reviewed by this competency committee, given passing grades in the internal medicine clerkship, and were not required to do additional remediation. 39/102 students were given less than passing grades by this competency committee and required to perform additional clinical work in the department of medicine to remediate their performance. 751 students were never presented to this competency committee. Compared to students who were never presented for review, the group of reviewed students who did not require remediation was 5.6 times more likely to receive low internship survey ratings in the realm of professionalism, 8.6 times more likely to receive low ratings in the domain of medical expertise, and had a higher rate of USMLE Step 3 failure (9.4% vs. 2.8%). 
When comparing the reviewed group to students who were reviewed and also required remediation, the only significant difference between groups was in professionalism ratings, with 50% of the group requiring remediation garnering low ratings compared to 18% of the reviewed group. Students who are referred to a committee for review following completion of their internal medicine clerkship are more likely to receive poor ratings in internship and fail USMLE Step 3 compared to students whose performance in the medicine clerkship does not trigger a committee review. These findings provide validity evidence for our competency committee review in that the students identified as requiring further clinical work had significantly higher rates of poor ratings in professionalism than students who were reviewed by the competency committee but not required to remediate. Additionally, students reviewed but not required to remediate were nonetheless at risk of low internship ratings, suggesting that these students might need some intervention prior to graduation. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
77 FR 43481 - Taking Additional Steps to Address the National Emergency With Respect to Somalia
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-24
... Additional Steps to Address the National Emergency With Respect to Somalia Presidential Documents... Additional Steps to Address the National Emergency With Respect to Somalia By the authority vested in me as... order to take additional steps to deal with the national emergency with respect to the situation in...
Qian, F; Li, G; Ruan, H; Jing, H; Liu, L
1999-09-10
A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of the reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {-1, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming of the illumination of data arrays, any complex logic operations of multiple variables can be realized without additional temporal latency of the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.
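The digit-set-restricted mapping above is specific to the paper, but the underlying two-step idea, in which each digit pair is first mapped to a transfer digit and an interim sum digit and the two words are then added position-wise with no carry propagation, can be sketched with the classic carry-free MSD addition rule. The Python sketch below uses the standard {-1, 0, 1} digit set and a conventional reference-digit rule, not the authors' restricted mapping:

```python
# Sketch of classic two-step carry-free modified signed-digit (MSD) addition.
# Digits are in {-1, 0, 1}, least-significant first. This illustrates the
# general two-step scheme (transfer word + interim sum word, added digit-wise
# without carry); the paper's digit-set-restricted variant differs in detail.

def msd_to_int(digits):
    """Value of an MSD word, least-significant digit first."""
    return sum(d * (2 ** i) for i, d in enumerate(digits))

def msd_add(x, y):
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    s = [x[i] + y[i] for i in range(n)]      # position-wise sums in {-2..2}
    t, w = [0] * n, [0] * n                  # transfer and interim sum words
    for i in range(n):
        lower = s[i - 1] if i > 0 else 0     # reference to next-lower position
        if s[i] == 2:
            t[i], w[i] = 1, 0
        elif s[i] == 1:
            t[i], w[i] = (1, -1) if lower >= 0 else (0, 1)
        elif s[i] == -1:
            t[i], w[i] = (0, -1) if lower >= 0 else (-1, 1)
        elif s[i] == -2:
            t[i], w[i] = -1, 0
    # Step 2: digit-wise sum of the interim word and the shifted transfer
    # word; the step-1 rule guarantees each digit stays in {-1, 0, 1}.
    return [w[0]] + [w[i] + t[i - 1] for i in range(1, n)] + [t[n - 1]]
```

Because the second step is purely digit-wise, each output digit depends only on a fixed neighborhood of input digits, which is what makes parallel optical or binary-logic implementations of such adders attractive.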
NASA Astrophysics Data System (ADS)
Reich, Jason; Wang, Linlin; Johnson, Duane
2013-03-01
We detail the results of a Density Functional Theory (DFT) based study of hydrogen desorption, including thermodynamics and kinetics with(out) catalytic dopants, on stepped (110) rutile and nanocluster MgH2. We investigate competing configurations (optimal surface and nanoparticle configurations) using simulated annealing with additional converged results at 0 K, necessary for finding the low-energy, doped MgH2 nanostructures. Thermodynamics of hydrogen desorption from unique dopant sites will be shown, as well as activation energies using the Nudged Elastic Band algorithm. To compare to experiment, both stepped structures and nanoclusters are required to understand and predict the effects of ball milling. We demonstrate how these model systems relate to the intermediate-sized structures typically seen in ball milling experiments.
Pozzolanic filtration/solidification of radionuclides in nuclear reactor cooling water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Englehardt, J.D.; Peng, C.
1995-12-31
Laboratory studies to investigate the feasibility of one- and two-step processes for precipitating/coprecipitating radionuclides from nuclear reactor cooling water, filtering with pozzolanic filter aid, and solidifying are reported in this paper. In the one-step process, ferrocyanide salt and excess lime are added ahead of the filter, and the resulting filter cake solidifies by a pozzolanic reaction. The two-step process involves addition of solidifying agents subsequent to filtration. It was found that high-surface-area diatomaceous synthetic calcium silicate powders, sold commercially as functional fillers and carriers, adsorb nickel isotopes from solution at neutral and slightly basic pH. Addition of the silicates to cooling water allowed removal of the tested metal isotopes (nickel, iron, manganese, cobalt, and cesium) simultaneously at neutral to slightly basic pH. The lime-to-diatomite ratio was the compositional variable with the greatest influence on final strength, with higher lime ratios giving higher strength. Diatomaceous earth filter aids manufactured without sodium fluxes exhibited higher pozzolanic activity. Pozzolanic filter cake solidified with sodium silicate and a ratio of 0.45 parts lime to 1 part diatomite had compressive strength ranging from 470 to 595 psi at a 90% confidence level. Leachability indices of all tested metals in the solidified waste were acceptable. In light of the typical requirement of removing iron and the desirability of control over process pH, a two-step process involving addition of Portland cement to the filter cake may be most generally applicable.
The effect of a novel minimally invasive strategy for infected necrotizing pancreatitis.
Tong, Zhihui; Shen, Xiao; Ke, Lu; Li, Gang; Zhou, Jing; Pan, Yiyuan; Li, Baiqiang; Yang, Dongliang; Li, Weiqin; Li, Jieshou
2017-11-01
A step-up approach consisting of multiple minimally invasive techniques has gradually become the mainstream for managing infected pancreatic necrosis (IPN). In the present study, we aimed to compare the safety and efficacy of a novel four-step approach and the conventional approach in managing IPN. According to the treatment strategy, consecutive patients fulfilling the inclusion criteria were assigned to two time intervals to conduct a before-and-after comparison: the conventional group (2010-2011) and the novel four-step group (2012-2013). The conventional approach was essentially open necrosectomy for any patient who failed percutaneous drainage of infected necrosis. The novel approach consisted of four steps applied in sequence: percutaneous drainage, negative pressure irrigation, endoscopic necrosectomy and open necrosectomy. The primary endpoint was major complications (new-onset organ failure, sepsis or local complications, etc.). Secondary endpoints included mortality during hospitalization, need of emergency surgery, duration of organ failure and sepsis, etc. Of the 229 recruited patients, 92 were treated with the conventional approach and the remaining 137 were managed with the novel four-step approach. New-onset major complications occurred in 72 patients (78.3%) in the conventional group and 75 patients (54.7%) in the four-step group (p < 0.001). For other important endpoints, although there was no statistical difference in mortality between the two groups (p = 0.403), significantly fewer patients in the four-step group required emergency surgery when compared with the conventional group [14.6% (20/137) vs. 45.6% (42/92), p < 0.001]. In addition, stratified analysis revealed that the four-step group presented a significantly lower incidence of new-onset organ failure and other major complications in patients with the most severe type of AP. 
Compared with the conventional approach, the novel four-step approach significantly reduced the rate of new-onset major complications and the requirement for emergency operations in treating IPN, especially in patients with the most severe type of acute pancreatitis.
Biochemical analysis with microfluidic systems.
Bilitewski, Ursula; Genrich, Meike; Kadow, Sabine; Mersal, Gaber
2003-10-01
Microfluidic systems are capillary networks of varying complexity fabricated originally in silicon, but nowadays in glass and polymeric substrates. Flow of liquid is mainly controlled by use of electroosmotic effects, i.e. application of electric fields, in addition to pressurized flow, i.e. application of pressure or vacuum. Because electroosmotic flow rates depend on the charge densities on the walls of capillaries, they are influenced by substrate material, fabrication processes, surface pretreatment procedures, and buffer additives. Microfluidic systems combine the properties of capillary electrophoretic systems and flow-through analytical systems, and thus biochemical analytical assays have been developed utilizing and integrating both aspects. Proteins, peptides, and nucleic acids can be separated because of their different electrophoretic mobility; detection is achieved with fluorescence detectors. For protein analysis, in particular, interfaces between microfluidic chips and mass spectrometers were developed. Further levels of integration of required sample-treatment steps were achieved by integration of protein digestion by immobilized trypsin and amplification of nucleic acids by the polymerase chain reaction. Kinetic constants of enzyme reactions were determined by adjusting different degrees of dilution of enzyme substrates or inhibitors within a single chip utilizing mainly the properties of controlled dosing and mixing liquids within a chip. For analysis of kinase reactions, however, a combination of a reaction step (enzyme with substrate and inhibitor) and a separation step (enzyme substrate and reaction product) was required. Microfluidic chips also enable separation of analytes from sample matrix constituents, which can interfere with quantitative determination, if they have different electrophoretic mobilities. 
In addition to analysis of nucleic acids and enzymes, immunoassays are the third group of analytical assays performed in microfluidic chips. They utilize either affinity capillary electrophoresis as a homogeneous assay format, or immobilized antigens or antibodies in heterogeneous assays with serial supply of reagents and washing solutions.
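As an illustration of the kinetic-constant determination mentioned above, where initial reaction rates are measured at a series of on-chip dilutions of an enzyme substrate, the sketch below fits Michaelis-Menten constants to a dilution series by least squares. The concentrations, rates, and grid-search fit are illustrative stand-ins, not values or methods from the paper:

```python
# Sketch: estimating Michaelis-Menten constants (Vmax, Km) from initial
# rates measured at a series of on-chip substrate dilutions. All numbers
# are hypothetical; the grid-search least-squares fit stands in for
# whatever regression a real analysis would use.

def mm_rate(s, vmax, km):
    """Michaelis-Menten initial rate at substrate concentration s."""
    return vmax * s / (km + s)

# Hypothetical dilution series (substrate concentrations, uM) and the
# corresponding noiseless initial rates.
substrate = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
true_vmax, true_km = 8.0, 5.0
rates = [mm_rate(s, true_vmax, true_km) for s in substrate]

def fit_mm(s_vals, v_vals):
    """Least-squares grid search over (vmax, km) in 0.5-unit steps."""
    best = None
    for vmax in [0.5 * k for k in range(1, 41)]:      # 0.5 .. 20.0
        for km in [0.5 * k for k in range(1, 41)]:
            sse = sum((v - mm_rate(s, vmax, km)) ** 2
                      for s, v in zip(s_vals, v_vals))
            if best is None or sse < best[0]:
                best = (sse, vmax, km)
    return best[1], best[2]

vmax_est, km_est = fit_mm(substrate, rates)
```

The microfluidic contribution here is purely the generation of the dilution series by controlled dosing and mixing; the parameter estimation itself is ordinary nonlinear regression.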
A Geometric Analysis to Protect Manned Assets from Newly Launched Objects - Cola Gap Analysis
NASA Technical Reports Server (NTRS)
Hametz, Mark E.; Beaver, Brian A.
2013-01-01
A safety risk was identified for the International Space Station (ISS) by The Aerospace Corporation, whereby the ISS would be unable to react to a conjunction with a newly launched object following the end of the launch Collision Avoidance (COLA) process. Once an object is launched, there is a finite period of time required to track, catalog, and evaluate that new object as part of standard on-orbit COLA screening processes. Additionally, should a conjunction be identified, there is an additional period of time required to plan and execute a collision avoidance maneuver. While the computed prelaunch probability of collision with any object is extremely low, NASA/JSC has requested that all US launches take additional steps to protect the ISS during this "COLA gap" period. This paper details a geometric-based COLA gap analysis method developed by the NASA Launch Services Program to determine if launch window cutouts are required to mitigate this risk. Additionally, this paper presents the results of several missions where this process has been used operationally.
USE OF THE SDO POINTING CONTROLLERS FOR INSTRUMENT CALIBRATION MANEUVERS
NASA Technical Reports Server (NTRS)
Vess, Melissa F.; Starin, Scott R.; Morgenstern, Wendy M.
2005-01-01
During the science phase of the Solar Dynamics Observatory mission, the three science instruments require periodic instrument calibration maneuvers with a frequency of up to once per month. The command sequences for these maneuvers vary in length from a handful of steps to over 200 steps, and individual steps vary in size from 5 arcsec per step to 22.5 degrees per step. Early in the calibration maneuver development, it was determined that the original attitude sensor complement could not meet the knowledge requirements for the instrument calibration maneuvers in the event of a sensor failure. Because the mission must be single fault tolerant, an attitude determination trade study was undertaken to determine the impact of adding an additional attitude sensor versus developing alternative, potentially complex, methods of performing the maneuvers in the event of a sensor failure. To limit the impact to the science data capture budget, these instrument calibration maneuvers must be performed as quickly as possible while maintaining the tight pointing and knowledge required to obtain valid data during the calibration. To this end, the decision was made to adapt a linear pointing controller by adjusting gains and adding an attitude limiter so that it would be able to slew quickly and still achieve steady pointing once on target. During the analysis of this controller, questions arose about the stability of the controller during slewing maneuvers due to the combination of the integral gain, attitude limit, and actuator saturation. Analysis was performed and a method for disabling the integral action while slewing was incorporated to ensure stability. A high fidelity simulation is used to simulate the various instrument calibration maneuvers.
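The gain-adjustment strategy described above, an attitude-error limiter combined with disabling the integral action while slewing, can be sketched for a single axis. The gains, limits, and inertia below are illustrative placeholders, not SDO flight parameters:

```python
# Single-axis sketch of the slew strategy described above: a PID-type
# attitude controller whose attitude error is limited and whose integral
# term is frozen while the raw error exceeds the limit (i.e., during a
# slew), to avoid integrator windup under actuator saturation. All gains
# and limits are illustrative placeholders, not SDO flight values.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def simulate_slew(theta0, target, dt=0.1, steps=3000):
    kp, ki, kd = 2.0, 0.2, 4.0   # proportional, integral, rate gains
    err_limit = 0.2              # rad, attitude-error limiter
    torque_limit = 1.0           # N*m, actuator saturation
    inertia = 10.0               # kg*m^2, single-axis inertia
    theta, omega, integ = theta0, 0.0, 0.0
    for _ in range(steps):
        raw_err = target - theta
        err = clamp(raw_err, -err_limit, err_limit)
        slewing = abs(raw_err) > err_limit
        if not slewing:          # freeze the integrator during the slew
            integ += err * dt
        torque = clamp(kp * err + ki * integ - kd * omega,
                       -torque_limit, torque_limit)
        omega += (torque / inertia) * dt   # semi-implicit Euler dynamics
        theta += omega * dt
    return theta

final = simulate_slew(0.0, 0.4)   # 0.4 rad slew, should settle on target
```

Freezing the integrator while the unlimited attitude error exceeds the limiter threshold prevents windup during the saturated portion of the slew, so the stored integral cannot drive a large overshoot once the target is reached.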
Dual salt precipitation for the recovery of a recombinant protein from Escherichia coli.
Balasundaram, Bangaru; Sachdeva, Soam; Bracewell, Daniel G
2011-01-01
When considering worldwide demand for biopharmaceuticals, it becomes necessary to consider alternative process strategies to improve the economics of manufacturing such molecules. To address this issue, the current study investigates precipitation to selectively isolate the product or remove contaminants and thus assist the initial purification of an intracellular protein. The hypothesis tested was that the combination of two or more precipitating agents would alter the solubility profile of the product through synergistic or antagonistic effects. This principle was investigated through several combinations of ammonium sulfate and sodium citrate at different ratios. A synergistic effect mediated by a known electrostatic interaction of citrate ions with Fab', in addition to the typical salting-out effects, was observed. On the basis of the results of the solubility studies, a two-step primary recovery route was investigated. In the first step, termed conditioning (post-homogenization and before clarification), addition of 0.8 M ammonium sulfate extracted 30% additional product. Clarification performance measured using a scale-down disc stack centrifugation mimic determined a four-fold reduction in centrifuge size requirements. Dual salt precipitation in the second step resulted in >98% recovery of Fab' while simultaneously removing 36% of the contaminant proteins. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
QUICR-learning for Multi-Agent Coordination
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2006-01-01
Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t′ > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second credit assignment problem is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards learning" (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a tenfold increase in performance over existing methods.
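The immediate counterfactual reward at the heart of this approach can be illustrated with a toy congestion problem: an agent's reward is the global reward minus the global reward recomputed with that agent's action counterfactually removed. The reward model below is a minimal stand-in, not the authors' exact domain or update rule:

```python
# Toy illustration of an immediate counterfactual (difference) reward of the
# kind QUICR-learning builds on: an agent's reward is the global reward minus
# the global reward recomputed with that agent's action removed. The
# congestion model is a made-up stand-in, not the paper's exact domain.

def global_reward(route_counts):
    """System reward: congestion penalty grows quadratically per route."""
    return -sum(c * c for c in route_counts)

def counts(actions, n_routes=2):
    """Tally how many agents chose each route."""
    c = [0] * n_routes
    for a in actions:
        c[a] += 1
    return c

def quicr_reward(actions, i):
    """Difference reward: G(z) - G(z with agent i's action removed)."""
    full = global_reward(counts(actions))
    without_i = global_reward(counts(actions[:i] + actions[i + 1:]))
    return full - without_i

actions = [0, 0, 0, 1]                    # three agents crowd route 0
r_congested = quicr_reward(actions, 0)    # an agent on the crowded route
r_uncrowded = quicr_reward(actions, 3)    # the agent on the empty route
```

The agent on the crowded route receives a much lower difference reward than the agent on the empty route, so each agent's learning signal stays aligned with system performance while remaining sensitive to its own action, which is the property that makes such rewards useful inside a Q-update.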
40 CFR 141.133 - Compliance requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... specified by § 141.135(c). Systems may begin monitoring to determine whether Step 1 TOC removals can be met... the Step 1 requirements in § 141.135(b)(2) and must therefore apply for alternate minimum TOC removal (Step 2) requirements, is not eligible for retroactive approval of alternate minimum TOC removal (Step 2...
15 CFR 732.6 - Steps for other requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Steps for other requirements. 732.6...) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS STEPS FOR USING THE EAR § 732.6 Steps for other requirements. Sections 732.1 through 732.4 of this part are useful in...
Rep. Stupak, Bart [D-MI-1]
2009-01-13
House - 02/04/2009 Referred to the Subcommittee on National Parks, Forests and Public Lands. (All Actions) Notes: For further action, see H.R.146, which became Public Law 111-11 on 3/30/2009. Tracker: This bill has the status Introduced.
Semantic-episodic interactions in the neuropsychology of disbelief.
Ladowsky-Brooks, Ricki; Alcock, James E
2007-03-01
The purpose of this paper is to outline ways in which characteristics of memory functioning determine truth judgements regarding verbally transmitted information. Findings on belief formation from several areas of psychology were reviewed in order to identify general principles that appear to underlie the designation of information in memory as "true" or "false". Studies on belief formation have demonstrated that individuals have a tendency to encode information as "true" and that an additional encoding step is required to tag information as "false". This additional step can involve acquisition and later recall of semantic-episodic associations between message content and contextual cues that signal that information is "false". Semantic-episodic interactions also appear to prevent new information from being accepted as "true" through encoding bias or the assignment of a "false" tag to data that is incompatible with prior knowledge. It is proposed that truth judgements are made through a combined weighting of the reliability of the information source and the compatibility of this information with already stored data. This requires interactions in memory. Failure to integrate different types of memories, such as semantic and episodic memories, can arise from mild hippocampal dysfunction and might result in delusions.
A collaborative approach to lean laboratory workstation design reduces wasted technologist travel.
Yerian, Lisa M; Seestadt, Joseph A; Gomez, Erron R; Marchant, Kandice K
2012-08-01
Lean methodologies have been applied in many industries to reduce waste. We applied Lean techniques to redesign laboratory workstations with the aim of reducing the number of times employees must leave their workstations to complete their tasks. At baseline, in the 68 workflows (aggregates or sequences of process steps) studied, 251 (38%) of 664 tasks required workers to walk away from their workstations. After analysis and redesign, only 59 (9%) of the 664 tasks required technologists to leave their workstations to complete these tasks. On average, 3.4 travel events were removed for each workstation. Time studies in a single laboratory section demonstrated that workers spend 8 to 70 seconds in travel each time they step away from the workstation. The redesigned workstations will allow employees to spend less time travelling around the laboratory. Additional benefits include employee training in waste identification, improved overall laboratory layout, and identification of other process improvement opportunities in our laboratory.
Cp2TiX Complexes for Sustainable Catalysis in Single-Electron Steps.
Richrath, Ruben B; Olyschläger, Theresa; Hildebrandt, Sven; Enny, Daniel G; Fianu, Godfred D; Flowers, Robert A; Gansäuer, Andreas
2018-04-25
We present a combined electrochemical, kinetic, and synthetic study with a novel and easily accessible class of titanocene catalysts for catalysis in single-electron steps. The tailoring of the electronic properties of our Cp2TiX catalysts, which are prepared in situ from readily available Cp2TiX2, is achieved by varying the anionic ligand X. Of the complexes investigated, Cp2TiOMs proved to be either equal or substantially superior to the best catalysts developed earlier. The kinetic and thermodynamic properties pertinent to catalysis have been determined. They allow a mechanistic understanding of the subtle interplay of properties required for an efficient oxidative addition and reduction. Therefore, our study highlights that efficient catalysts do not require the elaborate covalent modification of the cyclopentadienyl ligands. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mapping the ecological networks of microbial communities.
Xiao, Yandong; Angulo, Marco Tulio; Friedman, Jonathan; Waldor, Matthew K; Weiss, Scott T; Liu, Yang-Yu
2017-12-11
Mapping the ecological networks of microbial communities is a necessary step toward understanding their assembly rules and predicting their temporal behavior. However, existing methods require assuming a particular population dynamics model, which is not known a priori. Moreover, those methods require fitting longitudinal abundance data, which are often not informative enough for reliable inference. To overcome these limitations, here we develop a new method based on steady-state abundance data. Our method can infer the network topology and inter-taxa interaction types without assuming any particular population dynamics model. Additionally, when the population dynamics is assumed to follow the classic Generalized Lotka-Volterra model, our method can infer the inter-taxa interaction strengths and intrinsic growth rates. We systematically validate our method using simulated data, and then apply it to four experimental data sets. Our method represents a key step towards reliable modeling of complex, real-world microbial communities, such as the human gut microbiota.
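When GLV dynamics are assumed, the steady-state constraint is linear in the unknown parameters: at any steady state where taxon i is present, r_i + Σ_j a_ij x_j = 0. The sketch below builds steady states of a small hypothetical 3-taxon community and recovers one row of the interaction matrix from them. Because the constraints are homogeneous, the row is identifiable only up to scale; here the self-interaction a_11 = -1 is assumed known as a normalization:

```python
# Sketch: recovering one row of a Generalized Lotka-Volterra (GLV)
# interaction matrix from steady-state abundance data. At a steady state
# containing taxon i, r_i + sum_j a_ij x_j = 0. The 3-taxon community and
# all parameter values below are illustrative, and a_11 = -1 is assumed
# known to fix the overall scale of the row.

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        for k in range(col + 1, n):
            f = M[k][col] / M[col][col]
            for c in range(col, n + 1):
                M[k][c] -= f * M[col][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# "True" community (unknown to the inference step below).
A_TRUE = [[-1.0, -0.5, 0.2], [0.3, -1.0, -0.4], [-0.2, 0.1, -1.0]]
R_TRUE = [1.0, 0.8, 0.6]

def steady_state(present):
    """Steady state of a sub-community: solve A_s x_s = -r_s on `present`."""
    sub_A = [[A_TRUE[i][j] for j in present] for i in present]
    sub_b = [-R_TRUE[i] for i in present]
    x_s = gauss_solve(sub_A, sub_b)
    x = [0.0] * len(A_TRUE)
    for k, i in enumerate(present):
        x[i] = x_s[k]
    return x

# Observed steady states that contain taxon 1 (index 0).
samples = [steady_state(p) for p in ([0], [0, 1], [0, 2])]

# Infer (r_1, a_12, a_13) given the normalization a_11 = -1, from
#   r_1 + a_12 x_2 + a_13 x_3 = x_1   at every sampled steady state.
lhs = [[1.0, x[1], x[2]] for x in samples]
rhs = [x[0] for x in samples]
r1_est, a12_est, a13_est = gauss_solve(lhs, rhs)
```

Only the signs of the recovered coefficients (the interaction types) and the network topology survive without the normalization assumption, which matches the distinction the abstract draws between model-free and GLV-based inference.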
Report of the In Situ Resources Utilization Workshop
NASA Technical Reports Server (NTRS)
Fairchild, Kyle (Editor); Mendell, Wendell W. (Editor)
1988-01-01
The results of a workshop of 50 representatives from the public and private sector which investigated the potential joint development of the key technologies and mechanisms that will enable the permanent habitation of space are presented. The workshop is an initial step to develop a joint public/private assessment of new technology requirements of future space options, to share knowledge on required technologies that may exist in the private sector, and to investigate potential joint technology development opportunities. The majority of the material was produced in 5 working groups: (1) Construction, Assembly, Automation and Robotics; (2) Prospecting, Mining, and Surface Transportation; (3) Biosystems and Life Support; (4) Materials Processing; and (5) Innovative Ventures. In addition to the results of the working groups, preliminary technology development recommendations to assist in near-term development priority decisions are presented. Finally, steps are outlined for potential new future activities and relationships among the public, private, and academic sectors.
Impact of modellers' decisions on hydrological a priori predictions
NASA Astrophysics Data System (ADS)
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2013-07-01
The purpose of this paper is to stimulate a re-thinking of how we, the catchment hydrologists, could become reliable forecasters. A group of catchment modellers predicted the hydrological response of a man-made 6 ha catchment in its initial phase (Chicken Creek) without having access to the observed records. They used conceptually different model families, and their modelling experience differed widely. The prediction exercise was organized in three steps: (1) for the 1st prediction, modellers received a basic data set describing the internal structure of the catchment (somewhat more complete than usually available for a priori predictions in ungauged catchments). They did not obtain time series of stream flow, soil moisture or groundwater response. (2) Before the 2nd, improved prediction they inspected the catchment on-site and attended a workshop where the modellers presented and discussed their first attempts. (3) For their improved 3rd prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step 1. Here, we detail the modellers' decisions in accounting for the various processes based on what they learned during the field visit (step 2) and add the final outcome of step 3, when the modellers made use of additional data. We document the prediction progress as well as the learning process resulting from the availability of added information. For the 2nd and 3rd steps, the progress in prediction quality could be evaluated in relation to individual modelling experience and costs of added information. 
We learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as adding data for improving parameters needed to satisfy model requirements.
Comparing the efficacy of metronome beeps and stepping stones to adjust gait: steps to follow!
Bank, Paulina J M; Roerdink, Melvyn; Peper, C E
2011-03-01
Acoustic metronomes and visual targets have been used in rehabilitation practice to improve pathological gait. In addition, they may be instrumental in evaluating and training instantaneous gait adjustments. The aim of this study was to compare the efficacy of two cue types in inducing gait adjustments, viz. acoustic temporal cues in the form of metronome beeps and visual spatial cues in the form of projected stepping stones. Twenty healthy elderly (aged 63.2 ± 3.6 years) were recruited to walk on an instrumented treadmill at preferred speed and cadence, paced by either metronome beeps or projected stepping stones. Gait adaptations were induced using two manipulations: by perturbing the sequence of cues and by imposing switches from one cueing type to the other. Responses to these manipulations were quantified in terms of step-length and step-time adjustments, the percentage correction achieved over subsequent steps, and the number of steps required to restore the relation between gait and the beeps or stepping stones. The results showed that perturbations in a sequence of stepping stones were overcome faster than those in a sequence of metronome beeps. In switching trials, switching from metronome beeps to stepping stones was achieved faster than vice versa, indicating that gait was influenced more strongly by the stepping stones than the metronome beeps. Together these results revealed that, in healthy elderly, the stepping stones induced gait adjustments more effectively than did the metronome beeps. Potential implications for the use of metronome beeps and stepping stones in gait rehabilitation practice are discussed.
NASA Technical Reports Server (NTRS)
Maryniak, Gregg E.
1992-01-01
Prior studies by NASA and the Space Studies Institute have looked at the infrastructure required for the construction of solar power satellites (SPS) and other valuable large space systems from lunar materials. This paper discusses the results of a Lunar Systems Workshop conducted in January 1988. The workshop identified components of the infrastructure that could be implemented in the near future to create a revenue stream. These revenues could then be used to 'bootstrap' the additional elements required to begin the commercial use of nonterrestrial materials.
49 CFR 399.207 - Truck and truck-tractor access requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...
49 CFR 399.207 - Truck and truck-tractor access requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...
49 CFR 399.207 - Truck and truck-tractor access requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...
49 CFR 399.207 - Truck and truck-tractor access requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...
Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J
2013-04-01
Large databases (>10^6 sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High-confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method for both metaproteomic and proteogenomic analysis resulted in twice the number of high-confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the same peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes peptide matching sensitivity for applications requiring large databases, and is especially valuable for proteogenomic and metaproteomic studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
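The two-step workflow can be shown schematically with a toy exact-substring matcher standing in for a real database-search engine; all sequences and identifiers below are fabricated for illustration:

```python
# Schematic of the two-step search strategy described above, using a toy
# exact-substring "search engine" in place of a real MS/MS database-search
# program. All protein sequences, IDs, and peptides are made up.

def search(peptides, protein_db):
    """Return {peptide: [matching protein ids]} for peptides found in the DB."""
    hits = {}
    for pep in peptides:
        matched = [pid for pid, seq in protein_db.items() if pep in seq]
        if matched:
            hits[pep] = matched
    return hits

def decoy(protein_db):
    """Reversed-sequence decoy entries, as commonly used for FDR estimation."""
    return {"DECOY_" + pid: seq[::-1] for pid, seq in protein_db.items()}

# Step 1: a permissive search of the large database.
large_db = {
    "P1": "MKLVAAGHEW", "P2": "GGSPLKTRRE", "P3": "QQWDNAPLVM",
    "P4": "TTSCVLKHGA", "P5": "AAEEDDFFRR",
}
observed_peptides = ["KLVAA", "SPLKT", "DNAPL", "XXXXX"]
primary = search(observed_peptides, large_db)

# Step 2: build the subset database from step-1 matches, merge it with its
# decoys, and search again; confident matches come from this smaller space.
subset_db = {pid: large_db[pid]
             for hits in primary.values() for pid in hits}
second_db = {**subset_db, **decoy(subset_db)}
final = search(observed_peptides, second_db)
```

The second search runs against a far smaller target-decoy space, so a fixed false-discovery threshold discards fewer true matches, which is the sensitivity gain the abstract quantifies.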
Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik
The criticality condition of a reactor is one of the important factors in evaluating reactor operation, and the nuclear fuel breeding ratio is another factor that indicates nuclear fuel sustainability. This study analyzes the effect of burnup step size and cycle operation steps on the evaluated criticality condition of the reactor as well as on the nuclear fuel breeding performance, or breeding ratio (BR). The burnup step is varied on a per-day basis from 10 days up to 800 days, and cycle operation from 1 cycle up to 8 reactor operation cycles. In addition, calculation efficiency as a function of the number of computer processors used to run the analysis (time efficiency of the calculation) has also been investigated. The analysis used a large fast breeder reactor as the reference case and was performed with the established reactor design code JOINT-FR. The results show that the calculated criticality becomes higher, and the breeding ratio lower, for smaller burnup steps (days). Some nuclides contribute to better criticality at smaller burnup steps because of their individual half-lives. The calculation time for different burnup steps correlates with the level of detail of the step calculation, although it is not directly proportional to the number of divisions of the burnup time step.
Thickness measurement by two-sided step-heating thermal imaging
NASA Astrophysics Data System (ADS)
Li, Xiaoli; Tao, Ning; Sun, J. G.; Zhang, Cunlin; Zhao, Yuejin
2018-01-01
Infrared thermal imaging is a promising nondestructive technique for thickness prediction. However, it is usually thought to be appropriate only for testing the thickness of thin objects or near-surface structures. In this study, we present a new two-sided step-heating thermal imaging method which employed a low-cost portable halogen lamp as the heating source and verified it with two stainless steel step wedges with thicknesses ranging from 5 mm to 24 mm. We first derived the one-dimensional step-heating thermography theory, taking into consideration the warm-up time of the lamp, and then applied the nonlinear regression method to fit the experimental data with the derived function to determine the thickness. After evaluating the reliability and accuracy of the experimental results, we concluded that this method is capable of testing thick objects. In addition, we provided criteria for both the required data length and the applicable thickness range of the tested material. It is evident that this method will broaden the application of thermal imaging to thickness measurement.
Manufacturing challenges in the commercial production of recombinant coagulation factor VIII.
Jiang, R; Monroe, T; McRogers, R; Larson, P J
2002-03-01
Advances in gene technology have led to the development of a method to manufacture recombinant coagulation Factor VIII (rFVIII) for haemophilia A. Because rFVIII is a large and complex protein, its commercialization has required that many challenges in manufacturing, purification and processing be overcome. In order to license the first generation of rFVIII (Kogenate) in 1993, Bayer Corporation invested over 10 years in research and manufacturing development. Seven additional years were subsequently devoted to research and manufacturing improvements in order to accomplish the recent licensing of a second rFVIII product (KOGENATE Bayer or Kogenate FS). This product differs from its predecessor in that human albumin is removed from the purification and formulation steps. In addition, fewer chromatography steps are involved, resulting in greater yields per mL of conditioned medium, and a solvent-detergent viral inactivation step replaces the heat-processing step used for the previous product. Despite these changes in manufacturing, the protein backbone and carbohydrate structure of the final rFVIII molecule are identical. The complexity of the production processes is reflected by over 100 000 manufacturing data entries and by 600 quality control tests for each batch of rFVIII. Manufacturers are continuing to develop the next generation of rFVIII, which will be produced without the addition of any human or animal proteins or byproducts. Investments in research, development and manufacturing technology are expected to result in the development of new products with enhanced safety profiles, and in an increase in the production capacity for products that are chronically in short supply.
Mathew, Hanna; Kunde, Wilfried; Herbort, Oliver
2017-05-01
When someone grasps an object, the grasp depends on the intended object manipulation and usually facilitates it. If several object manipulation steps are planned, the first step has been reported to primarily determine the grasp selection. We address whether the grasp can be aligned to the second step, if the second step's requirements exceed those of the first step. Participants grasped and rotated a dial first by a small extent and then by various extents in the opposite direction, without releasing the dial. On average, when the requirements of the first and the second step were similar, participants mostly aligned the grasp to the first step. When the requirements of the second step were considerably higher, participants aligned the grasp to the second step, even though the first step still had a considerable impact. Participants employed two different strategies. One subgroup initially aligned the grasp to the first step and then ceased adjusting the grasp to either step. Another group also initially aligned the grasp to the first step and then switched to aligning it primarily to the second step. The data suggest that participants are more likely to switch to the latter strategy when they experienced more awkward arm postures. In summary, grasp selections for multi-step object manipulations can be aligned to the second object manipulation step, if the requirements of this step clearly exceed those of the first step and if participants have some experience with the task.
NASA Technical Reports Server (NTRS)
Brumfield, M. L. (Compiler)
1984-01-01
A plan to develop a space technology experiments platform (STEP) was examined. NASA Langley Research Center held a STEP Experiment Requirements Workshop on June 29 and 30 and July 1, 1983, at which experiment proposers were invited to present more detailed information on their experiment concept and requirements. A feasibility and preliminary definition study was conducted and the preliminary definition of STEP capabilities and experiment concepts and expected requirements for support services are presented. The preliminary definition of STEP capabilities based on detailed review of potential experiment requirements is investigated. Topics discussed include: Shuttle on-orbit dynamics; effects of the space environment on damping materials; erectable beam experiment; technology for development of very large solar array deployers; thermal energy management process experiment; photovoltaic concentrator pointing dynamics and plasma interactions; vibration isolation technology; flight tests of a synthetic aperture radar antenna with use of STEP.
NASA Astrophysics Data System (ADS)
Utama, M. Iqbal Bakti; Lu, Xin; Zhan, Da; Ha, Son Tung; Yuan, Yanwen; Shen, Zexiang; Xiong, Qihua
2014-10-01
Patterning two-dimensional materials into specific spatial arrangements and geometries is essential for both fundamental studies of materials and practical applications in electronics. However, the currently available patterning methods generally require etching steps that rely on complicated and expensive procedures. We report here a facile patterning method for atomically thin MoSe2 films using stripping with an SU-8 negative resist layer exposed to electron beam lithography. Additional steps of chemical and physical etching were not necessary in this SU-8 patterning method. The SU-8 patterning was used to define a ribbon channel from a field effect transistor of MoSe2 film, which was grown by chemical vapor deposition. The narrowing of the conduction channel area with SU-8 patterning was crucial in suppressing the leakage current within the device, thereby allowing a more accurate interpretation of the electrical characterization results from the sample. An electrical transport study, enabled by the SU-8 patterning, showed a variable range hopping behavior at high temperatures. Electronic supplementary information (ESI) available: Further experiments on patterning and additional electrical characterization data. See DOI: 10.1039/c4nr03817g
Improved Methodology for Developing Cost Uncertainty Models for Naval Vessels
2008-09-01
Growth: Last 700 Years (From: Deegan , 2007b) ................13 Figure 3. Business Rules to Consider: Choosing an acceptable cost risk point...requires an understanding of consequence (From: Deegan , 2007b)...............16 Figure 4. Basic Steps in Estimating Probable Systems Cost (From: Book...her guidance and assistance in the development of this thesis. Additionally, I thank Mr. Chris Deegan , the former Director of Cost Engineering and
Restoring Important Voter Eligibility Requirements to States Act of 2013
Rep. Culberson, John Abney [R-TX-7]
2013-07-25
House - 07/25/2013 Referred to the Committee on House Administration, and in addition to the Committee on Ways and Means, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee... (All Actions) Tracker: This bill has the status Introduced.
Navy Additive Manufacturing: Adding Parts, Subtracting Steps
2015-06-01
complex weapon systems within designed specifications requires extensive routine and preventative maintenance as well as expeditious repairs when...failures occur. These repairs are sometimes complex and often unpredictable in both peace and wartime environments. To keep these weapon systems...basis. The solution is not a simple one, but rather one of high complexity that cannot just be adopted from a big-box store such as Walmart, Target
To require health insurance coverage for the treatment of infertility.
Rep. DeLauro, Rosa L. [D-CT-3]
2018-05-24
House - 05/24/2018 Referred to the Committee on Energy and Commerce, and in addition to the Committees on Oversight and Government Reform, Armed Services, and Veterans' Affairs, for a period to be subsequently determined by the Speaker, in each case for consideration of such... (All Actions) Tracker: This bill has the status Introduced.
Sen. Shaheen, Jeanne [D-NH]
2013-05-21
Senate - 06/04/2013 Committee on Armed Services. Hearings held. Hearings printed: S.Hrg. 113-320. (All Actions) Tracker: This bill has the status Introduced.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-15
... Board to understand the claim. Proposed Bd.R. 41.37(c)(1)(v) would also apply to each means plus function or step plus function recitation in dispute by appellant. Additionally, Proposed Bd.R. 41.37(c)(1.... The ANPRM also proposed to revise this requirement to call for an annotated copy of every means plus...
Carty, N C; Xu, J; Kurup, P; Brouillette, J; Goebel-Goody, S M; Austin, D R; Yuan, P; Chen, G; Correa, P R; Haroutunian, V; Pittenger, C; Lombroso, P J
2012-01-01
Glutamatergic signaling through N-methyl-D-aspartate receptors (NMDARs) is required for synaptic plasticity. Disruptions in glutamatergic signaling are proposed to contribute to the behavioral and cognitive deficits observed in schizophrenia (SZ). One possible source of compromised glutamatergic function in SZ is decreased surface expression of GluN2B-containing NMDARs. STEP61 is a brain-enriched protein tyrosine phosphatase that dephosphorylates a regulatory tyrosine on GluN2B, thereby promoting its internalization. Here, we report that STEP61 levels are significantly higher in the postmortem anterior cingulate cortex and dorsolateral prefrontal cortex of SZ patients, as well as in mice treated with the psychotomimetics MK-801 and phencyclidine (PCP). Accumulation of STEP61 after MK-801 treatment is due to a disruption in the ubiquitin proteasome system that normally degrades STEP61. STEP knockout mice are less sensitive to both the locomotor and cognitive effects of acute and chronic administration of PCP, supporting the functional relevance of increased STEP61 levels in SZ. In addition, chronic treatment of mice with both typical and atypical antipsychotic medications results in a protein kinase A-mediated phosphorylation and inactivation of STEP61 and, consequently, increased surface expression of GluN1/GluN2B receptors. Taken together, our findings suggest that STEP61 accumulation may contribute to the pathophysiology of SZ. Moreover, we show a mechanistic link between neuroleptic treatment, STEP61 inactivation and increased surface expression of NMDARs, consistent with the glutamate hypothesis of SZ. PMID:22781170
Ongoing Development of a Series Bosch Reactor System
NASA Technical Reports Server (NTRS)
Abney, Morgan; Mansell, Matt; DuMez, Sam; Thomas, John; Cooper, Charlie; Long, David
2013-01-01
Future manned missions to deep space or planetary surfaces will undoubtedly require highly robust, efficient, and regenerable life support systems that require minimal consumables. To meet this requirement, NASA continues to explore a Bosch-based carbon dioxide reduction system to recover oxygen from CO2. In order to improve the equivalent system mass of Bosch systems, we seek to design and test a "Series Bosch" system in which two reactors in series are optimized for the two steps of the reaction, as well as to explore the use of in situ materials as carbon deposition catalysts. Here we report recent developments in this effort including assembly and initial testing of a Reverse Water-Gas Shift reactor (RWGSr) and initial testing of two gas separation membranes. The RWGSr was sized to reduce CO2 produced by a crew of four to carbon monoxide as the first stage in a Series Bosch system. The gas separation membranes, necessary to recycle unreacted hydrogen and CO2, were similarly sized. Additionally, we report results of preliminary experiments designed to determine the catalytic properties of Martian and Lunar regolith simulant for the carbon deposition step.
β-Catenin Is Required for Hair-Cell Differentiation in the Cochlea
Hu, Lingxiang; Jacques, Bonnie E.; Mulvaney, Joanna F.; Dabdoub, Alain
2014-01-01
The development of hair cells in the auditory system can be separated into two steps: first, the establishment of progenitors for the sensory epithelium, and second, the differentiation of hair cells. Although the differentiation of hair cells is known to require expression of the basic helix-loop-helix transcription factor Atoh1, the control of cell proliferation in the region of the developing cochlea that will ultimately become the sensory epithelium and the cues that initiate Atoh1 expression remain obscure. We assessed the role of Wnt/β-catenin in both steps in gain- and loss-of-function models in mice. The canonical Wnt pathway mediator, β-catenin, controls the expression of Atoh1. Knock-out of β-catenin inhibited hair-cell, as well as pillar-cell, differentiation from sensory progenitors but was not required to maintain a hair-cell fate once specified. Constitutive activation of β-catenin expanded sensory progenitors by inducing additional cell division and resulted in the differentiation of extra hair cells. Our data demonstrate that β-catenin plays a role in cell division and differentiation in the cochlear sensory epithelium. PMID:24806673
Effect of handpiece maintenance method on bond strength.
Roberts, Howard W; Vandewalle, Kraig S; Charlton, David G; Leonard, Daniel L
2005-01-01
This study evaluated the effect of dental handpiece lubricant on the shear bond strength of three bonding agents to dentin. A lubrication-free handpiece (one that does not require the user to lubricate it) and a handpiece requiring routine lubrication were used in the study. In addition, two different handpiece lubrication methods (automated versus manual application) were also investigated. One hundred and eighty extracted human teeth were ground to expose flat dentin surfaces that were then finished with wet silicon carbide paper. The teeth were randomly divided into 18 groups (n=10). The dentin surface of each specimen was exposed for 30 seconds to water spray from either a lubrication-free handpiece or a lubricated handpiece. Prior to exposure, various lubrication regimens were used on the handpieces that required lubrication. The dentin surfaces were then treated with a total-etch, two-step; a self-etch, two-step; or a self-etch, one-step bonding agent. Resin composite cylinders were bonded to dentin, and the specimens were then thermocycled and tested to failure in shear at seven days. Mean bond strength data were analyzed using Dunnett's multiple comparison test at a 0.05 level of significance. Results indicated that within each of the bonding agents, there were no significant differences in bond strength between the control group and the treatment groups, regardless of the type of handpiece or use of routine lubrication.
In-Space Cryogenic Propellant Depot Stepping Stone
NASA Technical Reports Server (NTRS)
Howell, Joe T.; Mankins, John C.; Fikes, John C.
2005-01-01
An In-Space Cryogenic Propellant Depot (ISCPD) is an important stepping stone to provide the capability to preposition, store, manufacture, and later use the propellants for Earth-Neighborhood campaigns and beyond. An in-space propellant depot will provide affordable propellants and other similar consumables to support the development of sustainable and affordable exploration strategies as well as commercial space activities. An in-space propellant depot not only requires technology development in key areas such as zero boil-off storage and fluid transfer, but in other areas such as lightweight structures, highly reliable connectors, and autonomous operations. These technologies can be applicable to a broad range of propellant depot concepts or specific to a certain design. In addition, these technologies are required for spacecraft and orbit transfer vehicle propulsion and power systems, and space life support. Generally, applications of this technology require long-term storage, on-orbit fluid transfer and supply, cryogenic propellant production from water, unique instrumentation and autonomous operations. This paper discusses the reasons why such advances are important to future affordable and sustainable operations in space. This paper also briefly discusses R&D objectives comprising a promising approach to systems planning and evolution toward a meaningful stepping-stone design, development, and implementation of an In-Space Cryogenic Propellant Depot. The success of a well-planned and orchestrated approach holds great promise for achieving innovation and revolutionary technology development for supporting future exploration and development of space.
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method, since both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations. The current method, which does not require the solution of linear equations, requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
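The fallback behavior far from a root can be illustrated with a minimal steepest-descent solver that minimizes phi(x) = ||F(x)||^2/2 with a simple grow/shrink adaptive step size. This is only a sketch of the baseline descent, not the paper's accelerated eigenvector construction, and the test system is an arbitrary example.

```python
import math

def solve_system(F, grad_phi, x0, step=0.1, tol=1e-10, max_iter=10000):
    """Steepest descent on phi(x) = 0.5*||F(x)||^2 with an adaptive step:
    grow the step after an accepted move, halve it after a rejected one."""
    x = list(x0)
    phi = lambda p: 0.5 * sum(f * f for f in F(p))
    for _ in range(max_iter):
        g = grad_phi(x)
        if math.sqrt(sum(gi * gi for gi in g)) < tol:
            break
        trial = [xi - step * gi for xi, gi in zip(x, g)]
        if phi(trial) < phi(x):
            x, step = trial, step * 1.2   # accept and grow the step
        else:
            step *= 0.5                   # reject and shrink the step
    return x

# Example system: x^2 + y^2 = 4 and x = y (roots at (±sqrt(2), ±sqrt(2)))
F = lambda p: (p[0] ** 2 + p[1] ** 2 - 4.0, p[0] - p[1])

def grad_phi(p):
    f1, f2 = F(p)
    # Gradient of phi is J^T F, written out by hand for this system.
    return (2 * p[0] * f1 + f2, 2 * p[1] * f1 - f2)

root = solve_system(F, grad_phi, [3.0, 1.0])
```

Because phi decreases monotonically, this variant converges even from poor starting points, at the cost of only linear convergence near the root, which is exactly the gap the paper's quadratic-form step is meant to close.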
2017-01-01
Crystal size and shape can be manipulated to enhance the qualities of the final product. In this work the steady-state shape and size of succinic acid crystals, with and without a polymeric additive (Pluronic P123), at 350 mL scale is reported. The effect of the amplitude of cycles as well as the heating/cooling rates is described, and convergent cycling (direct nucleation control) is compared to static cycling. The results show that the shape of succinic acid crystals changes from plate- to diamond-like after multiple cycling steps, and that the time required for this morphology change to occur is strongly related to the type of cycling. Addition of the polymer is shown to affect both the final shape of the crystals and the time needed to reach size and shape steady-state conditions. It is shown how this phenomenon can be used to improve the design of the crystallization step in order to achieve more efficient downstream operations and, in general, to help optimize the whole manufacturing process. PMID:28867966
NASA Astrophysics Data System (ADS)
Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.
2013-10-01
In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
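The split update scheme reduces, in outline, to a main loop like the one below. The intervals and method names are illustrative only (real thermostat and barostat updates would rescale momenta and the simulation box, and the paper does not prescribe these particular values); the stub class simply counts calls to show how rarely the communication-heavy updates run.

```python
class System:
    """Minimal stand-in for a simulation state that counts update calls."""
    def __init__(self):
        self.newton = self.thermo = self.baro = 0

    def newtonian_update(self, dt):        # positions + momenta, every step
        self.newton += 1

    def thermostat_update(self, elapsed):  # would need all-processor communication
        self.thermo += 1

    def barostat_update(self, elapsed):    # would need all-processor communication
        self.baro += 1

def integrate(system, n_steps, dt, thermo_every=100, baro_every=400):
    # Newtonian particle updates run every time step; thermostat and
    # barostat updates are applied infrequently, each covering the
    # interval elapsed since its previous application.
    for step in range(n_steps):
        system.newtonian_update(dt)
        if (step + 1) % thermo_every == 0:
            system.thermostat_update(thermo_every * dt)
        if (step + 1) % baro_every == 0:
            system.barostat_update(baro_every * dt)

sim = System()
integrate(sim, 800, 0.002)
```

In a parallel run, the savings come from the two infrequent branches: they are the only points requiring global communication, and here they fire 8 and 2 times instead of 800.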
Droplet-Based Segregation and Extraction of Concentrated Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buie, C R; Buckley, P; Hamilton, J
2007-02-23
Microfluidic analysis often requires sample concentration and separation techniques to isolate and detect analytes of interest. Complex or scarce samples may also require an orthogonal separation and detection method or off-chip analysis to confirm results. To perform these additional steps, the concentrated sample plug must be extracted from the primary microfluidic channel with minimal sample loss and dilution. We investigated two extraction techniques: injection of immiscible fluid droplets into the sample stream ("capping") and injection of the sample into an immiscible fluid stream ("extraction"). From our results we conclude that capping is the more effective partitioning technique. Furthermore, this functionality enables additional off-chip post-processing procedures such as DNA/RNA microarray analysis, real-time polymerase chain reaction (RT-PCR), and culture growth to validate chip performance.
Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew
2005-05-03
A new class of surface modified particles and a multi-step Michael-type addition surface modification process for the preparation of the same is provided. The multi-step Michael-type addition surface modification process involves two or more reactions to compatibilize particles with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through reactive organic linking groups. Specifically, these reactive groups are activated carbon-carbon pi bonds and carbon and non-carbon nucleophiles that react via Michael or Michael-type additions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, R. Derek; Gunther, Jacob H.; Moon, Todd K.
2016-12-01
In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB for synthetically generated data.
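The two-step ML procedure (cross-correlation, then full system inversion) can be illustrated on a generic linear model y = Ax + n with synthetic data. The random matrix A here merely stands in for the SAR acquisition model; nothing below uses actual SAR geometry.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 20
# Synthetic complex acquisition model and real-valued reflectivity.
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# Step 1: cross-correlate the data with the acquisition model
# (the analogue of convolution back-projection).
bp = A.conj().T @ y

# Step 2: complete system inversion, removing the A^H A "sidelobe"
# blurring left by the correlation step.
x_ml = np.linalg.solve(A.conj().T @ A, bp)
```

Step 1 alone yields A^H A x, i.e. the reflectivity smeared by the spatially variant impulse responses; step 2 is what deconvolves that smearing to give the ML estimate.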
Effectiveness of en masse versus two-step retraction: a systematic review and meta-analysis.
Rizk, Mumen Z; Mohammed, Hisham; Ismael, Omar; Bearn, David R
2018-01-05
This review aims to compare the effectiveness of en masse and two-step retraction methods during orthodontic space closure regarding anchorage preservation and anterior segment retraction, and to assess their effect on the duration of treatment and root resorption. An electronic search for potentially eligible randomized controlled trials and prospective controlled trials was performed in five electronic databases up to July 2017. The process of study selection, data extraction, and quality assessment was performed by two reviewers independently. A narrative review is presented in addition to a quantitative synthesis of the pooled results where possible. The Cochrane risk of bias tool and the Newcastle-Ottawa Scale were used for the methodological quality assessment of the included studies. Eight studies were included in the qualitative synthesis in this review. Four studies were included in the quantitative synthesis. The en masse/miniscrew combination showed a statistically significant standardized mean difference for anchorage preservation of −2.55 mm (95% CI −2.99 to −2.11) and for the amount of upper incisor retraction of −0.38 mm (95% CI −0.70 to −0.06) when compared to a two-step/conventional anchorage combination. Qualitative synthesis suggested that en masse retraction requires less time than two-step retraction, with no difference in the amount of root resorption. Both en masse and two-step retraction methods are effective during the space closure phase. The en masse/miniscrew combination is superior to the two-step/conventional anchorage combination with regard to anchorage preservation and amount of retraction. Limited evidence suggests that anchorage reinforcement with a headgear produces similar results with both retraction methods. Limited evidence also suggests that en masse retraction may require less time and that no significant differences exist in the amount of root resorption between the two methods.
Nagano, Hanatsu; Levinger, Pazit; Downie, Calum; Hayes, Alan; Begg, Rezaul
2015-09-01
Falls during walking reflect susceptibility to balance loss and the individual's capacity to recover stability. Balance can be recovered using either one step or multiple steps, but both responses are impaired with ageing. To investigate older adults' (n=15, 72.5±4.8 yrs) recovery step control, a tether-release procedure was devised to induce unanticipated forward balance loss. Three-dimensional position-time data combined with foot-ground reaction forces were used to measure balance recovery. Dependent variables were: margin of stability (MoS) and available response time (ART) as spatial and temporal balance measures in the transverse and sagittal planes; lower limb joint angles and joint negative/positive work; and spatio-temporal gait parameters. Relative to multi-step responses, single-step recovery was more effective in maintaining balance, indicated by greater MoS and longer ART. MoS in the sagittal plane and ART in the transverse plane distinguished single-step responses from multiple steps. When MoS and ART were negative (<0), balance was not secured and additional steps were required to establish the new base of support for balance recovery. Single-step responses demonstrated greater step length and velocity and, when the recovery foot landed, greater centre of mass downward velocity. Single-step strategies also showed greater ankle dorsiflexion, increased knee maximum flexion and more negative work at the ankle and knee. Collectively these findings suggest that single-step responses are more effective in forward balance recovery by directing falling momentum downward to be absorbed as lower limb eccentric work. Copyright © 2015 Elsevier B.V. All rights reserved.
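The spatial measure above follows the extrapolated centre-of-mass construction. The abstract does not give the authors' exact formula, so the sketch below uses the standard definition (Hof's margin of stability), with all input values purely illustrative.

```python
import math

def margin_of_stability(com_pos, com_vel, bos_boundary, leg_length, g=9.81):
    """Standard margin-of-stability definition: MoS is the distance from
    the extrapolated centre of mass XcoM = x + v/omega0 to the boundary
    of the base of support, with omega0 = sqrt(g / l) from the inverted
    pendulum model. MoS < 0 means the current base of support cannot
    arrest the fall and additional steps are required."""
    omega0 = math.sqrt(g / leg_length)
    xcom = com_pos + com_vel / omega0
    return bos_boundary - xcom

# Illustrative values: CoM at the origin moving forward at 0.5 m/s,
# anterior base-of-support boundary 0.2 m ahead, 1.0 m leg length.
mos = margin_of_stability(com_pos=0.0, com_vel=0.5, bos_boundary=0.2, leg_length=1.0)
```

With these numbers the forward velocity term shifts XcoM about 0.16 m ahead of the CoM, leaving a small positive margin, i.e. a recoverable single-step situation.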
Additive Manufacturing Design Considerations for Liquid Engine Components
NASA Technical Reports Server (NTRS)
Whitten, Dave; Hissam, Andy; Baker, Kevin; Rice, Darron
2014-01-01
The Marshall Space Flight Center's Propulsion Systems Department has gained significant experience in the last year designing, building, and testing liquid engine components using additive manufacturing. The department has developed valve, duct, turbo-machinery, and combustion device components using this technology. Many valuable lessons were learned during this process. These lessons will be the focus of this presentation. We will present criteria for selecting part candidates for additive manufacturing. Some part characteristics are 'tailor made' for this process. Selecting the right parts for the process is the first step to maximizing productivity gains. We will also present specific lessons we learned about feature geometry that can and cannot be produced using additive manufacturing machines. Most liquid engine components were made using a two-step process. The base part was made using additive manufacturing and then traditional machining processes were used to produce the final part. The presentation will describe design accommodations needed to make the base part and lessons we learned about which features could be built directly and which require the final machine process. Tolerance capabilities, surface finish, and material thickness allowances will also be covered. Additive Manufacturing can produce internal passages that cannot be made using traditional approaches. It can also eliminate a significant amount of manpower by reducing part count and leveraging model-based design and analysis techniques. Information will be shared about performance enhancements and design efficiencies we experienced for certain categories of engine parts.
Thermally stable ohmic contacts to n-type GaAs. VII. Addition of Ge or Si to NiInW ohmic contacts
NASA Astrophysics Data System (ADS)
Murakami, Masanori; Price, W. H.; Norcott, M.; Hallali, P.-E.
1990-09-01
The effects of Si or Ge additions to NiInW ohmic contacts on their electrical behavior were studied; the samples were prepared by evaporating Ni(Si) or Ni(Ge) pellets with In and W and annealed by a rapid thermal annealing method. The addition of Si affected the contact resistance of the NiInW contacts: the resistance decreased with increasing Si concentration in the Ni(Si) pellets, and the lowest value of ~0.1 Ω mm was obtained in the contact prepared with the Ni-5 at. % Si pellets after annealing at temperatures around 800 °C. The contact resistance did not deteriorate during isothermal annealing at 400 °C for more than 100 h, far exceeding process requirements for self-aligned GaAs metal-semiconductor field-effect-transistor devices. In addition, the contacts were compatible with TiAlCu interconnects, which have been widely used in the current Si process. Furthermore, the addition of Si to the NiInW contacts eliminated an annealing step for activation of implanted dopants, and low resistance (~0.2 Ω mm) contacts were fabricated for the first time by a "one-step" anneal. In contrast, the addition of Ge to the NiInW contacts did not significantly reduce the contact resistance.
A multistep damage recognition mechanism for global genomic nucleotide excision repair
Sugasawa, Kaoru; Okamoto, Tomoko; Shimizu, Yuichiro; Masutani, Chikahide; Iwai, Shigenori; Hanaoka, Fumio
2001-01-01
A mammalian nucleotide excision repair (NER) factor, the XPC–HR23B complex, can specifically bind to certain DNA lesions and initiate the cell-free repair reaction. Here we describe a detailed analysis of its binding specificity using various DNA substrates, each containing a single defined lesion. A highly sensitive gel mobility shift assay revealed that XPC–HR23B specifically binds a small bubble structure with or without damaged bases, whereas dual incision takes place only when damage is present in the bubble. This is evidence that damage recognition for NER is accomplished through at least two steps; XPC–HR23B first binds to a site that has a DNA helix distortion, and then the presence of injured bases is verified prior to dual incision. Cyclobutane pyrimidine dimers (CPDs) were hardly recognized by XPC–HR23B, suggesting that additional factors may be required for CPD recognition. Although the presence of mismatched bases opposite a CPD potentiated XPC–HR23B binding, probably due to enhancement of the helix distortion, cell-free excision of such compound lesions was much more efficient than expected from the observed affinity for XPC–HR23B. This also suggests that additional factors and steps are required for the recognition of some types of lesions. A multistep mechanism of this sort may provide a molecular basis for ensuring the high level of damage discrimination that is required for global genomic NER. PMID:11238373
Space Telecommunications Radio System (STRS) Compliance Testing
NASA Technical Reports Server (NTRS)
Handler, Louis M.
2011-01-01
The Space Telecommunications Radio System (STRS) defines an open architecture for software defined radios. This document describes the testing methodology to aid in determining the degree of compliance to the STRS architecture. Non-compliances are reported to the software and hardware developers as well as the NASA project manager so that any non-compliances may be fixed or waivers issued. Since the software developers may be divided into those that provide the operating environment including the operating system and STRS infrastructure (OE) and those that supply the waveform applications, the tests are divided accordingly. The static tests are also divided by the availability of an automated tool that determines whether the source code and configuration files contain the appropriate items. Thus, there are six separate step-by-step test procedures described as well as the corresponding requirements that they test. The six types of STRS compliance tests are: STRS application automated testing, STRS infrastructure automated testing, STRS infrastructure testing by compiling WFCCN with the infrastructure, STRS configuration file testing, STRS application manual code testing, and STRS infrastructure manual code testing. Examples of the input and output of the scripts are shown in the appendices as well as more specific information about what to configure and test in WFCCN for non-compliance. In addition, each STRS requirement is listed and the type of testing briefly described. Attached is also a set of guidelines on what to look for in addition to the requirements to aid in the document review process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcy, Cara; Beiter, Philipp
2016-09-01
This report provides a high-level indicator of the future demand for additional electric power generation that is not met by existing generation sources between 2015 and 2050. The indicator is applied to coastal regions, including the Great Lakes, to assess the regional opportunity space for offshore wind. An assessment of opportunity space can be a first step in determining the prospects and the system value of a technology. The metric provides the maximum amount of additional generation that is likely required to satisfy load in future years.
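The indicator described above reduces to the gap between projected load and what existing sources are expected to supply, floored at zero. A minimal sketch with invented regional numbers (not the report's data):

```python
def additional_generation_needed(projected_load, existing_generation):
    """High-level opportunity-space indicator: generation required
    beyond what existing sources are expected to supply in a future
    year (never negative; a surplus means no additional need)."""
    return max(0.0, projected_load - existing_generation)

# hypothetical regional values in TWh/yr for some future year
gap = additional_generation_needed(projected_load=520.0,
                                   existing_generation=430.0)
```

Summing this quantity over regions and years gives the kind of maximum opportunity-space estimate the report describes.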
To require the sale of distressed notes and other obligations, and for other purposes.
Rep. Kelly, Mike [R-PA-3
2018-06-14
House - 06/14/2018 Referred to the Committee on Agriculture, and in addition to the Committee on Transportation and Infrastructure, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the...
Open Architecture as an Enabler for FORCEnet Cruise Missile Defense
2007-09-01
2007). Step 4 introduces another tool called the Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis. Once the TRO has been identified...the SWOT analysis can be used to help in the pursuit of that objective or mission objective. SWOT is defined as Strengths: attributes of the...overtime. In addition to the SCAN and SWOT, analysis processes also needed are Automated Battle Management Aids (ABMA) tools that are required to
ERIC Educational Resources Information Center
Scott, George A.
2012-01-01
The School Improvement Grant (SIG) program funds reforms in low performing schools. Congress provided $3.5 billion for SIG in fiscal year 2009, and a total of about $1.6 billion was appropriated in fiscal years 2010-2012. SIG requirements changed significantly in 2010. Many schools receiving SIG funds must now use the funding for specific…
An Update on the NASA Planetary Science Division Research and Analysis Program
NASA Astrophysics Data System (ADS)
Richey, Christina; Bernstein, Max; Rall, Jonathan
2015-01-01
Introduction: NASA's Planetary Science Division (PSD) solicits its Research and Analysis (R&A) programs each year in Research Opportunities in Space and Earth Sciences (ROSES). Beginning with the 2014 ROSES solicitation, PSD will be changing the structure of the program elements under which the majority of planetary science R&A is done. Major changes include the creation of five core research program elements aligned with PSD's strategic science questions, the introduction of several new R&A opportunities, new submission requirements, and a new timeline for proposal submission. ROSES and NSPIRES: ROSES contains the research announcements for all of SMD. Submission of ROSES proposals is done electronically via NSPIRES: http://nspires.nasaprs.com. We will present further details on the proposal submission process to help guide younger scientists. Statistical trends, including the average award size within the PSD programs, selection rates, and lessons learned, will be presented. Information on new programs will also be presented, if available. Review Process and Volunteering: The SARA website (http://sara.nasa.gov) contains information on all ROSES solicitations. There is an email address (SARA@nasa.gov) for inquiries and an area for volunteer reviewers to sign up. The peer review process is based on Scientific/Technical Merit, Relevance, and Level of Effort, and will be detailed within this presentation. ROSES 2014 submission changes: All PSD programs will use a two-step proposal submission process. A Step-1 proposal is required and must be submitted electronically by the Step-1 due date.
The Step-1 proposal should include a description of the science goals and objectives to be addressed, a brief description of the methodology to be used to address them, and the relevance of the proposed research to the call to which it is submitted. Additional Information: Further details will be provided on the Cassini Data Analysis Program, the Exoplanets Research Program, and the Discovery Data Analysis Program, for which Dr. Richey is the Lead Program Officer.
NASA Astrophysics Data System (ADS)
Ravanbakhsh, A.; Kulkarni, S. R.; Panitzsch, L.; L Richards, M.; Munoz Hernandez, A.; Seimetz, L.; Elftmann, R.; Mahesh, Y.; Boden, S.; Boettcher, S. I.; Kulemzin, A.; Martin-Garcia, C.; Prieto, M.; Rodriguez-Pacheco, J.; Sanchez Prieto, S.; Schuster, B.; Steinhagen, J.; Tammen, J.; Wimmer-Schweingruber, R. F.
2016-12-01
Solar Orbiter is ESA's next solar and heliospheric mission, planned to be launched in October 2018. The Energetic Particle Detector (EPD) on board Solar Orbiter will provide key measurements for the Solar Orbiter science objectives. The EPD suite consists of four sensors: STEP, SIS, EPT, and HET. The University of Kiel in Germany is responsible for the design, development, and building of STEP and the two identical units EPT-HET 1 and EPT-HET 2. ESA's Solar Orbiter will explore the heliosphere at heliocentric distances between 0.28 AU and 0.9 AU and at inclinations up to 38 deg with respect to the Sun's equator. The spacecraft uses a heat shield to protect the bus and externally mounted instruments from the solar flux at close distances to the Sun. All three EPD-Kiel units are mounted externally, but in different positions on the spacecraft outer deck. Although protected by the spacecraft heat shield from high solar flux, EPT-HET 1, EPT-HET 2, and STEP experience harsh environmental conditions during the course of the mission. In addition, due to the highly demanding science requirements, the qualification and acceptance test requirements of these externally mounted units are quite challenging. In this paper we present the development status of the EPT-HET 1, EPT-HET 2, and STEP sensors, focusing on the activities performed in phase D and the qualification and acceptance test campaigns. The main objective of these test campaigns is to ensure and demonstrate the compatibility between the scientific requirements and the harsh environment expected during the mission. This paper includes a summary of the results of the environmental tests on the EPT-HET and STEP Proto-Qualification Models (PQMs) as well as Proto-Flight Models (PFMs). Only an adequate selection of environmental qualification and acceptance campaigns will guarantee the success of the scientific space missions.
Molgenis-impute: imputation pipeline in a box.
Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A
2015-08-19
Genotype imputation is an important procedure in current genomic analyses such as genome-wide association studies, meta-analyses, and fine mapping. Although high quality tools are available that perform the steps of this process, considerable effort and expertise are required to set up and run a best practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the setup and running of all the steps of the imputation process. These steps include genome build liftover, genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments such as local/cloud servers, clusters, and grids. All the required tools, data, and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute in different locations and have imputed over 30,000 samples so far using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs, and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring, and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command line interface, without sacrificing the flexibility to adapt, or limiting the options of, the underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands.
It is built on the MOLGENIS compute workflow framework to enable customization with additional computational steps or it can be included in other bioinformatics pipelines. It is available as open source from: https://github.com/molgenis/molgenis-imputation.
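The chromosomal chunking step mentioned above is conceptually simple: a region is split into fixed-size windows that can be imputed in parallel and merged afterwards. A generic sketch of that idea (not MOLGENIS-impute's exact scheme or parameters):

```python
def chunk_region(start, end, chunk_size=5_000_000):
    """Split a chromosomal region [start, end] (1-based, inclusive)
    into fixed-size chunks for parallel imputation, with the last
    chunk truncated at the region's end."""
    chunks = []
    pos = start
    while pos <= end:
        chunks.append((pos, min(pos + chunk_size - 1, end)))
        pos += chunk_size
    return chunks

# e.g. a 12 Mb region in 5 Mb chunks
chunks = chunk_region(1, 12_000_000)
```

Real pipelines typically also add overlap (buffer regions) at chunk boundaries so imputation quality does not degrade at the edges; that refinement is omitted here.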
pH-Controlled Two-Step Uncoating of Influenza Virus
Li, Sai; Sieben, Christian; Ludwig, Kai; Höfer, Chris T.; Chiantia, Salvatore; Herrmann, Andreas; Eghiaian, Frederic; Schaap, Iwan A.T.
2014-01-01
Upon endocytosis in its cellular host, influenza A virus transits via early to late endosomes. To efficiently release its genome, the composite viral shell must undergo significant structural rearrangement, but the exact sequence of events leading to viral uncoating remains largely speculative. In addition, no change in viral structure has ever been identified at the level of early endosomes, raising a question about their role. We performed AFM indentation on single viruses in conjunction with cellular assays under conditions that mimicked gradual acidification from early to late endosomes. We found that the release of the influenza genome requires sequential exposure to the pH of both early and late endosomes, with each step corresponding to changes in the virus mechanical response. Step 1 (pH 7.5–6) involves a modification of both hemagglutinin and the viral lumen and is reversible, whereas Step 2 (pH <6.0) involves M1 dissociation and major hemagglutinin conformational changes and is irreversible. Bypassing the early-endosomal pH step or blocking the envelope proton channel M2 precludes proper genome release and efficient infection, illustrating the importance of viral lumen acidification during the early endosomal residence for influenza virus infection. PMID:24703306
Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates.
Zhang, Hao; Li, Xianqi; Chen, Yunmei; Park, Jewook; Li, An-Ping; Zhang, X-G
2017-01-01
We present an image postprocessing framework for the Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan line noise at fast scan rates while preserving features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large scale images and four steps for atomic scale images. For large scale images, we first apply to each line an image registration method to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurements from a copper(111) surface indicate that the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow scan data. For atomic scale images, an additional first step to remove line-by-line strong background fluctuations and a fourth step of replacing the postprocessed image by its ranking map as the final atomic resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data.
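The first step, aligning the forward and backward scans of the same line, can be illustrated with a simple integer-shift cross-correlation (a simplified stand-in for the paper's registration method, not its CIAFA algorithm), shown here on a synthetic line:

```python
import numpy as np

def align_lines(forward, backward):
    """Align a backward scan line to its forward counterpart by the
    integer shift that maximises their cross-correlation, then
    average the two lines."""
    f = forward - forward.mean()
    b = backward - backward.mean()
    corr = np.correlate(f, b, mode="full")
    # np.correlate 'full' index j corresponds to lag j - (len(b) - 1)
    shift = int(corr.argmax()) - (len(b) - 1)
    aligned = np.roll(backward, shift)
    return (forward + aligned) / 2.0, shift

# synthetic test: the backward line is the forward line shifted 3 px
x = np.linspace(0, 4 * np.pi, 256)
fwd = np.sin(x)
bwd = np.roll(fwd, -3)
merged, shift = align_lines(fwd, bwd)
```

Real scan pairs also differ by drift and hysteresis that a single integer shift cannot capture, which is where the paper's adaptive filtering comes in.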
Evans, Christopher M; Love, Alyssa M; Weiss, Emily A
2012-10-17
This article reports control of the competition between step-growth and living chain-growth polymerization mechanisms in the formation of cadmium chalcogenide colloidal quantum dots (QDs) from CdSe(S) clusters by varying the concentration of anionic surfactant in the synthetic reaction mixture. The growth of the particles proceeds by step-addition from initially nucleated clusters in the absence of excess phosphinic or carboxylic acids, which adsorb as their anionic conjugate bases, and proceeds indirectly by dissolution of clusters, and subsequent chain-addition of monomers to stable clusters (Ostwald ripening) in the presence of excess phosphinic or carboxylic acid. Fusion of clusters by step-growth polymerization is an explanation for the consistent observation of so-called "magic-sized" clusters in QD growth reactions. Living chain-addition (chain addition with no explicit termination step) produces QDs over a larger range of sizes with better size dispersity than step-addition. Tuning the molar ratio of surfactant to Se(2-)(S(2-)), the limiting ionic reagent, within the living chain-addition polymerization allows for stoichiometric control of QD radius without relying on reaction time.
Grawunder, U; Lieber, M R
1997-01-01
The recombination activating gene (RAG) 1 and 2 proteins are required for initiation of V(D)J recombination in vivo and have been shown to be sufficient to introduce DNA double-strand breaks at recombination signal sequences (RSSs) in a cell-free assay in vitro. RSSs consist of a highly conserved palindromic heptamer that is separated from a slightly less conserved A/T-rich nonamer by either a 12 or 23 bp spacer of random sequence. Despite the high sequence specificity of RAG-mediated cleavage at RSSs, direct binding of the RAG proteins to these sequences has been difficult to demonstrate by standard methods. Even when this can be demonstrated, questions about the order of events for an individual RAG-RSS complex will require methods that monitor aspects of the complex during transitions from one step of the reaction to the next. Here we have used template-independent DNA polymerase terminal deoxynucleotidyl transferase (TdT) in order to assess occupancy of the reaction intermediates by the RAG complex during the reaction. In addition, this approach allows analysis of the accessibility of end products of a RAG-catalyzed cleavage reaction for N nucleotide addition. The results indicate that RAG proteins form a long-lived complex with the RSS once the initial nick is generated, because the 3'-OH group at the nick remains obstructed for TdT-catalyzed N nucleotide addition. In contrast, the 3'-OH group generated at the signal end after completion of the cleavage reaction can be efficiently tailed by TdT, suggesting that the RAG proteins disassemble from the signal end after DNA double-strand cleavage has been completed. Therefore, a single RAG complex maintains occupancy from the first step (nick formation) to the second step (cleavage). In addition, the results suggest that N region diversity at V(D)J junctions within rearranged immunoglobulin and T cell receptor gene loci can only be introduced after the generation of RAG-catalyzed DNA double-strand breaks, i.e. 
during the DNA end joining phase of the V(D)J recombination reaction. PMID:9060432
Advanced Platform Systems Technology study. Volume 4: Technology advancement program plan
NASA Technical Reports Server (NTRS)
1983-01-01
An overview study of the major technology definition tasks and subtasks, along with their interfaces and interrelationships, is presented. Although not specifically indicated in the diagram, iterations were required at many steps to finalize the results. The development of the integrated technology advancement plan was initiated using the results of the previous two tasks, i.e., the trade studies and the preliminary cost and schedule estimates for the selected technologies. Descriptions of the development of each viable technology advancement were drawn from the trade studies. Additionally, a logic flow diagram depicting the steps in developing each technology element was developed, along with descriptions of each of the major elements. Next, the major elements of the logic flow diagrams were time phased, which allowed the definition of a technology development schedule consistent with the space station program schedule where possible. The schedules show the major milestones, including the tests required as described in the logic flow diagrams.
MPA Portable: A Stand-Alone Software Package for Analyzing Metaproteome Samples on the Go.
Muth, Thilo; Kohrs, Fabian; Heyer, Robert; Benndorf, Dirk; Rapp, Erdmann; Reichl, Udo; Martens, Lennart; Renard, Bernhard Y
2018-01-02
Metaproteomics, the mass spectrometry-based analysis of proteins from multispecies samples, faces severe challenges in data analysis and results interpretation. To overcome these shortcomings, we here introduce the MetaProteomeAnalyzer (MPA) Portable software. In contrast to the original server-based MPA application, this newly developed tool no longer requires computational expertise for installation and is now independent of any relational database system. In addition, MPA Portable now supports state-of-the-art database search engines and a convenient command line interface for high-performance data processing tasks. While search engine results can easily be combined to increase the protein identification yield, an additional two-step workflow is implemented to provide sufficient analysis resolution for further postprocessing steps, such as protein grouping as well as taxonomic and functional annotation. Our new application has been developed with a focus on intuitive usability, adherence to data standards, and adaptation to Web-based workflow platforms. The open source software package can be found at https://github.com/compomics/meta-proteome-analyzer .
Efficient hybrid metrology for focus, CD, and overlay
NASA Astrophysics Data System (ADS)
Tel, W. T.; Segers, B.; Anunciado, R.; Zhang, Y.; Wong, P.; Hasan, T.; Prentice, C.
2017-03-01
With the advent of multiple patterning techniques in the semiconductor industry, metrology has progressively become a burden. With multiple patterning techniques such as Litho-Etch-Litho-Etch and Sidewall Assisted Double Patterning, the number of processing steps has increased significantly, and so has the number of metrology steps needed for both control and yield monitoring. The amount of metrology needed increases with each node as more layers need multiple patterning, and more patterning steps are needed per layer. In addition, there is the need for guided defect inspection, which in itself requires substantially denser focus, overlay, and CD metrology than before. Metrology efficiency will therefore be crucial to the next semiconductor nodes. ASML's emulated wafer concept offers a highly efficient method of hybrid metrology for focus, CD, and overlay. In this concept, metrology is combined with the scanner's sensor data in order to predict on-product performance. The principle underlying the method is to isolate and estimate individual root causes, which are then combined to compute the on-product performance. The goal is to use all the information available to avoid ever increasing amounts of metrology.
Enhancing multi-step quantum state tomography by PhaseLift
NASA Astrophysics Data System (ADS)
Lu, Yiping; Zhao, Qing
2017-09-01
Multi-photon systems have been studied by many groups; however, the biggest challenge is that the available copies of an unknown state are limited and far from sufficient for detecting quantum entanglement. The difficulty of preparing copies of the state is even more serious for quantum state tomography. One possible way to solve this problem is to use adaptive quantum state tomography, which means obtaining a preliminary density matrix in the first step and revising it in the second step. In order to improve the performance of adaptive quantum state tomography, we develop a new distribution scheme of samples and extend it to three steps, that is, to correct the estimate once again based on the density matrix obtained in the traditional adaptive quantum state tomography. Our numerical results show that the mean square error of the reconstructed density matrix by our new method is improved from the level of 10^-4 to 10^-9 for several tested states. In addition, PhaseLift is also applied to reduce the required storage space of the measurement operator.
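To make "reconstructed density matrix" and its mean square error concrete, here is a minimal single-qubit linear-inversion reconstruction from Pauli expectation values. This is a textbook illustration, not the paper's multi-step PhaseLift scheme, and the noisy expectation values are invented:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct(rx, ry, rz):
    """Linear-inversion tomography of a single qubit from measured
    Pauli expectation values: rho = (I + r . sigma) / 2."""
    return 0.5 * (I2 + rx * SX + ry * SY + rz * SZ)

def mse(rho_est, rho_true):
    """Mean square error between two density matrices
    (mean of elementwise squared moduli of the difference)."""
    d = rho_est - rho_true
    return float(np.mean(np.abs(d) ** 2))

# true state |0><0| has Bloch vector (0, 0, 1); noisy estimates:
rho_true = reconstruct(0.0, 0.0, 1.0)
rho_est = reconstruct(0.02, -0.01, 0.97)
err = mse(rho_est, rho_true)
```

Adaptive schemes spend later measurement rounds reducing exactly this error around the preliminary estimate; PhaseLift-style methods instead solve a convex program, which this sketch does not attempt.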
Automated image segmentation-assisted flattening of atomic force microscopy images.
Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin
2018-01-01
Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks during image flattening, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection. The extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from the raw images to obtain the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme is presented, followed by an investigation of the influence of the sliding-window size and the polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
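The second step, masked polynomial background fitting, can be sketched for a single scan line as follows (a minimal illustration of the idea, with a synthetic line rather than real AFM data):

```python
import numpy as np

def flatten_line(line, mask, order=2):
    """Fit a polynomial to background points only (mask == True marks
    segmented foreground features to exclude) and subtract the fitted
    background from the whole line."""
    xs = np.arange(line.size)
    bg = ~mask
    coeffs = np.polyfit(xs[bg], line[bg], order)
    return line - np.polyval(coeffs, xs)

# synthetic scan line: quadratic bow plus tilt, with a tall
# particle-like feature that would bias an unmasked fit
x = np.arange(200)
background = 1e-4 * (x - 100) ** 2 + 0.01 * x
line = background.copy()
line[80:120] += 5.0                  # foreground feature
mask = np.zeros(200, dtype=bool)
mask[80:120] = True                  # exclusion mask from segmentation
flat = flatten_line(line, mask)
```

With the mask applied, the background fit recovers the bow and tilt exactly, so after subtraction the background sits at zero and the feature keeps its true height; an unmasked fit would be pulled upward by the feature and distort both.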
Clean focus, dose and CD metrology for CD uniformity improvement
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, DongYoung; Oh, Eungryong; Choi, Ahlin; Kim, Nakyoon; Robinson, John C.; Mengel, Markus; Pablo, Rovira; Yoo, Sungchul; Getin, Raphael; Choi, Dongsub; Jeon, Sanghuck
2018-03-01
Lithography process control solutions require more exacting capabilities as the semiconductor industry moves to 1x nm node DRAM device manufacturing. In order to continue scaling down device feature sizes, critical dimension (CD) uniformity requires continuous improvement to meet the required CD error budget. In this study we investigate using optical measurement technology to improve over CD-SEM methods in focus, dose, and CD. One of the key challenges is measuring scanner focus of device patterns. There are focus measurement methods based on specially designed marks in the scribe line; however, one issue with this approach is that it reports the focus of the scribe line, which is potentially different from that of the real device pattern. In addition, scribe-line marks require additional design and troubleshooting steps that add complexity. In this study, we investigated focus measurement directly on the device pattern. Dose control is typically based on the linear correlation between dose and CD. The noise of CD measurement, based on CD-SEM for example, will not only impact the accuracy but also make it difficult to monitor the dose signature on product wafers. In this study we report direct dose metrology results using an optical metrology system with enhanced DUV spectral coverage to improve the signal-to-noise ratio. CD-SEM is often used to measure CD after the lithography step. This measurement approach has the advantage of easy recipe setup as well as the flexibility to measure critical feature dimensions; however, we observe that CD-SEM metrology has limitations. In this study, we demonstrate within-field CD uniformity improvement through the extraction of clean scanner slit and scan CD behavior by using optical metrology.
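The dose-to-CD linear correlation mentioned above is the basis of a simple dose correction: fit the local dose/CD response and invert it to find the dose that hits the target CD. A sketch with invented focus-exposure-matrix-style numbers (not the study's data):

```python
import numpy as np

def dose_correction(doses, cds, cd_target):
    """Fit the locally linear dose-to-CD response and return the dose
    predicted to produce the target CD. Assumes operation within the
    linear regime of the resist response."""
    slope, intercept = np.polyfit(doses, cds, 1)
    return (cd_target - intercept) / slope

# hypothetical data: exposure dose (mJ/cm^2) vs measured resist CD (nm)
doses = np.array([28.0, 29.0, 30.0, 31.0, 32.0])
cds = np.array([42.0, 41.0, 40.0, 39.0, 38.0])
dose = dose_correction(doses, cds, cd_target=40.5)
```

Noisy CD measurements (for instance from CD-SEM) feed directly into the fitted slope and intercept, which is why the abstract stresses measurement noise as a limit on dose-signature monitoring.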
Duo, Jia; Dong, Huijin; DeSilva, Binodh; Zhang, Yan J
2013-07-01
Sample dilution and reagent pipetting are time-consuming steps in ligand-binding assays (LBAs). Traditional automation-assisted LBAs use assay-specific scripts that require labor-intensive script writing and user training. Five major script modules were developed on Tecan Freedom EVO liquid handling software to facilitate the automated sample preparation and LBA procedure: sample dilution, sample minimum required dilution, standard/QC minimum required dilution, standard/QC/sample addition, and reagent addition. The modular design of automation scripts allowed the users to assemble an automated assay with minimal script modification. The application of the template was demonstrated in three LBAs to support discovery biotherapeutic programs. The results demonstrated that the modular scripts provided the flexibility in adapting to various LBA formats and the significant time saving in script writing and scientist training. Data generated by the automated process were comparable to those by manual process while the bioanalytical productivity was significantly improved using the modular robotic scripts.
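The Tecan EVOware scripts themselves are proprietary and not shown in the abstract; the following hypothetical Python sketch only illustrates the kind of calculation a "sample dilution" module performs, translating a target dilution factor into pipettable volumes and splitting large dilutions into serial steps:

```python
# Illustrative sketch only; function names and the per-step limit are assumptions.
def dilution_volumes(total_volume_ul, dilution_factor):
    """Volumes of sample and diluent needed for one dilution step."""
    if dilution_factor < 1:
        raise ValueError("dilution factor must be >= 1")
    sample = total_volume_ul / dilution_factor
    diluent = total_volume_ul - sample
    return sample, diluent

def serial_dilution_plan(total_volume_ul, target_factor, max_step_factor=100):
    """Split a large dilution into steps the liquid handler can pipette accurately."""
    steps = []
    remaining = target_factor
    while remaining > max_step_factor:
        steps.append(dilution_volumes(total_volume_ul, max_step_factor))
        remaining /= max_step_factor
    steps.append(dilution_volumes(total_volume_ul, remaining))
    return steps

# A 1:2000 dilution in 200 uL becomes a 1:100 step followed by a 1:20 step.
print(serial_dilution_plan(200.0, 2000))  # [(2.0, 198.0), (10.0, 190.0)]
```

Packaging such calculations as parameterized modules, rather than hard-coding them per assay, is what allows users to assemble a new automated assay with minimal script modification.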
A Framework for RFID Survivability Requirement Analysis and Specification
NASA Astrophysics Data System (ADS)
Zuo, Yanjun; Pimple, Malvika; Lande, Suhas
Many industries are becoming dependent on Radio Frequency Identification (RFID) technology for inventory management and asset tracking. The data collected about tagged objects through RFID is used in various high-level business operations. The RFID system should hence be highly available, reliable, dependable, and secure. In addition, the system should be able to resist attacks and perform recovery in case of security incidents. Together these requirements give rise to the notion of a survivable RFID system. The main goal of this paper is to analyze and specify the requirements for an RFID system to become survivable. These requirements, if implemented, can help the system resist devastating attacks and recover quickly from damage. This paper proposes techniques and approaches for RFID survivability requirements analysis and specification. From the perspective of system acquisition and engineering, survivability requirements analysis is an important first step in survivability specification, compliance formulation, and proof verification.
Phase gradient algorithm based on co-axis two-step phase-shifting interferometry and its application
NASA Astrophysics Data System (ADS)
Wang, Yawei; Zhu, Qiong; Xu, Yuanyuan; Xin, Zhiduo; Liu, Jingye
2017-12-01
A phase gradient method based on co-axis two-step phase-shifting interferometry is used to reveal detailed information about a specimen. In this method, the phase gradient distribution can be obtained by calculating only the first-order derivative and the radial Hilbert transform of the intensity difference between two phase-shifted interferograms. The feasibility and accuracy of this method were verified by simulation results for a polystyrene sphere and a red blood cell. The results demonstrate that the phase gradient is sensitive to changes in refractive index and morphology. Because phase retrieval and tedious phase unwrapping are not required, the calculation is fast. In addition, co-axis interferometry has high spatial resolution.
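A key ingredient of such methods is the radial (two-dimensional) Hilbert transform, which can be implemented as a spiral phase filter in the Fourier domain. The NumPy sketch below illustrates that transform alone, not the paper's full phase-gradient algorithm:

```python
import numpy as np

def radial_hilbert(img):
    """2-D (radial) Hilbert transform via a spiral phase filter in the Fourier domain."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    spiral = np.exp(1j * np.arctan2(fy, fx))  # exp(i*phi) vortex filter
    spiral[0, 0] = 0.0  # the filter is undefined at DC; zero it out
    return np.fft.ifft2(np.fft.fft2(img) * spiral)

# The transform of a constant image is identically zero (only the DC term exists).
out = radial_hilbert(np.ones((64, 64)))
print(np.abs(out).max())
```

In the method summarized above, this transform would be applied to the intensity difference of the two phase-shifted interferograms, alongside its first-order derivative, to recover the phase gradient.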
Convergence of the Quasi-static Antenna Design Algorithm
2013-04-01
conductor is the same as an equipotential surface. A line of constant charge on the z-axis, with an image, will generate the ACD antenna design...satisfies this boundary condition. The multipole moments have negative potentials, which can cause the equipotential surface to terminate on the disk or...feed wire. This requires an additional step in the solution process; the equipotential surface is sampled to verify that the charge is enclosed by the
Convergence of the Quasi-static Antenna Design Algorithm
2013-04-01
was the first antenna design with quasi-static methods. In electrostatics, a perfect conductor is the same as an equipotential surface. A line of...which can cause the equipotential surface to terminate on the disk or feed wire. This requires an additional step in the solution process; the...equipotential surface is sampled to verify that the charge is enclosed by the equipotential surface. The final solution must be verified with a detailed
2014 Survivor Experience Survey: Report on Preliminary Results. Fiscal Year 2014, Quarter 4
2014-12-01
contact with the survivor, nor an ability to “track” or determine the survivor’s identity. The challenge, given the limitations noted above, was...was done primarily through SARCs, with additional support from UVAs/VAs and Special Victims’ Legal Counsels/Victims’ Legal Counsels (SVC/VLC). These...and SVC/VLC to recruit their assistance in notifying eligible survivors about the survey effort and steps to obtain a ticket number without requiring
Design and Implementation of Multi-Input Adaptive Signal Extractions.
1982-09-01
deflected gradient) algorithm requiring only N+1 multiplications per adaptation step. Additional quantization is introduced to eliminate all multiplications...noise cancellation for intermittent-signal applications," IEEE Trans. Information Theory, Vol. IT-26, Nov. 1980, pp. 746-750. 1-2 J. Kazakoff and W. A...cancellation," Proc. IEEE, July 1981, Vol. 69, pp. 846-847. I-10 P. L. Kelly and W. A. Gardner, "Pilot-Directed Adaptive Signal Extraction," Dept. of
1997 NASA/MSFC Summer Teacher Enrichment Program
NASA Technical Reports Server (NTRS)
1999-01-01
This is a report on the follow-up activities conducted for the 1997 NASA Summer Teacher Enrichment Program (STEP), which was held at the George C. Marshall Space Flight Center (MSFC) for the seventh consecutive year. The program was conducted as a six-week session with 17 sixth through twelfth grade math and science teachers from a six-state region (Alabama, Arkansas, Iowa, Louisiana, Mississippi and Missouri). The program began on June 8, 1997, and ended on July 25, 1997. The long-term objectives of the program are to: increase the nation's scientific and technical talent pool with a special emphasis on underrepresented groups, improve the quality of pre-college math and science education, improve math and science literacy, and improve NASA's and pre-college education's understandings of each other's operating environments and needs. Short-term measurable objectives for the MSFC STEP are to: improve the teachers' content and pedagogy knowledge in science and/or mathematics, integrate applications from the teachers' STEP laboratory experiences into science and math curricula, increase the teachers' use of instructional technology, enhance the teachers' leadership skills by requiring them to present workshops and/or inservice programs for other teachers, require the support of the participating teacher(s) by the local school administration through a written commitment, and create networks and partnerships within the education community, both pre-college and college. The follow-up activities for the 1997 STEP included the following: academic-year questionnaire, site visits, academic-year workshop, verification of commitment of support, and additional NASA support.
Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...
2018-04-17
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
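The core idea, treating the stiff (acoustic) term implicitly and the slow term explicitly within one step, can be illustrated with a first-order IMEX Euler scheme on a model stiff ODE. This is a far simpler method than the ARK schemes evaluated in the paper, and the test problem and rate constant are illustrative only:

```python
import math

# Model problem: y' = lam*y + sin(t) with lam = -1000 (stiff linear part).
# Explicit Euler would need dt < ~2/|lam| = 0.002 for stability; the IMEX step
# below stays stable at dt = 0.01 because the stiff term is solved implicitly.
def imex_euler(y0, t0, t1, dt, lam=-1000.0):
    y, t = y0, t0
    while t < t1 - 1e-12:
        # Explicit contribution from the slow term sin(t); implicit solve for
        # the stiff linear term: y_new = y + dt*(lam*y_new + sin(t)).
        y = (y + dt * math.sin(t)) / (1.0 - dt * lam)
        t += dt
    return y

# The solution decays rapidly to the quasi-steady state ~sin(t)/1000 and
# remains bounded despite the step size exceeding the explicit stability limit.
print(imex_euler(1.0, 0.0, 1.0, 0.01))
```

The ARK methods in the paper generalize this splitting to higher order with multiple stages, and the "linearly implicit" option corresponds to replacing the nonlinear stage solve with a single linear solve.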
Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models
NASA Astrophysics Data System (ADS)
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.
2018-04-01
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
Effect on Baby-Friendly Hospital Steps When Hospitals Implement a Policy to Pay for Infant Formula.
Tarrant, Marie; Lok, Kris Y W; Fong, Daniel Y T; Wu, Kendra M; Lee, Irene L Y; Sham, Alice; Lam, Christine; Bai, Dorothy Li; Wong, Ka Lun; Wong, Emmy M Y; Chan, Noel P T; Dodgson, Joan E
2016-05-01
The Baby-Friendly Hospital Initiative requires hospitals to pay market price for infant formula. No studies have specifically examined the effect of hospitals paying for infant formula on breastfeeding mothers' exposure to Baby-Friendly steps. To investigate the effect of hospitals implementing a policy of paying for infant formula on new mothers' exposure to Baby-Friendly steps and examine the effect of exposure to Baby-Friendly steps on breastfeeding rates. We used a repeated prospective cohort study design. We recruited 2 cohorts of breastfeeding mother-infant pairs (n = 2470) in the immediate postnatal period from 4 Hong Kong public hospitals and followed them by telephone up to 12 months postpartum. We assessed participants' exposure to 6 Baby-Friendly steps by extracting data from the medical record and by maternal self-report. After hospitals began paying for infant formula, new mothers were more likely to experience 4 out of 6 Baby-Friendly steps. Breastfeeding initiation within the first hour increased from 28.7% to 45%, and in-hospital exclusive breastfeeding rates increased from 17.9% to 41.4%. The proportion of mothers who experienced all 6 Baby-Friendly steps increased from 4.8% to 20.5%. The risk of weaning was progressively higher among participants experiencing fewer Baby-Friendly steps. Each additional step experienced by new mothers decreased the risk of breastfeeding cessation by 8% (hazard ratio = 0.92; 95% CI, 0.89-0.95). After implementing a policy of paying for infant formula, breastfeeding mothers were exposed to more Baby-Friendly steps, and exposure to more steps was significantly associated with a lower risk of breastfeeding cessation. © The Author(s) 2015.
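To see what the reported hazard ratio implies, and assuming the per-step effects combine multiplicatively under the proportional-hazards model (an interpretation for illustration, not a figure stated in the study):

```python
# HR = 0.92 per additional Baby-Friendly step experienced (from the study).
# Assuming multiplicative per-step effects, the relative hazard of
# breastfeeding cessation for a mother who experienced all 6 steps
# versus none is 0.92**6.
hr_per_step = 0.92
hr_all_six = hr_per_step ** 6
print(round(hr_all_six, 3))  # 0.606, i.e. roughly a 39% lower hazard
```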
Metal- and additive-free photoinduced borylation of haloarenes.
Mfuh, Adelphe M; Schneider, Brett D; Cruces, Westley; Larionov, Oleg V
2017-03-01
Boronic acids and esters have critical roles in the areas of synthetic organic chemistry, molecular sensors, materials science, drug discovery, and catalysis. Many of the current applications of boronic acids and esters require materials with very low levels of transition metal contamination. Most of the current methods for the synthesis of boronic acids, however, require transition metal catalysts and ligands that must be removed via additional purification procedures. This protocol describes a simple, metal- and additive-free method for direct conversion of haloarenes to boronic acids and esters. This photoinduced borylation protocol does not require expensive and toxic metal catalysts or ligands, and it produces innocuous and easy-to-remove by-products. Furthermore, the reaction can be carried out on multigram scales in common-grade solvents without the need for reaction mixtures to be deoxygenated. The setup and purification steps are typically accomplished within 1-3 h. The reactions can be run overnight, and the protocol can be completed within 13-16 h. Two representative procedures that are described in this protocol provide details for preparation of a boronic acid (3-cyanophenylboronic acid) and a boronic ester (1,4-benzenediboronic acid bis(pinacol)ester). We also discuss additional details of the method that will be helpful in the application of the protocol to other haloarene substrates.
Stimulation of the human auditory nerve with optical radiation
NASA Astrophysics Data System (ADS)
Fishman, Andrew; Winkler, Piotr; Mierzwinski, Jozef; Beuth, Wojciech; Izzo Matic, Agnella; Siedlecki, Zygmunt; Teudt, Ingo; Maier, Hannes; Richter, Claus-Peter
2009-02-01
A novel, spatially selective method to stimulate cranial nerves has been proposed: contact-free stimulation with optical radiation. The radiation source is an infrared pulsed laser. This case report is the first to show that optical stimulation of the auditory nerve is possible in humans. The ethical approach to conducting any measurements or tests in humans requires efficacy and safety studies in animals, which have been conducted in gerbils. This report represents the first step in a translational research project to initiate a paradigm shift in neural interfaces. A patient was selected who required surgical removal of a large meningioma angiomatum WHO I by a planned transcochlear approach. Prior to cochlear ablation by drilling and subsequent tumor resection, the cochlear nerve was stimulated with a pulsed infrared laser at low radiation energies. Stimulation with optical radiation evoked compound action potentials from the human auditory nerve. Stimulation of the auditory nerve with infrared laser pulses is possible in the human inner ear. The finding is an important step in translating results from animal experiments to humans and furthers the development of a novel interface that uses optical radiation to stimulate neurons. Additional measurements are required to optimize the stimulation parameters.
One-step volumetric additive manufacturing of complex polymer structures
Shusteff, Maxim; Browar, Allison E. M.; Kelly, Brett E.; ...
2017-12-01
Two limitations of additive manufacturing methods that arise from layer-based fabrication are slow speed and geometric constraints (which include poor surface quality). Both limitations are overcome in the work reported here, introducing a new volumetric additive fabrication paradigm that produces photopolymer structures with complex non-periodic 3D geometries on a timescale of seconds. We implement this approach using holographic patterning of light fields, demonstrate the fabrication of a variety of structures, and study the properties of the light patterns and photosensitive resins required for this fabrication approach. The results indicate that low-absorbing resins containing ~0.1% photoinitiator, illuminated at modest powers (~10-100 mW), may be successfully used to build full structures in ~1-10 s.
One-step volumetric additive manufacturing of complex polymer structures
Shusteff, Maxim; Browar, Allison E. M.; Kelly, Brett E.; Henriksson, Johannes; Weisgraber, Todd H.; Panas, Robert M.; Fang, Nicholas X.; Spadaccini, Christopher M.
2017-01-01
Two limitations of additive manufacturing methods that arise from layer-based fabrication are slow speed and geometric constraints (which include poor surface quality). Both limitations are overcome in the work reported here, introducing a new volumetric additive fabrication paradigm that produces photopolymer structures with complex nonperiodic three-dimensional geometries on a time scale of seconds. We implement this approach using holographic patterning of light fields, demonstrate the fabrication of a variety of structures, and study the properties of the light patterns and photosensitive resins required for this fabrication approach. The results indicate that low-absorbing resins containing ~0.1% photoinitiator, illuminated at modest powers (~10 to 100 mW), may be successfully used to build full structures in ~1 to 10 s. PMID:29230437
One-step volumetric additive manufacturing of complex polymer structures.
Shusteff, Maxim; Browar, Allison E M; Kelly, Brett E; Henriksson, Johannes; Weisgraber, Todd H; Panas, Robert M; Fang, Nicholas X; Spadaccini, Christopher M
2017-12-01
Two limitations of additive manufacturing methods that arise from layer-based fabrication are slow speed and geometric constraints (which include poor surface quality). Both limitations are overcome in the work reported here, introducing a new volumetric additive fabrication paradigm that produces photopolymer structures with complex nonperiodic three-dimensional geometries on a time scale of seconds. We implement this approach using holographic patterning of light fields, demonstrate the fabrication of a variety of structures, and study the properties of the light patterns and photosensitive resins required for this fabrication approach. The results indicate that low-absorbing resins containing ~0.1% photoinitiator, illuminated at modest powers (~10 to 100 mW), may be successfully used to build full structures in ~1 to 10 s.
Scientists Shaping the Discussion
NASA Astrophysics Data System (ADS)
Abraham, J. A.; Weymann, R.; Mandia, S. A.; Ashley, M.
2011-12-01
Scientific studies which directly impact the larger society require an engagement between the scientists and the larger public. With respect to research on climate change, many third-party groups report on scientific findings and thereby serve as an intermediary between the scientist and the public. In many cases, the third-party reporting misinterprets the findings and conveys inaccurate information to the media and the public. To remedy this, many scientists are now taking a more active role in conveying their work directly to interested parties. In addition, some scientists are taking the further step of engaging with the general public to answer basic questions related to climate change - even on sub-topics which are unrelated to scientists' own research. Nevertheless, many scientists are reluctant to engage the general public or the media. The reasons for scientific reticence are varied but most commonly are related to fear of public engagement, concern about the time required to properly engage the public, or concerns about the impact to their professional reputations. However, for those scientists who are successful, these engagement activities provide many benefits. Scientists can increase the impact of their work, and they can help society make informed choices on significant issues, such as mitigating global warming. Here we provide some concrete steps that scientists can take to ensure that their public engagement is successful. These steps include: (1) cultivating relationships with reporters, (2) crafting clear, easy to understand messages that summarize their work, (3) relating science to everyday experiences, and (4) constructing arguments which appeal to a wide-ranging audience. With these steps, we show that scientists can efficiently deal with concerns that would otherwise inhibit their public engagement. Various resources will be provided that allow scientists to continue work on these key steps.
White, Jim F; Grisshammer, Reinhard
2010-09-07
Purification of recombinant membrane receptors is commonly achieved by use of an affinity tag followed by an additional chromatography step if required. This second step may exploit specific receptor properties such as ligand binding. However, the effects of multiple purification steps on protein yield and integrity are often poorly documented. We have previously reported a robust two-step purification procedure for the recombinant rat neurotensin receptor NTS1 to give milligram quantities of functional receptor protein. First, histidine-tagged receptors are enriched by immobilized metal affinity chromatography using Ni-NTA resin. Second, remaining contaminants in the Ni-NTA column eluate are removed by use of a subsequent neurotensin column yielding pure NTS1. Whilst the neurotensin column eluate contained functional receptor protein, we observed in the neurotensin column flow-through misfolded NTS1. To investigate the origin of the misfolded receptors, we estimated the amount of functional and misfolded NTS1 at each purification step by radio-ligand binding, densitometry of Coomassie stained SDS-gels, and protein content determination. First, we observed that correctly folded NTS1 suffers damage by exposure to detergent and various buffer compositions as seen by the loss of [(3)H]neurotensin binding over time. Second, exposure to the neurotensin affinity resin generated additional misfolded receptor protein. Our data point towards two ways by which misfolded NTS1 may be generated: Damage by exposure to buffer components and by close contact of the receptor to the neurotensin affinity resin. Because NTS1 in detergent solution is stabilized by neurotensin, we speculate that the occurrence of aggregated receptor after contact with the neurotensin resin is the consequence of perturbations in the detergent belt surrounding the NTS1 transmembrane core. Both effects reduce the yield of functional receptor protein.
Henson, Maile A.; Tucker, Charles J.; Zhao, Meilan; Dudek, Serena M.
2016-01-01
Activity-dependent pruning of synaptic contacts plays a critical role in shaping neuronal circuitry in response to the environment during postnatal brain development. Although there is compelling evidence that shrinkage of dendritic spines coincides with synaptic long-term depression (LTD), and that LTD is accompanied by synapse loss, whether NMDA receptor (NMDAR)-dependent LTD is a required step in the progression toward synapse pruning is still unknown. Using repeated applications of NMDA to induce LTD in dissociated rat neuronal cultures, we found that synapse density, as measured by colocalization of fluorescent markers for pre- and postsynaptic structures, was decreased irrespective of the presynaptic marker used, post-treatment recovery time, and the dendritic location of synapses. Consistent with previous studies, we found that synapse loss could occur without apparent net spine loss or cell death. Furthermore, synapse loss was unlikely to require direct contact with microglia, as the number of these cells was minimal in our culture preparations. Supporting a model by which NMDAR-LTD is required for synapse loss, the effect of NMDA on fluorescence colocalization was prevented by phosphatase and caspase inhibitors. In addition, gene transcription and protein translation also appeared to be required for loss of putative synapses. These data support the idea that NMDAR-dependent LTD is a required step in synapse pruning and contribute to our understanding of the basic mechanisms of this developmental process. PMID:27794462
Pyke, David A.; Chambers, Jeanne C.; Pellant, Mike; Miller, Richard F.; Beck, Jeffrey L.; Doescher, Paul S.; Roundy, Bruce A.; Schupp, Eugene W.; Knick, Steven T.; Brunson, Mark; McIver, James D.
2017-02-14
Sagebrush steppe ecosystems in the United States currently (2016) occur on only about one-half of their historical land area because of changes in land use, urban growth, and degradation of land, including invasions of non-native plants. The existence of many animal species depends on the existence of sagebrush steppe habitat. The greater sage-grouse (Centrocercus urophasianus) depends on large landscapes of intact sagebrush and perennial grass habitat for its existence. In addition, other sagebrush-obligate animals have similar requirements, and restoration of landscapes for greater sage-grouse will also benefit these animals. Once sagebrush lands are degraded, they may require restoration actions to make those lands viable habitat for supporting sagebrush-obligate animals, livestock, and wild horses, and to provide ecosystem services for humans now and for future generations. When a decision is made on where restoration treatments should be applied, there are a number of site-specific decisions managers face before selecting the appropriate type of restoration. This site-level decision tool for restoration of sagebrush steppe ecosystems is organized in nine steps. Step 1 describes the process of defining site-level restoration objectives. Step 2 describes the ecological site characteristics of the restoration site, covering soil chemistry and texture, soil moisture and temperature regimes, and the vegetation communities the site is capable of supporting. Step 3 compares the current vegetation to the plant communities associated with the site's state-and-transition models. Step 4 takes the manager through current land uses and past disturbances that may influence restoration success. Step 5 is a brief discussion of how weather before and after treatments may impact restoration success. Step 6 addresses restoration treatment types and their potential positive and negative impacts on the ecosystem and on habitats, especially for greater sage-grouse; we discuss when passive restoration options may be sufficient and when active restoration may be necessary to achieve restoration objectives. Step 7 addresses decisions regarding post-restoration livestock grazing management. Step 8 addresses monitoring of the restoration, including implementation monitoring as well as effectiveness monitoring. Step 9 takes the information learned from monitoring to determine how restoration actions might be adapted in the future to improve restoration success.
[Marketability of food supplements - criteria for the legal assessment].
Breitweg-Lehmann, Evelyn
2017-03-01
To be placed on the market legally, food supplements have to meet national and European food law regulations. This is true for all substances used as well as for the labeling on the packaging of and the advertising for food supplements. The food business operator is responsible for its compliance with all regulations. Therefore, in this article, a concise step-by-step assessment is presented, covering all necessary legal requirements to market food supplements. Additionally, all steps are visualized in a flow chart. All vitamins, minerals and other substances used have to meet the legal conditions. Food business operators have to make sure that their products do not contain medicinal ingredients based on their pharmacologic effect. It is prohibited to place medicinal products as food supplements on the market. Furthermore, food business operators have to make sure that their products are not non-authorized novel foods according to the novel food regulation (EC) no. 258/97. Also, food supplements have to meet the requirements of article 14 of Regulation (EC) No. 178/2002 concerning the safety of foodstuff. Food shall not be placed on the market if it is unsafe. For food supplements that fail the German food-related legal standards but are legally manufactured in another EU member state or are legally put into circulation, the importer requires the so-called general disposition, which must be applied for at the BVL according to § 54 of the German Food and Feed Act. Another possibility for food which fails to meet German food law is to apply for a certificate of exemption according to § 68 of the Food and Feed Act. The food business operator has to meet the harmonized regulations concerning maximum and minimum levels of additives, flavors and enzymes. The packaging has to meet the compulsory labeling as well the voluntary labeling, like health claims. The BVL is also the relevant authority for other tasks concerning food supplements. 
A figure shows all notifications since 2005 of food supplements in Germany at the BVL. Additionally, an overview for notifications in the rapid alert system for food and feed concerning food supplements is given as well as a brief introduction into the survey of food supplements marketed on the internet.
2016-11-01
DEFENSE INTELLIGENCE: Additional Steps Could Better Integrate Intelligence Input into DOD’s Acquisition of Major Weapon Systems. United States Government Accountability Office, Highlights of GAO-17-10, a report to congressional committees, November 2016. What GAO Found: The Department of Defense (DOD
Kraft, Timothy W.
2016-01-01
Purpose To examine the predictions of alternative models for the stochastic shut-off of activated rhodopsin (R*) and their implications for the interpretation of experimentally recorded single-photon responses (SPRs) in mammalian rods. Theory We analyze the transitions that an activated R* molecule undergoes as a result of successive phosphorylation steps and arrestin binding. We consider certain simplifying cases for the relative magnitudes of the reaction rate constants and derive the probability distributions for the time to arrestin binding. In addition to the conventional model in which R* catalytic activity declines in a graded manner with successive phosphorylations, we analyze two cases in which the decline in activity is assumed to occur not via multiple small steps upon each phosphorylation but via a single large step. We refer to these latter two cases as the binary R* shut-off and three-state R* shut-off models. Methods We simulate R*’s stochastic reactions numerically for the three models. In the simplifying cases for the ratio of rate constants in the binary and three-state models, we show that the probability distribution of the time to arrestin binding is accurately predicted. To simulate SPRs, we then integrate the differential equations for the downstream reactions using a standard model of the rod outer segment that includes longitudinal diffusion of cGMP and Ca2+. Results Our simulations of SPRs in the conventional model of graded shut-off of R* conform closely to the simulations in a recent study. However, the gain factor required to account for the observed mean SPR amplitude is higher than can be accounted for from biochemical experiments. In addition, a substantial minority of the simulated SPRs exhibit features that have not been reported in published experiments.
Our simulations of SPRs using the model of binary R* shut-off appear to conform closely to experimental results for wild type (WT) mouse rods, and the required gain factor conforms to biochemical expectations. However, for the arrestin knockout (Arr−/−) phenotype, the predictions deviated from experimental findings and led us to invoke a low-activity state that R* enters before arrestin binding. Our simulations of this three-state R* shut-off model are very similar to those of the binary model in the WT case but are preferred because they appear to accurately predict the mean SPRs for four mutant phenotypes, Arr+/−, Arr−/−, GRK1+/−, and GRK1−/−, in addition to the WT phenotype. When we additionally treated the formation and shut-off of activated phosphodiesterase (E*) as stochastic, the simulated SPRs appeared even more similar to real SPRs, and there was very little change in the ensemble mean and standard deviation or in the amplitude distribution. Conclusions We conclude that the conventional model of graded reduction in R* activity through successive phosphorylation steps appears to be inconsistent with experimental results. Instead, we find that two variants of a model in which R* activity initially remains high and then declines abruptly after several phosphorylation steps appear capable of providing a better description of experimentally measured SPRs. PMID:27375353
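The binary shut-off scheme described above lends itself to a simple Monte Carlo sketch: the time to arrestin binding is the sum of independent exponential waiting times, one per phosphorylation step, plus a final waiting time for arrestin binding. The step count and rate constants below are illustrative placeholders, not values from the study.

```python
import random

def time_to_arrestin(n_phos=6, k_phos=1.0, k_arr=5.0, rng=random):
    """One stochastic trial of a binary R* shut-off model: catalytic
    activity stays high through n_phos exponential phosphorylation steps
    (rate k_phos each), after which arrestin binds with rate k_arr.
    Rates and step count are illustrative, in arbitrary units."""
    t = sum(rng.expovariate(k_phos) for _ in range(n_phos))
    return t + rng.expovariate(k_arr)

random.seed(1)
times = [time_to_arrestin() for _ in range(20000)]
mean_t = sum(times) / len(times)
# Analytical mean of the distribution: n_phos/k_phos + 1/k_arr = 6.2
```

Because each stage is exponential, the resulting distribution is a convolution of exponentials (gamma-like for equal rates), which is the kind of closed-form prediction the paper checks its simulations against.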
Inhibitors of COP-mediated Transport and Cholera Toxin Action Inhibit Simian Virus 40 Infection
Richards, Ayanthi A.; Stang, Espen; Pepperkok, Rainer; Parton, Robert G.
2002-01-01
Simian virus 40 (SV40) is a nonenveloped virus that has been shown to pass from surface caveolae to the endoplasmic reticulum in an apparently novel infectious entry pathway. We now show that the initial entry step is blocked by brefeldin A and by incubation at 20°C. Subsequent to the entry step, the virus reaches a domain of the rough endoplasmic reticulum by an unknown pathway. This intracellular trafficking pathway is also brefeldin A sensitive. Infection is strongly inhibited by expression of GTP-restricted ADP-ribosylation factor 1 (Arf1) and Sar1 mutants and by microinjection of antibodies to βCOP. In addition, we demonstrate a potent inhibition of SV40 infection by the dipeptide N-benzoyl-oxycarbonyl-Gly-Phe-amide, which also inhibits late events in cholera toxin action. Our results identify novel inhibitors of SV40 infection and show that SV40 requires COPI- and COPII-dependent transport steps for successful infection. PMID:12006667
A Cross-Sectional Comparison of Army Advertising Attributes
1990-11-01
ARMY ADVERTISING ATTRIBUTES EXECUTIVE SUMMARY Research Requirement: In order to assess the impact of the Army’s advertising strategy and campaigns...Sample sizes varied from 4,875 to 4,926 for the NRS and from 3,569 to 3,602 for ACOMS. Improvement type themes. This advertising strategy would make...and college. I recommend that the Army focus advertising strategy on the Army as a positive step between high school and college in addition to work
Rep. Lamborn, Doug [R-CO-5
2014-11-18
House - 11/18/2014 Referred to the Committee on Oversight and Government Reform, and in addition to the Committee on Foreign Affairs, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the... (All Actions) Tracker: This bill has the status Introduced.
Evaluation of Military Fuels Using a Ford 6.7L Powerstroke Diesel Engine
2011-08-01
natural steady state values during idle testing steps. Engine oil cooler plumbing was factory integrated to the engine water jacket, thus not...Innospec Fuel Specialties DCI-4A. Per QPL-25017, the minimum effective treat rate of DCI-4A required an additive concentration level of 9ppm in the...dynamometer was used to control engine speed and dissipate load. Engine load was manipulated through the actuation of the engine throttle pedal assembly
2013-02-27
signed certification by supervisors that they understand and intend to comply with reporting policy. Recent psychological research suggests that the additional step of requiring a signed acknowledgment may make...prepared by the Defense Personnel Security Research Center (PERSEREC) as part of an effort to design and pilot test the proposed system. This report was
Rep. McDermott, Jim [D-WA-7
2014-01-09
House - 01/09/2014 Referred to the Committee on Energy and Commerce, and in addition to the Committee on Ways and Means, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee... (All Actions) Tracker: This bill has the status Introduced.
3D Printing In Zero-G ISS Technology Demonstration
NASA Technical Reports Server (NTRS)
Werkheiser, Niki; Cooper, Kenneth; Edmunson, Jennifer; Dunn, Jason; Snyder, Michael
2014-01-01
The National Aeronautics and Space Administration (NASA) has a long term strategy to fabricate components and equipment on-demand for manned missions to the Moon, Mars, and beyond. To support this strategy, NASA and Made in Space, Inc. are developing the 3D Printing In Zero-G payload as a Technology Demonstration for the International Space Station (ISS). The 3D Printing In Zero-G experiment ('3D Print') will be the first machine to perform 3D printing in space. The greater the distance from Earth and the longer the mission duration, the more difficult resupply becomes; this requires a change from the current spares, maintenance, repair, and hardware design model that has been used on the International Space Station (ISS) up until now. Given the extension of the ISS Program, which will inevitably result in replacement parts being required, the ISS is an ideal platform to begin changing the current model for resupply and repair to one that is more suitable for all exploration missions. 3D Printing, more formally known as Additive Manufacturing, is the method of building parts/objects/tools layer-by-layer. The 3D Print experiment will use extrusion-based additive manufacturing, which involves building an object out of plastic deposited by a wire-feed via an extruder head. Parts can be printed from data files loaded on the device at launch, as well as additional files uplinked to the device while on-orbit. The plastic extrusion additive manufacturing process is a low-energy, low-mass solution to many common needs on board the ISS. The 3D Print payload will serve as the ideal first step to proving that process in space. It is unreasonable to expect NASA to launch large blocks of material from which parts or tools can be traditionally machined, and even more unreasonable to fly up multiple drill bits that would be required to machine parts from aerospace-grade materials such as titanium 6-4 alloy and Inconel. 
The technology to produce parts on demand, in space, offers unique design options that are not possible through traditional manufacturing methods while offering cost-effective, high-precision, low-unit on-demand manufacturing. Thus, Additive Manufacturing capabilities are the foundation of an advanced manufacturing in space roadmap. The 3D Printing In Zero-G experiment will demonstrate the capability of utilizing Additive Manufacturing technology in space. This will serve as the enabling first step to realizing an additive manufacturing, print-on-demand "machine shop" for long-duration missions and sustaining human exploration of other planets, where there is extremely limited ability and availability of Earth-based logistics support. Simply put, Additive Manufacturing in space is a critical enabling technology for NASA. It will provide the capability to produce hardware on-demand, directly lowering cost and decreasing risk by having the exact part or tool needed in the time it takes to print. This capability will also provide the much-needed solution to the cost, volume, and up-mass constraints that prohibit launching everything needed for long-duration or long-distance missions from Earth, including spare parts and replacement systems. A successful mission for the 3D Printing In Zero-G payload is the first step to demonstrate the capability of printing on orbit. The data gathered and lessons learned from this demonstration will be applied to the next generation of additive manufacturing technology on orbit. It is expected that Additive Manufacturing technology will quickly become a critical part of any mission's infrastructure.
Koziel, David; Michaelis, Uwe; Kruse, Tobias
2018-08-01
Endotoxins contaminate proteins that are produced in E. coli. High levels of endotoxins can influence cellular assays and cause severe adverse effects when administered to humans. Thus, endotoxin removal is important in protein purification for academic research and in GMP manufacturing of biopharmaceuticals. Several methods exist to remove endotoxin, but often require additional downstream-processing steps, decrease protein yield and are costly. These disadvantages can be avoided by using an integrated endotoxin depletion (iED) wash-step that utilizes Triton X-114 (TX114). In this paper, we show that the iED wash-step is broadly applicable in most commonly used chromatographies: it reduces endotoxin by a factor of 10³ to 10⁶ during NiNTA-, MBP-, SAC-, GST-, Protein A and CEX-chromatography but not during AEX or HIC-chromatography. We characterized the iED wash-step using Design of Experiments (DoE) and identified optimal experimental conditions for application scenarios that are relevant to academic research or industrial GMP manufacturing. A single iED wash-step with 0.75% (v/v) TX114 added to the feed and wash buffer can reduce endotoxin levels to below 2 EU/ml or deplete most endotoxin while keeping the manufacturing costs as low as possible. The comprehensive characterization enables academia and industry to widely adopt the iED wash-step for a routine, efficient and cost-effective depletion of endotoxin during protein purification at any scale. Copyright © 2018. Published by Elsevier B.V.
Glass, Jennifer B.; Orphan, Victoria J.
2011-01-01
Fluxes of greenhouse gases to the atmosphere are heavily influenced by microbiological activity. Microbial enzymes involved in the production and consumption of greenhouse gases often contain metal cofactors. While extensive research has examined the influence of Fe bioavailability on microbial CO2 cycling, fewer studies have explored metal requirements for microbial production and consumption of the second- and third-most abundant greenhouse gases, methane (CH4), and nitrous oxide (N2O). Here we review the current state of biochemical, physiological, and environmental research on transition metal requirements for microbial CH4 and N2O cycling. Methanogenic archaea require large amounts of Fe, Ni, and Co (and some Mo/W and Zn). Low bioavailability of Fe, Ni, and Co limits methanogenesis in pure and mixed cultures and environmental studies. Anaerobic methane oxidation by anaerobic methanotrophic archaea (ANME) likely occurs via reverse methanogenesis since ANME possess most of the enzymes in the methanogenic pathway. Aerobic CH4 oxidation uses Cu or Fe for the first step depending on Cu availability, and additional Fe, Cu, and Mo for later steps. N2O production via classical anaerobic denitrification is primarily Fe-based, whereas aerobic pathways (nitrifier denitrification and archaeal ammonia oxidation) require Cu in addition to, or possibly in place of, Fe. Genes encoding the Cu-containing N2O reductase, the only known enzyme capable of microbial N2O conversion to N2, have only been found in classical denitrifiers. Accumulation of N2O due to low Cu has been observed in pure cultures and a lake ecosystem, but not in marine systems. 
Future research is needed on metalloenzymes involved in the production of N2O by enrichment cultures of ammonia oxidizing archaea, biological mechanisms for scavenging scarce metals, and possible links between metal bioavailability and greenhouse gas fluxes in anaerobic environments where metals may be limiting due to sulfide-metal scavenging. PMID:22363333
pH-Controlled two-step uncoating of influenza virus.
Li, Sai; Sieben, Christian; Ludwig, Kai; Höfer, Chris T; Chiantia, Salvatore; Herrmann, Andreas; Eghiaian, Frederic; Schaap, Iwan A T
2014-04-01
Upon endocytosis in its cellular host, influenza A virus transits via early to late endosomes. To efficiently release its genome, the composite viral shell must undergo significant structural rearrangement, but the exact sequence of events leading to viral uncoating remains largely speculative. In addition, no change in viral structure has ever been identified at the level of early endosomes, raising a question about their role. We performed AFM indentation on single viruses in conjunction with cellular assays under conditions that mimicked gradual acidification from early to late endosomes. We found that the release of the influenza genome requires sequential exposure to the pH of both early and late endosomes, with each step corresponding to changes in the virus mechanical response. Step 1 (pH 7.5-6) involves a modification of both hemagglutinin and the viral lumen and is reversible, whereas Step 2 (pH <6.0) involves M1 dissociation and major hemagglutinin conformational changes and is irreversible. Bypassing the early-endosomal pH step or blocking the envelope proton channel M2 precludes proper genome release and efficient infection, illustrating the importance of viral lumen acidification during the early endosomal residence for influenza virus infection. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
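The sequential pH dependence reported above can be summarized as a small state machine. The thresholds follow the abstract (Step 1 over pH 7.5-6.0, reversible; Step 2 below pH 6.0, irreversible), while the state names and the "dead-end" outcome for a bypassed Step 1 are illustrative labels, not terminology from the paper.

```python
def uncoating_outcome(ph_history):
    """Sketch of the two-step uncoating model as a state machine.
    Productive genome release requires Step 1 priming (early-endosomal
    pH 7.5-6.0, reversible) followed by Step 2 (pH < 6.0, irreversible)."""
    state = "native"
    for ph in ph_history:
        if state in ("released", "dead-end"):
            break                      # Step 2 is irreversible either way
        if ph < 6.0:
            # Late-endosomal acidification: productive only after Step 1
            state = "released" if state == "primed" else "dead-end"
        elif ph <= 7.5 and state == "native":
            state = "primed"           # early-endosomal acidification (Step 1)
        elif ph > 7.5 and state == "primed":
            state = "native"           # Step 1 reverses at neutral pH
    return state
```

Gradual acidification (early then late endosome) yields release, whereas dropping straight below pH 6.0 bypasses Step 1 and precludes proper release, mirroring the experimental observation.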
Number development and developmental dyscalculia.
von Aster, Michael G; Shalev, Ruth S
2007-11-01
There is a growing consensus that the neuropsychological basis of developmental dyscalculia (DD) is a genetically determined disorder of 'number sense', a term denoting the ability to represent and manipulate numerical magnitude nonverbally on an internal number line. However, this spatially-oriented number line develops during elementary school and requires additional cognitive components including working memory and number symbolization (language). Thus, there may be children with familial-genetic DD with deficits limited to number sense and others with DD and comorbidities such as language delay, dyslexia, or attention-deficit-hyperactivity disorder. This duality is supported by epidemiological data indicating that two-thirds of children with DD have comorbid conditions while one-third have pure DD. Clinically, they differ according to their profile of arithmetic difficulties. fMRI studies indicate that parietal areas (important for number functions), and frontal regions (dominant for executive working memory and attention functions), are under-activated in children with DD. A four-step developmental model that allows prediction of different pathways for DD is presented. The core-system representation of numerical magnitude (cardinality; step 1) provides the meaning of 'number', a precondition to acquiring linguistic (step 2), and Arabic (step 3) number symbols, while a growing working memory enables neuroplastic development of an expanding mental number line during school years (step 4). Therapeutic and educational interventions can be drawn from this model.
Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates
Zhang, Hao; Li, Xianqi; Park, Jewook; Li, An-Ping
2017-01-01
We present an image postprocessing framework for Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan line noise at fast scan rates while preserving features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large scale images and four steps for atomic scale images. For large scale images, we first apply an image registration method to each line to align its forward and backward scans. In the second step we apply a “rubber band” model, which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurements from a copper(111) surface indicate the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow scan data. For atomic scale images, an additional first step to remove strong line-by-line background fluctuations and a fourth step of replacing the postprocessed image by its ranking map as the final atomic resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data. PMID:29362664
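The first step above (per-line registration of the forward and backward scans) can be illustrated with a brute-force integer-shift search. This is a pure-Python sketch on synthetic data; the paper's actual registration and the CIAFA filtering stage are considerably more elaborate.

```python
import math

def align_scan_lines(forward, backward, max_shift=5):
    """Estimate the lateral offset between a forward scan line and its
    backward counterpart by minimizing the mean squared difference over
    integer shifts. The backward trace is reversed first, since it is
    acquired in the opposite scan direction."""
    rev = backward[::-1]
    n = len(forward)
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(forward[i], rev[i + s]) for i in range(n) if 0 <= i + s < n]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# Synthetic topography with a known 3-pixel lag in the backward trace
fwd = [math.sin(0.3 * i) for i in range(50)]
bwd = ([0.0] * 3 + fwd[:-3])[::-1]
```

In practice the registration would use sub-pixel (e.g. cross-correlation) methods, but the integer search already shows how the forward/backward lag caused by fast scanning is estimated and removed line by line.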
The "Motor" in Implicit Motor Sequence Learning: A Foot-stepping Serial Reaction Time Task.
Du, Yue; Clark, Jane E
2018-05-03
This protocol describes a modified serial reaction time (SRT) task used to study implicit motor sequence learning. Unlike the classic SRT task that involves finger-pressing movements while sitting, the modified SRT task requires participants to step with both feet while maintaining a standing posture. This stepping task necessitates whole body actions that impose postural challenges. The foot-stepping task complements the classic SRT task in several ways. The foot-stepping SRT task is a better proxy for the daily activities that require ongoing postural control, and thus may help us better understand sequence learning in real-life situations. In addition, response time serves as an indicator of sequence learning in the classic SRT task, but it is unclear whether response time, reaction time (RT) representing mental process, or movement time (MT) reflecting the movement itself, is a key player in motor sequence learning. The foot-stepping SRT task allows researchers to disentangle response time into RT and MT, which may clarify how motor planning and movement execution are involved in sequence learning. Lastly, postural control and cognition are interactively related, but little is known about how postural control interacts with learning motor sequences. With a motion capture system, the movement of the whole body (e.g., the center of mass (COM)) can be recorded. Such measures allow us to reveal the dynamic processes underlying discrete responses measured by RT and MT, and may aid in elucidating the relationship between postural control and the explicit and implicit processes involved in sequence learning. Details of the experimental set-up, procedure, and data processing are described. The representative data are adopted from one of our previous studies. Results are related to response time, RT, and MT, as well as the relationship between the anticipatory postural response and the explicit processes involved in implicit motor sequence learning.
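The decomposition of response time described in the protocol can be expressed directly: reaction time (RT) runs from stimulus onset to foot liftoff, and movement time (MT) from liftoff to landing on the target. The event names and the example timestamps below are illustrative, not values from the study.

```python
def decompose_response(stimulus_t, liftoff_t, landing_t):
    """Split a stepping trial's total response time into RT (stimulus
    onset to foot liftoff, the mental-processing component) and MT
    (liftoff to landing, the movement-execution component).
    All times are in seconds from a common clock."""
    rt = liftoff_t - stimulus_t
    mt = landing_t - liftoff_t
    return rt, mt, rt + mt

# Hypothetical trial: stimulus at 0 s, liftoff at 0.42 s, landing at 0.95 s
rt, mt, total = decompose_response(stimulus_t=0.00, liftoff_t=0.42, landing_t=0.95)
```

Keeping RT and MT separate is what lets the foot-stepping task distinguish whether sequence learning shows up in motor planning, in movement execution, or in both.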
Writing Electron Dot Structures: Abstract of Issue 9905M
NASA Astrophysics Data System (ADS)
Magnell, Kenneth R.
1999-10-01
Writing Electron Dot Structures is a computer program for Mac OS that provides drill with feedback for students learning to write electron dot structures. While designed for students in the first year of college general chemistry it may also be used by high school chemistry students. A systematic method similar to that found in many general chemistry texts is employed:
Screens from Writing Electron Dot Structures

Hardware and Software Requirements
Hardware and software requirements for Writing Electron Dot Structures are shown in Table 1. Ordering and Information Journal of Chemical Education Software (or JCE Software) is a publication of the Journal of Chemical Education. There is an order form inserted in this issue that provides prices and other ordering information. If this card is not available or if you need additional information, contact: JCE Software, University of Wisconsin-Madison, 1101 University Avenue, Madison, WI 53706-1396; phone: 608/262-5153 or 800/991-5534; fax: 608/265-8094; email: jcesoft@chem.wisc.edu. Information about all of our publications (including abstracts, descriptions, updates) is available from our World Wide Web site at: http://JChemEd.chem.wisc.edu/JCESoft/
A three-image algorithm for hard x-ray grating interferometry.
Pelliccia, Daniele; Rigon, Luigi; Arfelli, Fulvia; Menk, Ralf-Hendrik; Bukreeva, Inna; Cedola, Alessia
2013-08-12
A three-image method to extract absorption, refraction and scattering information in hard x-ray grating interferometry is presented. The method comprises a post-processing approach alternative to the conventional phase-stepping procedure and is inspired by a similar three-image technique developed for analyzer-based x-ray imaging. Results obtained with this algorithm are quantitatively comparable with those of phase stepping. The method can be further extended to samples with negligible scattering, where only two images are needed to separate the absorption and refraction signals. Thanks to the limited number of images required, this technique is a viable route to bio-compatible imaging with an x-ray grating interferometer. In addition, our method elucidates and strengthens the formal and practical analogies between grating interferometry and the (non-interferometric) diffraction enhanced imaging technique.
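The idea of recovering three signals from three images can be sketched as a three-point sinusoid fit. With the standard stepping-curve model I(ψ) = a0 + a1·cos(ψ + φ), three images taken at ψ = 0, 2π/3, 4π/3 determine all three parameters: a0 carries absorption, φ refraction, and the visibility a1/a0 (compared against a reference scan) the scattering signal. The equally spaced grating positions assumed here are illustrative; the paper's exact estimator may differ.

```python
import math

def three_image_retrieval(I1, I2, I3):
    """Fit I(psi) = a0 + a1*cos(psi + phi) from three samples at
    psi = 0, 2*pi/3, 4*pi/3 using the discrete Fourier component
    at the stepping frequency. Returns (a0, a1, phi)."""
    a0 = (I1 + I2 + I3) / 3.0
    psis = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)
    c = (2.0 / 3.0) * sum(I * math.cos(p) for I, p in zip((I1, I2, I3), psis))
    s = (2.0 / 3.0) * sum(I * math.sin(p) for I, p in zip((I1, I2, I3), psis))
    # c = a1*cos(phi), s = -a1*sin(phi)
    return a0, math.hypot(c, s), math.atan2(-s, c)

# Recover parameters from a synthetic stepping curve (a0=10, a1=2, phi=0.7)
psis = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)
imgs = [10.0 + 2.0 * math.cos(p + 0.7) for p in psis]
a0, a1, phi = three_image_retrieval(*imgs)
```

With only two unknowns (negligible scattering, so a1 is known from a reference), the same algebra closes with two images, which is the reduction the abstract mentions.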
Scenarios for exercising technical approaches to verified nuclear reductions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doyle, James
2010-01-01
Presidents Obama and Medvedev in April 2009 committed to a continuing process of step-by-step nuclear arms reductions beyond the new START treaty that was signed April 8, 2010 and to the eventual goal of a world free of nuclear weapons. In addition, the US Nuclear Posture Review released April 6, 2010 commits the US to initiate a comprehensive national research and development program to support continued progress toward a world free of nuclear weapons, including expanded work on verification technologies and the development of transparency measures. It is impossible to predict the specific directions that US-RU nuclear arms reductions will take over the next 5-10 years. Additional bilateral treaties could be reached requiring effective verification as indicated by statements made by the Obama administration. There could also be transparency agreements or other initiatives (unilateral, bilateral or multilateral) that require monitoring with a standard of verification lower than formal arms control, but still needing to establish confidence among domestic, bilateral and multilateral audiences that declared actions are implemented. The US Nuclear Posture Review and other statements give some indication of the kinds of actions and declarations that may need to be confirmed in a bilateral or multilateral setting. Several new elements of the nuclear arsenals could be directly limited. For example, it is likely that both strategic and nonstrategic nuclear warheads (deployed and in storage), warhead components, and aggregate stocks of such items could be accountable under a future treaty or transparency agreement. In addition, new initiatives or agreements may require the verified dismantlement of a certain number of nuclear warheads over a specified time period. Eventually procedures for confirming the elimination of nuclear warheads, components and fissile materials from military stocks will need to be established.
This paper is intended to provide useful background information for establishing a conceptual approach to a five-year technical program plan for research and development of nuclear arms reductions verification and transparency technologies and procedures.
Li, Zuohua; Cui, Yanhui; Chen, Jun; Deng, Lianlin; Wu, Junwei
2016-01-01
Binary transition metal oxides have been regarded as among the most promising candidates for high-performance electrodes in energy storage devices, since they can offer high electrochemical activity and high capacity. Rational design of nanosized metal oxide/carbon composite architectures has proven to be an effective way to improve electrochemical performance. In this work, the (Co,Mn)3O4 spinel was synthesized and anchored on reduced graphene oxide (rGO) nanosheets in a single facile hydrothermal step with H2O2 as an additive, with no additional calcination required. Analysis showed that this method gives a mixed spinel, i.e. (Co,Mn)3O4, having 2+ and 3+ Co and Mn ions in both the octahedral and tetrahedral sites of the spinel structure, with a nanocubic morphology roughly 20 nm in size. The nanocubes are bound uniformly onto the rGO nanosheets in a single hydrothermal process. The as-prepared (Co,Mn)3O4/rGO composite was then characterized as an anode material for Li-ion batteries (LIBs). It delivers 1130.6 mAh g-1 at a current density of 100 mA g-1 with 98% coulombic efficiency after 140 cycles. At 1000 mA g-1, the capacity is still maintained at 750 mAh g-1, demonstrating excellent rate capability. Therefore, the one-step process is a facile and promising method to fabricate metal oxide/rGO composite materials for energy storage applications. PMID:27788161
Pyroprocessing of Light Water Reactor Spent Fuels Based on an Electrochemical Reduction Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohta, Hirokazu; Inoue, Tadashi; Sakamura, Yoshiharu
A concept of pyroprocessing light water reactor (LWR) spent fuels based on an electrochemical reduction technology is proposed, and the material balance of the processing of mixed oxide (MOX) or high-burnup uranium oxide (UO2) spent fuel is evaluated. Furthermore, a burnup analysis for metal fuel fast breeder reactors (FBRs) is conducted on low-decontamination materials recovered by pyroprocessing. In the case of processing MOX spent fuel (40 GWd/t), UO2 is separately collected for ~60 wt% of the spent fuel in advance of the electrochemical reduction step, and the product recovered through the rare earth (RE) removal step, which has the composition uranium:plutonium:minor actinides:fission products (FPs) = 76.4:18.4:1.7:3.5, can be applied as an ingredient of FBR metal fuel without a further decontamination process. On the other hand, the electroreduced alloy of high-burnup UO2 spent fuel (48 GWd/t) requires further decontamination of residual FPs by an additional process such as electrorefining even if RE FPs are removed from the alloy, because the recovered plutonium (Pu) is accompanied by almost the same amount of FPs in addition to RE. However, the amount of treated material in the electrorefining step is reduced to ~10 wt% of the total spent fuel owing to the prior UO2 recovery step. These results reveal that the application of electrochemical reduction technology to LWR spent oxide fuel is a promising concept for providing FBR metal fuel by a rationalized process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durham, M.D.
The purpose of this research program is to identify and evaluate a variety of additives capable of increasing particle cohesion which could be used for improving collection efficiency in an ESP. A three-phase screening process will be used to provide the evaluation of many additives in a logical and cost-effective manner. The three-step approach involves the following experimental setups:
1. Provide a preliminary screening in the laboratory by measuring the effects of various conditioning agents on reentrainment of flyash particles in an electric field operating at simulated flue gas conditions.
2. Evaluate the successful additives using a 100 acfm bench-scale ESP operating on actual flue gas.
3. Obtain the data required for scaling up the technology by testing the two or three most promising conditioning agents at the pilot scale.
Hachisuka, Shin-Ichi; Sato, Takaaki; Atomi, Haruyuki
2018-06-01
Many organisms possess pathways that regenerate NAD+ from its degradation products, and two pathways are known to salvage NAD+ from nicotinamide (Nm). One is a four-step pathway that proceeds through deamination of Nm to nicotinic acid (Na) by Nm deamidase and phosphoribosylation to nicotinic acid mononucleotide (NaMN), followed by adenylylation and amidation. Another is a two-step pathway that does not involve deamination and directly proceeds with the phosphoribosylation of Nm to nicotinamide mononucleotide (NMN), followed by adenylylation. Judging from genome sequence data, the hyperthermophilic archaeon Thermococcus kodakarensis is supposed to utilize the four-step pathway, but the fact that the adenylyltransferase encoded by TK0067 recognizes both NMN and NaMN also raises the possibility of a two-step salvage mechanism. Here, we examined the substrate specificity of the recombinant TK1676 protein, annotated as nicotinic acid phosphoribosyltransferase. The TK1676 protein displayed significant activity toward Na and phosphoribosyl pyrophosphate (PRPP) and only trace activity with Nm and PRPP. We further performed genetic analyses on TK0218 (quinolinic acid phosphoribosyltransferase) and TK1650 (Nm deamidase), involved in de novo biosynthesis and four-step salvage of NAD+, respectively. The ΔTK0218 mutant cells displayed growth defects in a minimal synthetic medium, but growth was fully restored with the addition of Na or Nm. The ΔTK0218 ΔTK1650 mutant cells did not display growth in the minimal medium, and growth was restored with the addition of Na but not Nm. The enzymatic and genetic analyses strongly suggest that NAD+ salvage in T. kodakarensis requires deamination of Nm and proceeds through the four-step pathway. IMPORTANCE Hyperthermophiles must constantly deal with increased degradation rates of their biomolecules due to their high growth temperatures.
Here, we identified the pathway that regenerates NAD+ from nicotinamide (Nm) in the hyperthermophilic archaeon Thermococcus kodakarensis. The organism utilizes a four-step pathway that initially hydrolyzes the amide bond of Nm to generate nicotinic acid (Na), followed by phosphoribosylation, adenylylation, and amidation. Although the two-step pathway, consisting of only phosphoribosylation of Nm and adenylylation, seems to be more efficient, Nm mononucleotide in the two-step pathway is much more thermolabile than Na mononucleotide, the corresponding intermediate in the four-step pathway. Although NAD+ itself is thermolabile, this may represent an example of a metabolism that has evolved to avoid the use of thermolabile intermediates. Copyright © 2018 American Society for Microbiology.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes; in addition, the extra work required by iterative schemes can be structured to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors.
By choosing the number of vectors to be a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a non-iterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies and matrix additions and subtractions, can be vectorized and parallelized efficiently.
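The restarted-GMRES procedure described above (build N orthogonal Krylov vectors, then minimize the L2 norm of the residual over their span) can be sketched as follows; the test matrix, dimensions, and restart count are illustrative assumptions, not the authors' flow solver:

```python
import numpy as np

def gmres_cycle(A, b, x0, n_vec=5):
    """One restart cycle of GMRES(n_vec): minimize ||b - A x|| over the
    Krylov subspace spanned by n_vec orthogonal (Arnoldi) vectors."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0.0:
        return x0
    Q = np.zeros((len(b), n_vec + 1))       # orthonormal basis vectors
    H = np.zeros((n_vec + 1, n_vec))        # upper Hessenberg projection
    Q[:, 0] = r0 / beta
    for j in range(n_vec):                  # Arnoldi orthogonalization
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:             # breakdown: subspace is invariant
            break
        Q[:, j + 1] = w / H[j + 1, j]
    # Small least-squares problem: minimize ||beta*e1 - H y|| (the L2 norm)
    e1 = np.zeros(n_vec + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return x0 + Q[:, :n_vec] @ y

# Usage on a small, well-conditioned symmetric test system (illustrative)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
A = A @ A.T + 20.0 * np.eye(20)
b = rng.standard_normal(20)
x = np.zeros(20)
for _ in range(10):                         # restarted GMRES(5)
    x = gmres_cycle(A, b, x, n_vec=5)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Each cycle costs N matrix-vector products plus a tiny (N+1)-by-N least-squares solve, which is the (N+1)-times-a-noniterative-scheme cost estimate quoted above.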
Fire-Retardant, Self-Extinguishing Inorganic/Polymer Composite Memory Foams.
Chatterjee, Soumyajyoti; Shanmuganathan, Kadhiravan; Kumaraswamy, Guruswamy
2017-12-27
Polymeric foams used in furniture and automotive and aircraft seating applications rely on the incorporation of environmentally hazardous fire-retardant additives to meet fire safety norms. This has occasioned significant interest in novel approaches to the elimination of fire-retardant additives. Foams based on polymer nanocomposites or on fire-retardant coatings show compromised mechanical performance and require additional processing steps. Here, we demonstrate a one-step preparation of a fire-retardant ice-templated inorganic/polymer hybrid that does not incorporate fire-retardant additives. The hybrid foams exhibit excellent mechanical properties. They are elastic to large compressional strain, despite the high inorganic content. They also exhibit tunable mechanical recovery, including viscoelastic "memory". These hybrid foams are prepared using ice-templating that relies on a green solvent, water, as a porogen. Because these foams are predominantly comprised of inorganic components, they exhibit exceptional fire retardance in torch burn tests and are self-extinguishing. After being subjected to a flame, the foam retains its porous structure and does not drip or collapse. In micro-combustion calorimetry, the hybrid foams show a peak heat release rate that is only 25% that of commercial fire-retardant polyurethanes. Finally, we demonstrate that we can use ice-templating to prepare hybrid foams with different inorganic colloids, including cheap commercial materials. We also demonstrate that ice-templating is amenable to scale-up, without loss of mechanical performance or fire-retardant properties.
NASA Astrophysics Data System (ADS)
Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian
2018-02-01
This work presents the coregistered, orthorectified and mosaiced high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered geotiff image, a corresponding footprint and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of Mars' static and dynamic features. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require the extensive use of human resources.
Ongoing Development of a Series Bosch Reactor System
NASA Technical Reports Server (NTRS)
Abney, Morgan B; Mansell, J. Matthew; Stanley, Christine; Edmunson, Jennifer; DuMez, Samuel J.; Chen, Kevin
2013-01-01
Future manned missions to deep space or planetary surfaces will undoubtedly incorporate highly robust, efficient, and regenerable life support systems that require minimal consumables. To meet this requirement, NASA continues to explore a Bosch-based carbon dioxide reduction system to recover oxygen from CO2. In order to improve the equivalent system mass of Bosch systems, we seek to design and test a "Series Bosch" system in which two reactors in series are optimized for the two steps of the reaction, as well as to explore the use of in situ materials as carbon deposition catalysts. Here we report recent developments in this effort including assembly and initial testing of a Reverse Water-Gas Shift reactor (RWGSr) and initial testing of two gas separation membranes. The RWGSr was sized to reduce CO2 produced by a crew of four to carbon monoxide as the first stage in a Series Bosch system. The gas separation membranes, necessary to recycle unreacted hydrogen and CO2, were similarly sized. Additionally, we report results of preliminary experiments designed to determine the catalytic properties of Martian regolith simulant for the carbon formation step.
The Bacillus subtilis GntR family repressor YtrA responds to cell wall antibiotics.
Salzberg, Letal I; Luo, Yun; Hachmann, Anna-Barbara; Mascher, Thorsten; Helmann, John D
2011-10-01
The transglycosylation step of cell wall synthesis is a prime antibiotic target because it is essential and specific to bacteria. Two antibiotics, ramoplanin and moenomycin, target this step by binding to the substrate lipid II and the transglycosylase enzyme, respectively. Here, we compare the ramoplanin and moenomycin stimulons in the Gram-positive model organism Bacillus subtilis. Ramoplanin strongly induces the LiaRS two-component regulatory system, while moenomycin almost exclusively induces genes that are part of the regulon of the extracytoplasmic function (ECF) σ factor σ(M). Ramoplanin additionally induces the ytrABCDEF and ywoBCD operons, which are not part of a previously characterized antibiotic-responsive regulon. Cluster analysis reveals that these two operons are selectively induced by a subset of cell wall antibiotics that inhibit lipid II function or recycling. Repression of both operons requires YtrA, which recognizes an inverted repeat in front of its own operon and in front of ywoB. These results suggest that YtrA is an additional regulator of cell envelope stress responses.
Khairallah, George N; da Silva, Gabriel; O'Hair, Richard A J
2014-10-06
A combination of gas-phase ion-molecule reaction experiments and theoretical kinetic modeling is used to examine how a salt can influence the kinetic basicity of organometallates reacting with water. [HC≡CLiCl](-) reacts with water more rapidly than [HC≡CMgCl2](-), consistent with the higher reactivity of organolithium versus organomagnesium reagents. Addition of LiCl to [HC≡CLiCl](-) or [HC≡CMgCl2](-) enhances their reactivity towards water by a factor of about 2, while addition of MgCl2 to [HC≡CMgCl2](-) enhances its reactivity by a factor of about 4. Ab initio calculations coupled with master equation/RRKM theory kinetic modeling show that these reactions proceed via a mechanism involving formation of a water adduct followed by rearrangement, proton transfer, and acetylene elimination as either discrete or concerted steps. Both the energy and entropy requirements for these elementary steps need to be considered in order to explain the observed kinetics. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Rep. Crawford, Eric A. "Rick" [R-AR-1]
2014-02-10
House - 02/10/2014 Referred to the Committee on Ways and Means, and in addition to the Committee on Rules, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned.
A Cost Model for Testing Unmanned and Autonomous Systems of Systems
2011-02-01
those risks. In addition, the fundamental methods presented by Aranha and Borba to include the complexity and sizing of tests for UASoS, can be expanded...used as an input for test execution effort estimation models (Aranha & Borba, 2007). Such methodology is very relevant to this work because as a UASoS...calculate the test effort based on the complexity of the SoS. However, Aranha and Borba define test size as the number of steps required to complete
Rep. Blumenauer, Earl [D-OR-3]
2018-06-13
House - 06/13/2018 Referred to the Committee on House Administration, and in addition to the Committee on Science, Space, and Technology, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of...
Rep. Weber, Randy K., Sr. [R-TX-14]
2018-05-21
House - 05/21/2018 Referred to the Committee on Transportation and Infrastructure, and in addition to the Committee on Natural Resources, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of...
Rep. Garamendi, John [D-CA-3]
2018-05-21
House - 05/21/2018 Referred to the Committee on Energy and Commerce, and in addition to the Committee on Foreign Affairs, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee...
Sensitivity and specificity: twin goals of proteomics assays. Can they be combined?
Wilson, Robert
2013-04-01
A major ambition of proteomics is the provision of assays that can diagnose disease and monitor therapies. These assays are required to be sensitive and specific for individual proteins, and in most cases to quantify more than one protein in the same sample. The two main technologies currently used for proteomics assays are based on mass spectrometry and panels of affinity molecules such as antibodies. In the first part of this review the most sensitive existing assays based on these technologies are described and compared with the gold standard of ELISA. Analytical sensitivity is defined and related to the limit of detection, and analytical specificity is defined and shown to depend on molecular proofreading steps, similar to those applied in living systems whenever there is a need for high fidelity. It is shown that at present neither mass spectrometry nor panels of affinity molecules offer the necessary combination of sensitivity and specificity required for multiplexed assays. In the second part of this review the growing numbers of assays that use additional proofreading steps to combine sensitivity with specificity are described. These include assays based on proximity ligation and slow off-rate modified aptamers. Finally the review considers what improvements might be possible in the near future, and concludes that further development of proteomics assays incorporating advanced proofreading steps are most likely to provide the necessary combination of sensitivity and specificity, without incurring high development costs.
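The gain from molecular proofreading described above can be illustrated with a short numerical sketch, in the style of a proximity-ligation assay where a signal requires two independent recognition events; the binding probabilities are illustrative assumptions, not measured values:

```python
# Hedged sketch: why an extra proofreading step sharpens specificity.
# A proofread assay reports a hit only when two independent affinity
# reagents both bind the same molecule. All numbers are illustrative.

p_true = 0.95    # assumed chance one reagent binds its real target
p_cross = 0.05   # assumed chance one reagent cross-reacts with a wrong protein

# One-step assay: the false-positive rate is just the cross-reaction rate
fp_one_step = p_cross

# Two-step (proofread) assay: both reagents must mis-bind independently,
# so false positives are suppressed quadratically ...
fp_two_step = p_cross ** 2

# ... while sensitivity drops only mildly (both must bind the true target)
sens_one_step = p_true
sens_two_step = p_true ** 2

print(f"false-positive rate: {fp_one_step:.4f} -> {fp_two_step:.4f}")
print(f"sensitivity:         {sens_one_step:.3f} -> {sens_two_step:.3f}")
```

The asymmetry (quadratic suppression of false positives versus a modest sensitivity cost) is the arithmetic behind combining sensitivity with specificity through proofreading steps.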
Has1 regulates consecutive maturation and processing steps for assembly of 60S ribosomal subunits
Dembowski, Jill A.; Kuo, Benjamin; Woolford, John L.
2013-01-01
Ribosome biogenesis requires ∼200 assembly factors in Saccharomyces cerevisiae. The pre-ribosomal RNA (rRNA) processing defects associated with depletion of most of these factors have been characterized. However, how assembly factors drive the construction of ribonucleoprotein neighborhoods and how structural rearrangements are coupled to pre-rRNA processing are not understood. Here, we reveal ATP-independent and ATP-dependent roles of the Has1 DEAD-box RNA helicase in consecutive pre-rRNA processing and maturation steps for construction of 60S ribosomal subunits. Has1 associates with pre-60S ribosomes in an ATP-independent manner. Has1 binding triggers exonucleolytic trimming of 27SA3 pre-rRNA to generate the 5′ end of 5.8S rRNA and drives incorporation of ribosomal protein L17 with domain I of 5.8S/25S rRNA. ATP-dependent activity of Has1 promotes stable association of additional domain I ribosomal proteins that surround the polypeptide exit tunnel, which are required for downstream processing of 27SB pre-rRNA. Furthermore, in the absence of Has1, aberrant 27S pre-rRNAs are targeted for irreversible turnover. Thus, our data support a model in which Has1 helps to establish domain I architecture to prevent pre-rRNA turnover and couples domain I folding with consecutive pre-rRNA processing steps. PMID:23788678
Gidwani, S; Davidson, N; Trigkilidas, D; Blick, C; Harborne, R; Maurice, H D
2007-03-01
The British Orthopaedic Association published guidelines on the care of fragility fracture patients in 2003. A section of these guidelines relates to the secondary prevention of osteoporotic fractures. The objective of this audit was to compare practice in our fracture clinic to these guidelines, and take steps to improve our practice if required. We retrospectively audited the treatment of all 462 new patients seen in January and February 2004. Using case note analysis, 38 patients who had sustained probable fragility fractures were selected. Six months post-injury, a telephone questionnaire was administered to confirm the nature of the injury and to find out whether the patient had been assessed, investigated or treated for osteoporosis. A second similar audit was conducted a year later, after steps had been taken to improve awareness amongst the orthopaedic staff and prompt referral. During the first audit period, only 5 of 38 patients who should have been assessed and investigated for osteoporosis were either referred or offered referral. This improved to 23 out of 43 patients during the second audit period. Improvements in referral and assessment rates of patients at risk of further fragility fractures can be achieved relatively easily by taking steps to increase awareness amongst orthopaedic surgeons, although additional strategies and perhaps the use of automated referral systems may be required to achieve referral rates nearer 100%.
To repair or not to repair: with FAVOR there is no question
NASA Astrophysics Data System (ADS)
Garetto, Anthony; Schulz, Kristian; Tabbone, Gilles; Himmelhaus, Michael; Scheruebl, Thomas
2016-10-01
In the mask shop the challenges associated with today's advanced technology nodes, both technical and economic, are becoming increasingly difficult. The constant drive to continue shrinking features means more masks per device, smaller manufacturing tolerances and more complexity along the manufacturing line with respect to the number of manufacturing steps required. Furthermore, the extremely competitive nature of the industry makes it critical for mask shops to optimize asset utilization and processes in order to maximize their competitive advantage and, in the end, profitability. Full maximization of profitability in such a complex and technologically sophisticated environment simply cannot be achieved without the use of smart automation. Smart automation allows productivity to be maximized through better asset utilization and process optimization. Reliability is improved through the minimization of manual interactions leading to fewer human error contributions and a more efficient manufacturing line. In addition to these improvements in productivity and reliability, extra value can be added through the collection and cross-verification of data from multiple sources which provides more information about our products and processes. When it comes to handling mask defects, for instance, the process consists largely of time consuming manual interactions that are error prone and often require quick decisions from operators and engineers who are under pressure. The handling of defects itself is a multiple step process consisting of several iterations of inspection, disposition, repair, review and cleaning steps. Smaller manufacturing tolerances and features with higher complexity contribute to a higher number of defects which must be handled as well as a higher level of complexity. In this paper the recent efforts undertaken by ZEISS to provide solutions which address these challenges, particularly those associated with defectivity, will be presented. 
From the automation of aerial image analysis to the use of data-driven decision making to predict and propose an optimized back-end-of-line process flow, productivity and reliability improvements are targeted by smart automation. Additionally, the generation of the ideal aerial image from the design and several repair enhancement features offer additional capabilities to improve the efficiency and yield associated with defect handling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labutti, Kurt; Foster, Brian; Lapidus, Alla
Gap Resolution is a software package that was developed to improve Newbler genome assemblies by automating the closure of sequence gaps caused by repetitive regions in the DNA. This is done by performing the following steps: 1) Identify and distribute the data for each gap into sub-projects. 2) Assemble the data associated with each sub-project using a secondary assembler, such as Newbler or PGA. 3) Determine if any gaps are closed after reassembly, and either design fakes (consensus of closed gap) for those that closed or lab experiments for those that require additional data. The software requires as input a genome assembly produced by the Newbler assembler provided by Roche and 454 data containing paired-end reads.
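The three-step loop described above can be sketched in miniature; a toy overlap-based joiner stands in for the Newbler/PGA secondary assembler, and all names and data below are illustrative:

```python
# Hedged sketch of the gap-closure loop: one sub-project per gap, reassemble,
# then either record a "fake" (consensus of a closed gap) or queue a lab
# experiment. The toy assembler is a stand-in, not the real Newbler/PGA.

def resolve_gaps(gaps, reassemble):
    """gaps maps a gap id to its reads; returns (fakes, experiments)."""
    fakes, experiments = {}, []
    for gap_id, reads in gaps.items():   # step 1: one sub-project per gap
        consensus = reassemble(reads)    # step 2: secondary assembly
        if consensus is not None:        # step 3a: closed -> design a fake
            fakes[gap_id] = consensus
        else:                            # step 3b: open -> needs more data
            experiments.append(gap_id)
    return fakes, experiments

def toy_assembler(reads):
    """'Close' a gap when successive reads overlap end-to-start."""
    joined = reads[0]
    for r in reads[1:]:
        for k in range(min(len(joined), len(r)), 0, -1):
            if joined.endswith(r[:k]):
                joined += r[k:]
                break
        else:
            return None                  # no overlap found: gap stays open
    return joined

gaps = {"gap1": ["ACGTAC", "TACGGA"], "gap2": ["AAAA", "CCCC"]}
fakes, todo = resolve_gaps(gaps, toy_assembler)
print(fakes, todo)
```

Here "gap1" closes into a single consensus while "gap2" is routed to the experiment queue, mirroring the fakes-versus-lab-experiments branch in step 3.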
Aban, Inmaculada B.; Wolfe, Gil I.; Cutter, Gary R.; Kaminski, Henry J.; Jaretzki, Alfred; Minisman, Greg; Conwit, Robin; Newsom-Davis, John
2008-01-01
We present our experience planning and launching a multinational, NIH/NINDS-funded study of thymectomy in myasthenia gravis. We highlight the additional steps required for international sites and analyze and contrast the time investment required to bring U.S. and non-U.S. sites into full regulatory compliance. Results show the mean time for non-U.S. centers to achieve regulatory approval was significantly longer (mean 13.4 ± 0.96 months) than for U.S. sites (9.67 ± 0.74 months; p = 0.003, t-test). The delay for non-U.S. sites was mainly attributable to Federalwide Assurance certification and State Department clearance. PMID:18675464
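The group comparison above can be approximately checked from the summary statistics alone; this sketch recomputes a Welch-style t statistic from the quoted means and standard errors (group sizes are not given in the abstract, so the p-value itself is not recomputed):

```python
import math

# Hedged sketch: t statistic from the mean ± SEM values quoted above.
mean_intl, sem_intl = 13.4, 0.96   # non-U.S. sites, months to approval
mean_us,   sem_us   = 9.67, 0.74   # U.S. sites

# Welch-style t from the standard errors of the two means
t = (mean_intl - mean_us) / math.sqrt(sem_intl**2 + sem_us**2)
print(f"t = {t:.2f}")              # ~3.1, consistent with the reported p = 0.003
```

A t value near 3 with moderate degrees of freedom sits in the p ≈ 0.003 range, so the recomputed statistic is consistent with the reported result.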
DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR THE BENCH STEAM REFORMER TEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
BANNING DL
2010-08-03
This document describes the data quality objectives to select archived samples located at the 222-S Laboratory for Fluid Bed Steam Reformer testing. The type, quantity and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluid bed steam reformer (FBSR). A determination of the adequacy of the FBSR process to treat Hanford tank waste is required. The initial step in determining the adequacy of the FBSR process is to select archived waste samples from the 222-S Laboratory that will be used to test the FBSR process. Analyses of the selected samples will be required to confirm the samples meet the testing criteria.
Enzymatic routes for the synthesis of ursodeoxycholic acid.
Eggert, Thorsten; Bakonyi, Daniel; Hummel, Werner
2014-12-10
Ursodeoxycholic acid, a secondary bile acid, is used as a drug for the treatment of various liver diseases; the optimal dose lies in the range of 8-10 mg/kg/day. For industrial syntheses, the structural complexity of this bile acid requires the use of an appropriate starting material as well as the application of regio- and enantio-selective enzymes for its derivatization. Most strategies for the synthesis start from cholic acid or chenodeoxycholic acid. The latter requires the conversion of the hydroxyl group at C-7 from the α- into the β-position in order to obtain ursodeoxycholic acid. Cholic acid, on the other hand, requires not only the same epimerization reaction at C-7 but also the removal of the hydroxyl group at C-12. There are several bacterial regio- and enantio-selective hydroxysteroid dehydrogenases (HSDHs) that carry out the desired reactions, for example 7α-HSDHs from strains of Clostridium, Bacteroides or Xanthomonas, 7β-HSDHs from Clostridium, Collinsella, or Ruminococcus, or 12α-HSDH from Clostridium or from Eggerthella. However, all these bioconversion reactions need additional steps for the regeneration of the coenzymes. Selected multi-step reaction systems for the synthesis of ursodeoxycholic acid are presented in this review. Copyright © 2014 Elsevier B.V. All rights reserved.
Utilizing collagen membranes for guided tissue regeneration-based root coverage.
Wang, Hom-Lay; Modarressi, Marmar; Fu, Jia-Hui
2012-06-01
Gingival recession is a common clinical problem that can result in hypersensitivity, pain, root caries and esthetic concerns. Conventional soft tissue procedures for root coverage require an additional surgical site, thereby causing additional trauma and donor site morbidity. In addition, the grafted tissues heal by repair, with formation of long junctional epithelium with some connective tissue attachment. Guided tissue regeneration-based root coverage was thus developed in an attempt to overcome these limitations while providing comparable clinical results. This paper addresses the biologic foundation of guided tissue regeneration-based root coverage, and describes the indications and contraindications for this technique, as well as the factors that influence outcomes. The step-by-step clinical techniques utilizing collagen membranes are also described. In comparison with conventional soft tissue procedures, the benefits of guided tissue regeneration-based root coverage procedures include new attachment formation, elimination of donor site morbidity, less chair-time, and unlimited availability and uniform thickness of the product. Collagen membranes, in particular, benefit from product biocompatibility with the host, while promoting chemotaxis, hemostasis, and exchange of gas and nutrients. Such characteristics lead to better wound healing by promoting primary wound coverage, angiogenesis, space creation and maintenance, and clot stability. In conclusion, collagen membranes are a reliable alternative for use in root coverage procedures. © 2012 John Wiley & Sons A/S.
NASA Astrophysics Data System (ADS)
Chen, Bing-Hong; Chuang, Shang-I.; Duh, Jenq-Gong
2016-11-01
Using spatial and interfacial control, the micro-sized silicon waste from wafer slurry can be turned into a valuable green resource for silicon-based anodes in lithium ion batteries. Through step-by-step spatial and interfacial control of the electrode, the cyclability of the recycled waste is raised from its originally poor retention to a promising level. In the stage of spatial control, electrode stabilizers of active, inactive and conductive additives were mixed into slurries to maintain the architecture and conductivity of the electrode. In addition, a fusion electrode modification of interfacial control combines an electrolyte additive with the technique of double-plasma enhanced carbon shield (D-PECS) to convert the chemical bond states and to alter the formation of solid electrolyte interphases (SEIs) in the first cycle. Depth profiles of the chemical composition from the external into the internal electrode illustrate that the fusion electrode modification not only forms a boundary that balances the interface between internal and external electrodes but also stabilizes SEI formation and mitigates the expansion of the micro-sized electrode. Through these approaches, the performance of the micro-sized Si waste electrode can be boosted from serious capacity degradation to promising retention (200 cycles, 1100 mAh/g), better meeting the requirements of facile and cost-effective industrial production.
ERIC Educational Resources Information Center
Stille, J. K.
1981-01-01
Following a comparison of chain-growth and step-growth polymerization, focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)
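The "requirements for high molecular weight" mentioned above follow from the Carothers equation for step-growth polymerization; this is the standard textbook relation, sketched here rather than taken from the cited resource:

```python
# Hedged sketch: the Carothers equation for step-growth polymerization,
# X_n = 1 / (1 - p), where p is the fractional conversion of functional
# groups and X_n the number-average degree of polymerization.

def degree_of_polymerization(p):
    """Number-average degree of polymerization at conversion p (0 <= p < 1)."""
    return 1.0 / (1.0 - p)

# High polymer demands near-quantitative conversion:
for p in (0.90, 0.99, 0.999):
    print(f"p = {p:.3f}  ->  X_n = {degree_of_polymerization(p):.0f}")
```

The steep growth of X_n only as p approaches 1 is why step-growth syntheses require extremely clean, high-conversion reactions, in contrast to chain-growth polymerization where long chains form even at low monomer conversion.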
Mitchell, Laura E; Weinberg, Clarice R
2005-10-01
Diseases that develop during gestation may be influenced by the genotype of the mother and the inherited genotype of the embryo/fetus. However, given the correlation between maternal and offspring genotypes, differentiating between inherited and maternal genetic effects is not straightforward. The two-step transmission disequilibrium test was the first, family-based test proposed for the purpose of differentiating between maternal and offspring genetic effects. However, this approach, which requires data from "pents" comprising an affected child, mother, father, and maternal grandparents, provides biased tests for maternal genetic effects when the offspring genotype is associated with disease. An alternative approach based on transmissions from grandparents provides unbiased tests for maternal and offspring genetic effects but requires genotype information for paternal grandparents in addition to pents. The authors have developed two additional, pent-based approaches for the evaluation of maternal and offspring genetic effects. One approach requires the assumption of genetic mating type symmetry (pent-1), whereas the other does not (pent-2). Simulation studies demonstrate that both of these approaches provide valid estimation and testing for offspring and maternal genotypic effects. In addition, the power of the pent-1 approach is comparable with that of the approach based on data using all four grandparents.
Does It Really Matter Where You Look When Walking on Stairs? Insights from a Dual-Task Study
Miyasike-daSilva, Veronica; McIlroy, William E.
2012-01-01
Although the visual system is known to provide relevant information to guide stair locomotion, there is less understanding of the specific contributions of foveal and peripheral visual field information. The present study investigated the specific role of foveal vision during stair locomotion and ground-stairs transitions by using a dual-task paradigm to influence the ability to rely on foveal vision. Fifteen healthy adults (26.9±3.3 years; 8 females) ascended a 7-step staircase under four conditions: no secondary tasks (CONTROL); gaze fixation on a fixed target located at the end of the pathway (TARGET); visual reaction time task (VRT); and auditory reaction time task (ART). Gaze fixations towards stair features were significantly reduced in TARGET and VRT compared to CONTROL and ART. Despite the reduced fixations, participants were able to successfully ascend stairs and rarely used the handrail. Step time was increased during VRT compared to CONTROL in most stair steps. Navigating on the transition steps did not require more gaze fixations than the middle steps. However, reaction time tended to increase during locomotion on transitions suggesting additional executive demands during this phase. These findings suggest that foveal vision may not be an essential source of visual information regarding stair features to guide stair walking, despite the unique control challenges at transition phases as highlighted by phase-specific challenges in dual-tasking. Instead, the tendency to look at the steps in usual conditions likely provides a stable reference frame for extraction of visual information regarding step features from the entire visual field. PMID:22970297
Hydraulic Design of Stepped Spillways Workshop
USDA-ARS?s Scientific Manuscript database
Stepped chutes and spillways are commonly used for routing discharges during flood events. In addition, stepped chutes are used for overtopping protection of earthen embankments. Stepped spillways provide significant energy dissipation due to their stepped features; as a result, the stilling basin as...
Pedometer determined physical activity tracks in African American adults: the Jackson Heart Study.
Newton, Robert L; Han, Hongmei; Dubbert, Patricia M; Johnson, William D; Hickson, DeMarc A; Ainsworth, Barbara; Carithers, Teresa; Taylor, Herman; Wyatt, Sharon; Tudor-Locke, Catrine
2012-04-18
This study investigated the number of pedometer assessment occasions required to establish habitual physical activity (PA) in African American adults. African American adults (mean age 59.9 ± 0.60 years; 59% female) enrolled in the Diet and Physical Activity Substudy of the Jackson Heart Study wore Yamax pedometers during 3-day monitoring periods, assessed on two to three distinct occasions, each separated by approximately one month. The stability of pedometer-measured PA was described as differences in mean steps/day across time, as intraclass correlation coefficients (ICC) by sex, age, and body mass index (BMI) category, and as the percent of participants changing steps/day quartiles across time. Valid data were obtained for 270 participants on either two or three different assessment occasions. Mean steps/day were not significantly different across assessment occasions (p values > 0.456). The overall ICCs for steps/day assessed on either two or three occasions were 0.57 and 0.76, respectively. In addition, 85% (two assessment occasions) and 76% (three assessment occasions) of all participants remained in the same steps/day quartile or changed one quartile over time. The current study shows that an overall mean steps/day estimate based on a 3-day monitoring period did not differ significantly over 4-6 months. The findings were robust to differences in sex, age, and BMI categories. A single 3-day monitoring period is sufficient to capture habitual physical activity in African American adults.
Cingi Steps for preoperative computer-assisted image editing before reduction rhinoplasty.
Cingi, Can Cemal; Cingi, Cemal; Bayar Muluk, Nuray
2014-04-01
The aim of this work is to provide a stepwise systematic guide for a preoperative photo-editing procedure for rhinoplasty cases involving the cooperation of a graphic artist and a surgeon. One hundred female subjects who planned to undergo a reduction rhinoplasty operation were included in this study. The Cingi Steps for Preoperative Computer Imaging (CS-PCI) program, a stepwise systematic guide for image editing using Adobe PhotoShop's "liquify" effect, was applied to the rhinoplasty candidates. The stages of CS-PCI are as follows: (1) lowering the hump; (2) shortening the nose; (3) adjusting the tip projection; (4) perfecting the nasal dorsum; (5) creating a supratip break; and (6) exaggerating the tip projection and/or dorsal slope. Performing the Cingi Steps allows the patient to see what will happen during the operation and observe the final appearance of his or her nose. After the application of the described steps, 71 patients (71%) accepted step 4, and 21 (21%) of them accepted step 5. Only 10 patients (10%) wanted to make additional changes to their operation plans. The main benefits of using this method are that it decreases the time needed by the surgeon to perform a graphic analysis and reduces the time required for the patient to reach a decision about the procedure. It is an easy and reliable method that will provide improved physician-patient communication, increased patient confidence, and enhanced surgical planning while limiting the time needed for planning. © 2014 ARS-AAOA, LLC.
Surface Modified Particles By Multi-Step Addition And Process For The Preparation Thereof
Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew
2006-01-17
The present invention relates to a new class of surface modified particles and to a multi-step surface modification process for the preparation of the same. The multi-step surface functionalization process involves two or more reactions to produce particles that are compatible with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through organic linking groups.
Chokeshai-u-saha, Kaj; Buranapraditkun, Supranee; Jacquet, Alain; Nguyen, Catherine; Ruxrungtham, Kiat
2012-09-01
To study the role of human naïve B cells in antigen presentation and stimulation of naïve CD4+ T cells, a suitable method to reproducibly isolate sufficient naïve B cells is required. To improve the purity of isolated naïve B cells obtained from a conventional one-step magnetic bead method, we added a rosetting step to enrich total B cell isolates from human whole blood samples prior to negative cell sorting by magnetic beads. The acquired naïve B cells were analyzed for phenotypes and for their role in Staphylococcal enterotoxin B (SEB) presentation to naïve CD4+ T cells. The mean (SD) naïve B cell (CD19+/CD27-) purity obtained from this two-step method compared with the one-step method was 97% (1.0) versus 90% (1.2), respectively. This two-step method can be used with a sample of whole blood as small as 10 ml. The isolated naïve B cells were phenotypically at a resting state and were able to prime naïve CD4+ T cell activation by Staphylococcal enterotoxin B (SEB) presentation. This two-step non-flow cytometry-based approach improved the purity of isolated naïve B cells compared with the conventional one-step magnetic bead method. It also worked well with a small blood volume. In addition, this study showed that the isolated naïve B cells can present a super-antigen, SEB, to activate naïve CD4+ T cells. These methods may thus be useful for further in vitro characterization of human naïve B cells and their roles as antigen presenting cells in various diseases.
Ito, M; Clark, C W; Mortimore, M; Goh, J B; Martin, S F
2001-08-22
A linear synthesis of the indole alkaloid (+/-)-akuammicine (2) was completed by a novel sequence of reactions requiring only 10 steps from commercially available starting materials. The approach features a tandem vinylogous Mannich addition and an intramolecular hetero Diels-Alder reaction to rapidly assemble the pentacyclic heteroyohimboid derivative 8 from the readily available hydrocarboline 6. Oxidation of the E ring of 8 gave the lactone 9 that was converted into deformylgeissoschizine (11). The subsequent elaboration of 11 into 2 was effected by a biomimetically patterned transformation that involved sequential oxidation and base-induced skeletal reorganization. A variation of these tactics was then applied to the synthesis of the C(18) hydroxylated akuammicine derivative 36. Because 36 had previously been converted into strychnine (1) in four steps, its preparation constitutes a concise, formal synthesis of this complex alkaloid.
Airspace Operations Demo Functional Requirements Matrix
NASA Technical Reports Server (NTRS)
2005-01-01
The Flight IPT assessed the reasonableness of demonstrating each of the Access 5 Step 1 functional requirements. The functional requirements listed in this matrix are from the September 2005 release of the Access 5 Functional Requirements Document. The demonstration mission considered was a notional Western US mission (WUS). The conclusion of the assessment is that 90% of the Access 5 Step 1 functional requirements can be demonstrated using the notional Western US mission.
Implementation of Competency-Based Pharmacy Education (CBPE)
Koster, Andries; Schalekamp, Tom; Meijerman, Irma
2017-01-01
Implementation of competency-based pharmacy education (CBPE) is a time-consuming, complicated process, which requires agreement on the tasks of a pharmacist, commitment, institutional stability, and a goal-directed developmental perspective of all stakeholders involved. In this article the main steps in the development of a fully-developed competency-based pharmacy curriculum (bachelor, master) are described and tips are given for a successful implementation. After the choice for entering into CBPE is made and a competency framework is adopted (step 1), intended learning outcomes are defined (step 2), followed by analyzing the required developmental trajectory (step 3) and the selection of appropriate assessment methods (step 4). Designing the teaching-learning environment involves the selection of learning activities, student experiences, and instructional methods (step 5). Finally, an iterative process of evaluation and adjustment of individual courses, and the curriculum as a whole, is entered (step 6). Successful implementation of CBPE requires a system of effective quality management and continuous professional development as a teacher. In this article suggestions for the organization of CBPE and references to more detailed literature are given, hoping to facilitate the implementation of CBPE. PMID:28970422
Zhao, Fanglong; Zhang, Chuanbo; Yin, Jing; Shen, Yueqi; Lu, Wenyu
2015-08-01
In this paper, a two-step resin adsorption technology for spinosad production and separation was investigated: in the first step, resin was added to the fermentor early in the cultivation period to reduce the product concentration in the broth in real time; in the second step, resin was added after fermentation to adsorb and extract the spinosad. On this basis, a two-step macroporous resin adsorption-membrane separation process for spinosad fermentation, separation, and purification was established. Spinosad concentration in a 5-L fermentor increased by 14.45% after adding 50 g/L of macroporous resin at the beginning of fermentation. The established two-step macroporous resin adsorption-membrane separation process achieved 95.43% purity and 87% yield for spinosad, both higher than the 93.23% purity and 79.15% yield obtained by conventional crystallization of spinosad from the aqueous phase. The two-step macroporous resin adsorption method not only couples spinosad fermentation with separation but also increases spinosad productivity. In addition, the two-step macroporous resin adsorption-membrane separation process performs better in spinosad yield and purity.
Doe, John E.; Lander, Deborah R.; Doerrer, Nancy G.; Heard, Nina; Hines, Ronald N.; Lowit, Anna B.; Pastoor, Timothy; Phillips, Richard D.; Sargent, Dana; Sherman, James H.; Young Tanir, Jennifer; Embry, Michelle R.
2016-01-01
Abstract The HESI-coordinated RISK21 roadmap and matrix are tools that provide a transparent method to compare exposure and toxicity information and assess whether additional refinement is required to obtain the necessary precision level for a decision regarding safety. A case study of the use of a pyrethroid, “pseudomethrin,” in bed netting to control malaria is presented to demonstrate the application of the roadmap and matrix. The evaluation began with a problem formulation step. The first assessment utilized existing information pertaining to the use and the class of chemistry. At each stage of the step-wise approach, the precision of the toxicity and exposure estimates were refined as necessary by obtaining key data which enabled a decision on safety to be made efficiently and with confidence. The evaluation demonstrated the concept of using existing information within the RISK21 matrix to drive the generation of additional data using a value-of-information approach. The use of the matrix highlighted whether exposure or toxicity required further investigation and emphasized the need to address the default uncertainty factor of 100 at the highest tier of the evaluation. It also showed how new methodology such as the use of in vitro studies and assays could be used to answer the specific questions which arise through the use of the matrix. The matrix also serves as a useful means to communicate progress to stakeholders during an assessment of chemical use. PMID:26517449
Material characterization for morphing purposes in order to match flight requirements
NASA Astrophysics Data System (ADS)
Geier, Sebastian; Kintscher, Markus; Heintze, Olaf; Wierach, Peter; Monner, Hans-Peter; Wiedemann, Martin
2012-04-01
Natural laminar flow is one of the challenging aims of current aerospace research. The main factors driving the aerodynamic transition from laminar into turbulent flow at the airfoil structure are the aerodynamic shape and the surface roughness. The Institute of Composite Structures and Adaptive Systems at the German Aerospace Center in Braunschweig works on the optimization of the aerodynamically loaded structure of future aircraft in order to increase their efficiency. Providing wing structures suited for natural laminar flow is a step towards this goal. Regarding natural laminar flow, the structural design of the leading edge of a wing is of special interest. An approach for a gapless leading edge was developed to provide a gap- and step-free high quality surface suited for natural laminar flow and to reduce slat noise. In a national project, the first generation of the 3D full-scale demonstrator was successfully tested in 2010. The prototype incorporates several new technologies, raising the challenge of meeting the long and demanding list of airworthiness requirements simultaneously. The developed composite structure was therefore intensively tested with a view to further modifications to meet requirements for abrasion, impact, and de-icing. The structure presented earlier consists entirely of glass-fiber prepreg (GFRP prepreg). New functions required the addition of a new material mix, which has to fit into the manufacturing chain of the composite structure. In addition, the hybrid composites have to withstand high loadings, high bending-induced strains (1%), and environmentally driven aging. Moreover, hot-wet cycling tests are carried out for the basic GFRP structure in order to simulate the long-term behavior of the material under extreme conditions. The presented paper shows results of four-point bending tests of the most critical section of the morphing leading edge device. Different composite hybrids are built up and processed. An experimentally based trend towards an optimized material design will be shown.
Watanabe, Tatsunori; Tsutou, Kotaro; Saito, Kotaro; Ishida, Kazuto; Tanabe, Shigeo; Nojima, Ippei
2016-11-01
Choice reaction requires response conflict resolution, and the resolution processes that occur during a choice stepping reaction task undertaken in a standing position, which requires maintenance of balance, may differ from those occurring during a choice reaction task performed in a seated position. The study purpose was to investigate the resolution processes during a choice stepping reaction task at the cortical level using electroencephalography and compare the results with a control task involving ankle dorsiflexion responses. Twelve young adults either stepped forward or dorsiflexed the ankle in response to a visual imperative stimulus presented on a computer screen. We used the Simon task and examined the error-related negativity (ERN) that follows an incorrect response and the correct-response negativity (CRN) that follows a correct response. Error was defined as an incorrect initial weight transfer for the stepping task and as an incorrect initial tibialis anterior activation for the control task. Results revealed that ERN and CRN amplitudes were similar in size for the stepping task, whereas the amplitude of ERN was larger than that of CRN for the control task. The ERN amplitude was also larger in the stepping task than the control task. These observations suggest that a choice stepping reaction task involves a strategy emphasizing post-response conflict and general performance monitoring of actual and required responses, and also requires greater cognitive load than a choice dorsiflexion reaction. The response conflict resolution processes appear to be different for stepping tasks and reaction tasks performed in a seated position.
Postural adjustment errors during lateral step initiation in older and younger adults
Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.
2016-01-01
The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25595953
Aparna, Deshpande; Kumar, Sunil; Kamalkumar, Shukla
2017-10-27
To determine the percentage of patients with necrotizing pancreatitis (NP) requiring intervention and the types of interventions performed, and to compare outcomes of patients managed with step-up necrosectomy with those of direct necrosectomy. Operative mortality, overall mortality, morbidity, and overall length of stay were determined. After institutional ethics committee clearance and waiver of consent, records of patients with pancreatitis were reviewed. After excluding patients per the criteria, epidemiologic and clinical data of patients with NP were noted. The treatment protocol was reviewed. Data from patients managed with the step-up approach were compared with data from those who were not. A total of 41 interventions were required in 39% of patients. About 60% of interventions targeted the pancreatic necrosis, while the rest addressed complications of the necrosis. Image-guided percutaneous catheter drainage was performed in 9 patients with infected necrosis, all of whom required further necrosectomy, and in 3 patients with sterile necrosis. Direct retroperitoneal or anterior necrosectomy was performed in 15 patients. The average time to first intervention was 19.6 d in the non-step-up group (range 11-36) vs 18.22 d in the step-up group (range 13-25). The average hospital stay was 33.3 d in the non-step-up group vs 38 d in the step-up group. Mortality in the step-up group was 0% (0/9) vs 13% (2/15) in the non-step-up group. Overall mortality was 10.3%, while post-operative mortality was 8.3%. Average hospital stay was 22.25 d. Early conservative management plays an important role in the management of NP. In patients who require intervention, the approach used and the timing of intervention should be based upon the clinical condition and the local expertise available. Delaying intervention, and using minimally invasive means when intervention is necessary, is desirable. The step-up approach should be used whenever possible. Even when classical retroperitoneal catheter drainage is not feasible, an attempt should be made to follow the principles of the step-up technique to buy time. In our series, the outcomes of patients in the step-up group were comparable to those in the non-step-up group. Interventions for bowel diversion, bypass, and hemorrhage control should be done at the appropriate times.
NASA Technical Reports Server (NTRS)
Liebowitz, J.
1986-01-01
The development of an expert system prototype for software functional requirement determination for NASA Goddard's Command Management System, as part of its process of transforming general requests into specific near-earth satellite commands, is described. The present knowledge base was formulated through interactions with domain experts, and was then linked to the existing Knowledge Engineering Systems (KES) expert system application generator. Steps in the knowledge-base development include problem-oriented attribute hierarchy development, knowledge management approach determination, and knowledge base encoding. The KES Parser and Inspector, in addition to backcasting and analogical mapping, were used to validate the expert-system-derived requirements for one of the major functions of a spacecraft, the Solar Maximum Mission. Knowledge refinement, evaluation, and implementation procedures of the expert system were then accomplished.
Flue gas conditioning for improved particle collection in electrostatic precipitators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durham, M.D.
1992-04-27
The purpose of this research program is to identify and evaluate a variety of additives capable of increasing particle cohesion which could be used for improving collection efficiency in an ESP. A three-phase screening process will be used to provide the evaluation of many additives in a logical and cost-effective manner. The three-step approach involves the following experimental setups: 1. Provide a preliminary screening in the laboratory by measuring the effects of various conditioning agents on reentrainment of flyash particles in an electric field operating at simulated flue gas conditions. 2. Evaluate the successful additives using a 100 acfm bench-scale ESP operating on actual flue gas. 3. Obtain the data required for scaling up the technology by testing the two or three most promising conditioning agents at the pilot scale.
Reconnaissance and Autonomy for Small Robots (RASR) team: MAGIC 2010 challenge
NASA Astrophysics Data System (ADS)
Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark; Corley, Katrina
2012-06-01
The Reconnaissance and Autonomy for Small Robots (RASR) team developed a system for the coordination of groups of unmanned ground vehicles (UGVs) that can execute a variety of military relevant missions in dynamic urban environments. Historically, UGV operations have been primarily performed via tele-operation, requiring at least one dedicated operator per robot, and requiring substantial real-time bandwidth to accomplish those missions. Our team goal was to develop a system that can provide long-term value to the war-fighter, utilizing MAGIC-2010 as a stepping stone. To that end, we self-imposed a set of constraints that would force us to develop technology that could readily be used by the military in the near term: • Use a relevant (deployed) platform • Use low-cost, reliable sensors • Develop an expandable and modular control system with innovative software algorithms to minimize the computing footprint required • Minimize required communications bandwidth and handle communication losses • Minimize additional power requirements to maximize battery life and mission duration
Group sequential designs for stepped-wedge cluster randomised trials
Grayling, Michael J; Wason, James MS; Mander, Adrian P
2017-01-01
Background/Aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Methods: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial’s type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. Conclusion: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. 
In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial. PMID:28653550
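The error spending approach referenced above allocates the overall type-I error rate across interim analyses via a spending function. A minimal sketch of an O'Brien-Fleming-type spending function follows; this is an illustrative assumption, not the trial's code, and computing the actual stopping boundaries additionally requires multivariate normal probabilities over the joint distribution of the sequential test statistics:

```python
from statistics import NormalDist

def obf_spending(t, alpha=0.05):
    """O'Brien-Fleming-type error spending function:
    cumulative two-sided type-I error spent at information fraction t,
    alpha(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t)))."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return 2 * (1 - NormalDist().cdf(z / t ** 0.5))

# Cumulative alpha spent at three equally spaced analyses
for frac in (1 / 3, 2 / 3, 1.0):
    print(f"t = {frac:.2f}  cumulative alpha spent = {obf_spending(frac):.4f}")
```

The function is increasing in t and equals the full alpha at t = 1; its conservatism at early looks (very little error spent) is what makes early stopping possible without inflating the overall error rate.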
Masood, Athar; Stark, Ken D; Salem, Norman
2005-10-01
Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with smaller internal diameters, thinner stationary phase films, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method with the elimination of the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and in an open-tube format using multiple reagent additions. The simplified methods produced results that were quantitatively similar and with similar coefficients of variation as compared with the original Lepage and Roy method. The present streamlined methodology is suitable for the direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and make possible population studies.
A method for real-time generation of augmented reality work instructions via expert movements
NASA Astrophysics Data System (ADS)
Bhattacharya, Bhaskar; Winer, Eliot
2015-03-01
Augmented Reality (AR) offers tremendous potential for a wide range of fields including entertainment, medicine, and engineering. AR allows digital models to be integrated with a real scene (typically viewed through a video camera) to provide useful information in a variety of contexts. The difficulty in authoring and modifying scenes is one of the biggest obstacles to widespread adoption of AR. 3D models must be created, textured, oriented, and positioned to create the complex overlays viewed by a user. This often requires using multiple software packages in addition to performing model format conversions. In this paper, a new authoring tool is presented which uses a novel method to capture product assembly steps performed by a user with a depth+RGB camera. Through a combination of computer vision and image processing techniques, each individual step is decomposed into objects and actions. The objects are matched to those in a predetermined geometry library and the actions turned into animated assembly steps. The subsequent instruction set is then generated with minimal user input. A proof of concept is presented to establish the method's viability.
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
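As a small illustration of why robust nonlinear solvers matter for stiff systems (a generic sketch, not the ALE3D implementation), the code below advances a stiff scalar ODE with the second-order trapezoidal rule, solving the per-step nonlinear equation with Newton's method. A plain fixed-point iteration on the same equation would diverge here, since its contraction factor |hλ/2| is far above 1.

```python
import math

LAM = -1.0e4                       # stiffness parameter (illustrative)

def f(t, y):
    # stiff test problem with exact solution y(t) = cos(t) for y(0) = 1
    return LAM * (y - math.cos(t)) - math.sin(t)

def newton_solve(F, x0, tol=1e-12, max_iter=50):
    """Scalar Newton iteration with a finite-difference derivative."""
    x = x0
    for _ in range(max_iter):
        Fx = F(x)
        if abs(Fx) < tol:
            break
        dF = (F(x + 1e-7) - Fx) / 1e-7
        x -= Fx / dF
    return x

def trapezoidal(y0, t0, t1, n):
    """Implicit trapezoidal rule; each step requires one nonlinear solve."""
    h, t, y = (t1 - t0) / n, t0, y0
    for _ in range(n):
        # residual of y_new = y + h/2 * (f(t, y) + f(t+h, y_new))
        F = lambda z, t=t, y=y: z - y - 0.5 * h * (f(t, y) + f(t + h, z))
        y = newton_solve(F, y)
        t += h
    return y
```

Even with h·|λ| = 100, the implicit step remains stable and accurate, which is the property that lets implicit integrators take far larger time steps than explicit ones on stiff problems.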
A bioreactor system for the nitrogen loop in a Controlled Ecological Life Support System
NASA Technical Reports Server (NTRS)
Saulmon, M. M.; Reardon, K. F.; Sadeh, W. Z.
1996-01-01
As space missions become longer in duration, the need to recycle waste into useful compounds rises dramatically. This problem can be addressed by the development of Controlled Ecological Life Support Systems (CELSS) (i.e., Engineered Closed/Controlled Eco-Systems (ECCES)), consisting of human and plant modules. One of the waste streams leaving the human module is urine. In addition to the reclamation of water from urine, recovery of the nitrogen is important because it is an essential nutrient for the plant module. A 3-step biological process for the recycling of nitrogenous waste (urea) is proposed. A packed-bed bioreactor system for this purpose was modeled, and the issues of reaction step segregation, reactor type and volume, support particle size, and pressure drop were addressed. Based on minimization of volume, a bioreactor system consisting of a plug flow immobilized urease reactor, a completely mixed flow immobilized cell reactor to convert ammonia to nitrite, and a plug flow immobilized cell reactor to produce nitrate from nitrite is recommended. It is apparent that this 3-step bioprocess meets the requirements for space applications.
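The volume-minimization argument behind the choice of plug-flow versus completely mixed reactors can be illustrated with the textbook sizing formulas for first-order kinetics (a generic sketch with invented numbers, not the authors' urease/nitrifier model): for the same flow Q, rate constant k, and conversion X, a plug-flow reactor needs V = (Q/k)·ln(1/(1−X)) while a completely mixed reactor needs V = (Q/k)·X/(1−X).

```python
import math

def pfr_volume(Q, k, X):
    """Plug-flow reactor volume for first-order kinetics r = k*C at conversion X."""
    return (Q / k) * math.log(1.0 / (1.0 - X))

def cstr_volume(Q, k, X):
    """Completely mixed (CSTR) reactor volume for the same duty."""
    return (Q / k) * X / (1.0 - X)

# illustrative numbers: Q = 1 L/h, k = 0.5 1/h, 90% conversion
v_pfr = pfr_volume(1.0, 0.5, 0.9)    # ~4.6 L
v_cstr = cstr_volume(1.0, 0.5, 0.9)  # 18 L
```

At high conversion the plug-flow advantage grows rapidly, consistent with the recommendation of plug-flow beds for the urea-hydrolysis and nitrite-oxidation steps.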
Bērziņš, Agris; Actiņš, Andris
2014-06-01
The dehydration kinetics of mildronate dihydrate [3-(1,1,1-trimethylhydrazin-1-ium-2-yl)propionate dihydrate] was analyzed in isothermal and nonisothermal modes. The particle size, sample preparation and storage, sample weight, nitrogen flow rate, relative humidity, and sample history were varied in order to evaluate the effect of these factors and to more accurately interpret the data obtained from such analysis. It was determined that comparable kinetic parameters can be obtained in both isothermal and nonisothermal modes. However, dehydration activation energy values obtained in nonisothermal mode varied with the degree of conversion because of the different rate-limiting step energies at higher temperature. Moreover, carrying out experiments in this mode required consideration of additional experimental complications. Our study of the effects of the different sample and experimental factors revealed changes in the energy of the dehydration rate-limiting step and variable contributions from different rate-limiting steps, and clarified the dehydration mechanism. Procedures for convenient and fast determination of dehydration kinetic parameters were offered. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
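In the isothermal mode, kinetic parameters are typically obtained by fitting rate constants measured at several temperatures to the Arrhenius law. The sketch below uses invented, illustrative parameters (not the mildronate values) and recovers the activation energy from a least-squares fit of ln k versus 1/T.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_fit(temps_K, ks):
    """Least-squares fit of ln k = ln A - Ea/(R*T); returns (Ea, A)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in ks]
    n = float(len(xs))
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R, math.exp(ybar - slope * xbar)

# synthetic isothermal rate constants with Ea = 90 kJ/mol, A = 1e10 1/s
Ea_true, A_true = 90.0e3, 1.0e10
temps = [330.0, 340.0, 350.0, 360.0]
ks = [A_true * math.exp(-Ea_true / (R * T)) for T in temps]
Ea_fit, A_fit = arrhenius_fit(temps, ks)
```

With real data the same fit would show the scatter and conversion-degree dependence discussed in the abstract rather than an exact recovery.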
A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena
NASA Technical Reports Server (NTRS)
Zingg, David W.
1996-01-01
This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.
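The grid-points-per-wavelength comparisons in such reviews follow from the schemes' modified wavenumbers. As a small illustration (standard central differences only, not the full set of schemes reviewed), the sketch below evaluates the relative phase error of the second-order three-point and fourth-order five-point first-derivative operators at a given resolution.

```python
import math

def modified_wavenumber(kh, order):
    """Effective kh* for central differences applied to exp(ikx): the scheme
    differentiates the wave as if its wavenumber were kh*/h instead of k."""
    if order == 2:      # (u[j+1] - u[j-1]) / (2h)
        return math.sin(kh)
    if order == 4:      # (8*(u[j+1]-u[j-1]) - (u[j+2]-u[j-2])) / (12h)
        return (8.0 * math.sin(kh) - math.sin(2.0 * kh)) / 6.0
    raise ValueError("order must be 2 or 4")

def phase_error(points_per_wavelength, order):
    """Relative error in the numerical wavenumber at the given resolution."""
    kh = 2.0 * math.pi / points_per_wavelength
    return abs(modified_wavenumber(kh, order) - kh) / kh
```

At 20 points per wavelength the second-order scheme has a phase error of roughly 1.6%, while the fourth-order scheme is near 0.03%; over long propagation distances this gap translates directly into the very different grid densities the paper tabulates.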
Thermal design of the hard x-ray imager and the soft gamma-ray detector onboard ASTRO-H
NASA Astrophysics Data System (ADS)
Noda, Hirofumi; Nakazawa, Kazuhiro; Makishima, Kazuo; Iwata, Naoko; Ogawa, Hiroyuki; Ohta, Masayuki; Sato, Goro; Kawaharada, Madoka; Watanabe, Shin; Kokubun, Motohide; Takahashi, Tadayuki; Ohno, Masanori; Fukazawa, Yasushi; Tajima, Hiroyasu; Uchiyama, Hideki; Ito, Shuji; Fukuzawa, Keita
2014-07-01
The Hard X-ray Imager and the Soft Gamma-ray Detector, onboard the 6th Japanese X-ray satellite ASTRO-H, aim at unprecedentedly sensitive observations in the 5-80 keV and 40-600 keV bands, respectively. Because their main sensors are composed of a number of semiconductor devices, which need to be operated at a temperature of -20 to -15°C, heat generated in the sensors must be efficiently transported outwards by thermal conduction. For this purpose, we performed the thermal design in the following three steps. First, we added thermally conductive parts, namely copper poles and graphite sheets. Second, constructing a thermal mathematical model of the sensors, we estimated temperature distributions in thermal equilibrium. Since the model had rather large uncertainties in the contact thermal conductances, an accurate thermal dummy was constructed as our final step. Vacuum measurements with the dummy successfully reduced the conductance uncertainties. With these steps, we confirmed that our thermal design of the main sensors satisfies the temperature requirement.
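The sizing of such a conduction path can be sketched with a one-dimensional thermal-resistance model (purely illustrative numbers, not the ASTRO-H values): the sensor temperature is the heat-sink temperature plus the dissipated power times the sum of the conduction and contact resistances.

```python
def conduction_resistance(length_m, conductivity, area_m2):
    """R = L / (k*A) for a prismatic conductor, e.g. a copper pole."""
    return length_m / (conductivity * area_m2)

def sensor_temperature(t_sink_C, power_W, *resistances):
    """Steady-state sensor temperature through a series thermal network."""
    return t_sink_C + power_W * sum(resistances)

# illustrative: 0.2 m copper pole (k ~ 390 W/m/K, 1 cm^2 cross-section)
# in series with an assumed 2 K/W contact resistance, 1 W dissipation
r_pole = conduction_resistance(0.2, 390.0, 1.0e-4)   # ~5.1 K/W
t_sensor = sensor_temperature(-25.0, 1.0, r_pole, 2.0)
```

In this toy case the sensor sits near -18 °C, inside the -20 to -15 °C window; the abstract's point is that the contact-resistance term is the uncertain one, which is why the thermal dummy measurement was needed.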
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wemhoff, A P; Burnham, A K; Nichols III, A L
The reduction of the number of reactions in kinetic models for both the HMX beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to reduced calibration effort. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia Instrumented Thermal Ignition (SITI) and Scaled Thermal Explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on One-Dimensional Time to Explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model, yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with ODTX data than a traditional multi-step Arrhenius model, and can require up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with ODTX experimental data than parameters derived from Differential Scanning Calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% in chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multi-step Arrhenius approach, and by up to 11% when using a Prout-Tompkins cookoff model.
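The Prout-Tompkins form referred to above is the autocatalytic rate law dα/dt = k(T)·α^m·(1−α)^n. A minimal isothermal integration is sketched below with invented parameters (not the calibrated ALE3D values); for m = n = 1 the law has a closed-form solution, used here as a check.

```python
import math

R_GAS = 8.314  # J/(mol K)

def time_to_half(T, A=1.0e12, Ea=150.0e3, m=1.0, n=1.0, alpha0=1e-4, dt=1.0):
    """Explicit-Euler integration of the Prout-Tompkins rate law
    dalpha/dt = k * alpha**m * (1 - alpha)**n until alpha reaches 0.5.
    A small seed alpha0 is needed because alpha = 0 is a fixed point."""
    k = A * math.exp(-Ea / (R_GAS * T))
    alpha, t = alpha0, 0.0
    while alpha < 0.5:
        alpha += dt * k * alpha**m * (1.0 - alpha)**n
        t += dt
    return t

# for m = n = 1 the analytic half-conversion time is ln((1 - a0)/a0) / k
T = 500.0
k = 1.0e12 * math.exp(-150.0e3 / (R_GAS * T))
t_num = time_to_half(T)
t_ref = math.log((1.0 - 1e-4) / 1e-4) / k
```

The sigmoidal, self-accelerating conversion this produces is what distinguishes Prout-Tompkins kinetics from a simple first-order Arrhenius decomposition and underlies the induction-time behaviour seen in ODTX data.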
Dyed positive photoresist employing curcumin for notching control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renschler, C.L.; Lemen, E.K.; Rodriquez, J.L.
1989-01-01
A variety of dyes have been proposed as absorbers for photoresists. The nonbleachable absorbance incorporated in this way can result in a reduction in standing waves and/or reflective notching (nonuniform linewidths due to reflections off the substrate). In addition, it can provide increased visual contrast at the patterned resist inspection stage. A variation on the visual contrast improvement involves the use of a dye which fluoresces, allowing for more precise resist metrology. The deposition and growth processes used here result in large oxide and polycrystalline silicon steps with high aspect ratios. Processes such as LOCOS, which allow less severe topography, are inherently radiation soft and cannot be used in our fabrication process. The resulting steep sidewalls make metal coverage and etch significantly more difficult. Thus, the glass layer which isolates metal conductors from oxide steps is smoothed with either a thermal reflow or a plasma etch of a partially planarized coating to reduce the aspect ratio of the step. This results in metal coverage with a 45° angle at each step, which causes severe notching. This paper reports on an attempt to develop a resist which would eliminate this notching problem. Although the addition of unbleachable dye to photoresist for the control of notching is well established, most of the dyes used have one or more serious shortcomings, such as a poor match of the absorption spectrum with the exposure spectrum or low solubility in resist. Severe notching requires the use of a resist dye with optimized physical and spectroscopic properties to allow high optical absorbance to be achieved without the onset of other problems such as particulate formation or changes in thermal properties.
Complement Regulator Factor H Mediates a Two-step Uptake of Streptococcus pneumoniae by Human Cells*
Agarwal, Vaibhav; Asmat, Tauseef M.; Luo, Shanshan; Jensch, Inga; Zipfel, Peter F.; Hammerschmidt, Sven
2010-01-01
Streptococcus pneumoniae, a human pathogen, recruits the complement regulator Factor H to its bacterial cell surface. The bacterial PspC protein binds Factor H via short consensus repeats (SCR) 8–11 and SCR19–20. In this study, we define how bacterially bound Factor H promotes pneumococcal adherence to and uptake by epithelial cells or human polymorphonuclear leukocytes (PMNs) via a two-step process. First, pneumococcal adherence to epithelial cells was significantly reduced by heparin and dermatan sulfate. However, none of the glycosaminoglycans affected binding of Factor H to pneumococci. Adherence of pneumococci to human epithelial cells was inhibited by monoclonal antibodies recognizing SCR19–20 of Factor H, suggesting that the C-terminal glycosaminoglycan-binding region of Factor H mediates the contact between pneumococci and human cells. Blocking of the integrin CR3 receptor, i.e. CD11b and CD18, of PMNs or CR3-expressing epithelial cells significantly reduced the interaction of pneumococci with both cell types. Similarly, an additional CR3 ligand, Pra1, derived from Candida albicans, blocked the interaction of pneumococci with PMNs. Strikingly, Pra1 also inhibited pneumococcal uptake by lung epithelial cells, but not adherence. In addition, invasion of Factor H-coated pneumococci required the dynamics of host-cell actin microfilaments and was affected by inhibitors of protein-tyrosine kinases and phosphatidylinositol 3-kinase. In conclusion, pneumococcal entry into host cells via Factor H is based on a two-step mechanism. The first and initial contact of Factor H-coated pneumococci is mediated by glycosaminoglycans expressed on the surface of human cells, and the second step, pneumococcal uptake, is integrin-mediated and depends on host signaling molecules such as phosphatidylinositol 3-kinase. PMID:20504767
Code of Federal Regulations, 2011 CFR
2011-07-01
... accordance with movement requirements of high-voltage power centers and portable transformers (§ 75.812) and... transformer. A step-up transformer is a transformer that steps up the low or medium voltage to high voltage... supplying low or medium voltage to the step-up transformer must meet the applicable requirements of 30 CFR...
Code of Federal Regulations, 2013 CFR
2013-07-01
... accordance with movement requirements of high-voltage power centers and portable transformers (§ 75.812) and... transformer. A step-up transformer is a transformer that steps up the low or medium voltage to high voltage... supplying low or medium voltage to the step-up transformer must meet the applicable requirements of 30 CFR...
Code of Federal Regulations, 2012 CFR
2012-07-01
... accordance with movement requirements of high-voltage power centers and portable transformers (§ 75.812) and... transformer. A step-up transformer is a transformer that steps up the low or medium voltage to high voltage... supplying low or medium voltage to the step-up transformer must meet the applicable requirements of 30 CFR...
Code of Federal Regulations, 2010 CFR
2010-07-01
... accordance with movement requirements of high-voltage power centers and portable transformers (§ 75.812) and... transformer. A step-up transformer is a transformer that steps up the low or medium voltage to high voltage... supplying low or medium voltage to the step-up transformer must meet the applicable requirements of 30 CFR...
Code of Federal Regulations, 2014 CFR
2014-07-01
... accordance with movement requirements of high-voltage power centers and portable transformers (§ 75.812) and... transformer. A step-up transformer is a transformer that steps up the low or medium voltage to high voltage... supplying low or medium voltage to the step-up transformer must meet the applicable requirements of 30 CFR...
Lateral stepping for postural correction in Parkinson's disease.
King, Laurie A; Horak, Fay B
2008-03-01
To characterize the lateral stepping strategies for postural correction in patients with Parkinson's disease (PD) and the effect of their anti-parkinson medication. Observational study. Outpatient neuroscience laboratory. Thirteen participants with idiopathic PD in their on (PD on) and off (PD off) levodopa states and 14 healthy elderly controls. Movable platform with lateral translations of 12 cm at 14.6 cm/s ramp velocity. The incidence and characteristics of 3 postural strategies were observed: lateral side-step, crossover step, or no step. Corrective stepping was characterized by latency to step after perturbation onset, step velocity, step length, and presence of an anticipatory postural adjustment (APA). Additionally, percentages of trials resulting in falls were identified for each group. Whereas elderly control participants never fell, PD participants fell in 24% and 35% of trials in the on and off medication states, respectively. Both PD and control participants most often used a lateral side-step strategy; 70% (control), 67% (PD off), and 73% (PD on) of all trials, respectively. PD participants fell most often when using a crossover strategy (75% of all crossover trials) or no-step strategy (100% of all no-step trials). In the off medication state, PD participants' lateral stepping strategies were initiated later than controls' (370±37 ms vs 280±10 ms, P<.01), and steps were smaller (254±20 mm vs 357±17 mm, P<.01) and slower (0.99±0.08 m/s vs 1.20±0.07 m/s, P<.05). No differences were found between the PD off and PD on states in the corrective stepping characteristics. Unlike control participants, PD participants often (56% of side-step strategy trials) failed to activate an APA before stepping, although their APAs, when present, were of similar latency and magnitude as for control participants. Levodopa on or off state did not significantly affect falls, APAs, or lateral step latency, velocity, or amplitude (P>.05).
PD participants showed significantly more postural instability and falls than age-matched controls when stepping was required for postural correction in response to lateral disequilibrium. Although PD participants usually used a lateral stepping strategy similar to that of controls in response to lateral translations, the lack of an anticipatory lateral weight shift and the bradykinetic characteristics of the stepping responses help explain the greater rate of falls in participants with PD. Differences were not found between the levodopa on and off states. The results suggest that rehabilitation aimed at improving lateral stability in PD should include facilitating APAs before a lateral side-stepping strategy with faster and larger steps to recover equilibrium.
Dammermann, Alexander; Maddox, Paul S; Desai, Arshad; Oegema, Karen
2008-02-25
Centrioles are surrounded by pericentriolar material (PCM), which is proposed to promote new centriole assembly by concentrating gamma-tubulin. Here, we quantitatively monitor new centriole assembly in living Caenorhabditis elegans embryos, focusing on the conserved components SAS-4 and SAS-6. We show that SAS-4 and SAS-6 are coordinately recruited to the site of new centriole assembly and reach their maximum levels during S phase. Centriolar SAS-6 is subsequently reduced by a mechanism intrinsic to the early assembly pathway that does not require progression into mitosis. Centriolar SAS-4 remains in dynamic equilibrium with the cytoplasmic pool until late prophase, when it is stably incorporated in a step that requires gamma-tubulin and microtubule assembly. These results indicate that gamma-tubulin in the PCM stabilizes the nascent daughter centriole by promoting microtubule addition to its outer wall. Such a mechanism may help restrict new centriole assembly to the vicinity of preexisting parent centrioles that recruit PCM.
Bright, T J
2013-01-01
Many informatics studies use content analysis to generate functional requirements for system development. Explication of this translational process from qualitative data to functional requirements can strengthen the understanding and scientific rigor when applying content analysis in informatics studies. To describe a user-centered approach transforming emergent themes derived from focus group data into functional requirements for informatics solutions and to illustrate the application of these methods to the development of an antibiotic clinical decision support system (CDS). The approach consisted of five steps: 1) identify unmet therapeutic planning information needs via Focus Group Study-I, 2) develop a coding framework of therapeutic planning themes to refine the domain scope to antibiotic therapeutic planning, 3) identify functional requirements of an antibiotic CDS system via Focus Group Study-II, 4) discover informatics solutions and functional requirements from coded data, and 5) determine the types of information needed to support the antibiotic CDS system and link them with the identified informatics solutions and functional requirements. The coding framework for Focus Group Study-I revealed unmet therapeutic planning needs. Twelve subthemes emerged and were clustered into four themes; analysis indicated a need for an antibiotic CDS intervention. Focus Group Study-II included five types of information needs. Comments from the Barrier/Challenge to information access and Function/Feature themes produced three informatics solutions and 13 functional requirements of an antibiotic CDS system. Comments from the Patient, Institution, and Domain themes generated required data elements for each informatics solution. This study presents one example explicating content analysis of focus group data and the process of deriving functional requirements from narrative data.
This 5-step method was illustrated through the development of an antibiotic CDS system, resolving unmet antibiotic prescribing needs. As a reusable approach, these techniques can be refined and applied to resolve unmet information needs with informatics interventions in additional domains.
Bright, T.J.
2013-01-01
Summary Background Many informatics studies use content analysis to generate functional requirements for system development. Explication of this translational process from qualitative data to functional requirements can strengthen the understanding and scientific rigor when applying content analysis in informatics studies. Objective To describe a user-centered approach transforming emergent themes derived from focus group data into functional requirements for informatics solutions and to illustrate the application of these methods to the development of an antibiotic clinical decision support system (CDS). Methods The approach consisted of five steps: 1) identify unmet therapeutic planning information needs via Focus Group Study-I, 2) develop a coding framework of therapeutic planning themes to refine the domain scope to antibiotic therapeutic planning, 3) identify functional requirements of an antibiotic CDS system via Focus Group Study-II, 4) discover informatics solutions and functional requirements from coded data, and 5) determine the types of information needed to support the antibiotic CDS system and link them with the identified informatics solutions and functional requirements. Results The coding framework for Focus Group Study-I revealed unmet therapeutic planning needs. Twelve subthemes emerged and were clustered into four themes; analysis indicated a need for an antibiotic CDS intervention. Focus Group Study-II included five types of information needs. Comments from the Barrier/Challenge to information access and Function/Feature themes produced three informatics solutions and 13 functional requirements of an antibiotic CDS system. Comments from the Patient, Institution, and Domain themes generated required data elements for each informatics solution. Conclusion This study presents one example explicating content analysis of focus group data and the process of deriving functional requirements from narrative data.
This 5-step method was illustrated through the development of an antibiotic CDS system, resolving unmet antibiotic prescribing needs. As a reusable approach, these techniques can be refined and applied to resolve unmet information needs with informatics interventions in additional domains. PMID:24454586
5 CFR 531.508 - Evaluation of quality step increase authority.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Evaluation of quality step increase... REGULATIONS PAY UNDER THE GENERAL SCHEDULE Quality Step Increases § 531.508 Evaluation of quality step... grant quality step increases. The agency shall take any corrective action required by the Office. [60 FR...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Step one. 14.503-1 Section... AND CONTRACT TYPES SEALED BIDDING Two-Step Sealed Bidding 14.503-1 Step one. (a) Requests for... use the two step method. (3) The requirements of the technical proposal. (4) The evaluation criteria...
NASA Technical Reports Server (NTRS)
Bourkland, Kristin L.; Liu, Kuo-Chia
2011-01-01
The Solar Dynamics Observatory (SDO) is a NASA spacecraft designed to study the Sun. It was launched on February 11, 2010 into a geosynchronous orbit, and uses a suite of attitude sensors and actuators to finely point the spacecraft at the Sun. SDO has three science instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the Extreme Ultraviolet Variability Experiment (EVE). SDO uses two High Gain Antennas (HGAs) to send science data to a dedicated ground station in White Sands, New Mexico. In order to meet the science data capture budget, the HGAs must be able to transmit data to the ground for a very large percentage of the time. Each HGA is a dual-axis antenna driven by stepper motors. Both antennas transmit data at all times, but only a single antenna is required in order to meet the transmission rate requirement. For portions of the year, one antenna or the other has an unobstructed view of the White Sands ground station. During other periods, however, the view from both antennas to the Earth is blocked for different portions of the day. During these times of blockage, the two HGAs take turns pointing to White Sands, with the other antenna pointing out to space. The HGAs handover White Sands transmission responsibilities to the unblocked antenna. There are two handover seasons per year, each lasting about 72 days, where the antennas hand off control every twelve hours. The non-tracking antenna slews back to the ground station by following a ground commanded trajectory and arrives approximately 5 minutes before the formerly tracking antenna slews away to point out into space. The SDO Attitude Control System (ACS) runs at 5 Hz, and the HGA Gimbal Control Electronics (GCE) run at 200 Hz. There are 40 opportunities for the gimbals to step each ACS cycle, with a hardware limitation of no more than one step every three GCE cycles. 
The ACS calculates the desired gimbal motion for tracking the ground station or for slewing, and sends the command to the GCE at 5 Hz. This command contains the number of gimbals steps for that ACS cycle, the direction of motion, the spacing of the steps, and the delay before taking the first step. The AIA and HMI instruments are sensitive to spacecraft jitter. Pre-flight analysis showed that jitter from the motion of the HGAs was a cause of concern. Three jitter mitigation techniques were developed to overcome the effects of jitter from different sources. The first method is the random step delay, which avoids gimbal steps hitting a cadence on a jitter-critical mode by pseudo-randomly delaying the first gimbal step in an ACS cycle. The second method of jitter mitigation is stagger stepping, which forbids the two antennas from taking steps during the same ACS cycle in order to avoid constructively adding jitter from two antennas. The third method is the inclusion of an instrument No Step Request (NSR), which allows the instruments to request a stoppage in gimbal stepping during the times when they are taking images. During the commissioning phase of the mission, a jitter test was performed onboard the spacecraft. Various sources of jitter, such as the reaction wheels, the High Gain Antenna motors, and the motion of the instrument filter wheels, were examined to determine the level of their effect on the instruments. During the HGA portion of the test, the jitter amplitudes from the single step of a gimbal were examined, as well as the amplitudes due to the execution of various gimbal rates. These jitter levels are compared with the gimbal jitter allocations for each instrument. Additionally, the jitter test provided insight into a readback delay that exists with the GCE. 
Pre-flight analysis suggested that gimbal steps scheduled to occur during the later portion of an ACS cycle would not be read during that cycle, resulting in a delay in the telemetered current gimbal position. Flight data from the jitter test confirmed this expectation. Analysis is presented that shows the readback delay does not have a negative impact on gimbal control. The decision was made to consider implementing two of the jitter mitigation techniques on board the spacecraft: stagger stepping and the NSR. Flight data from two sets of handovers, one set without jitter mitigation and the other with mitigation enabled, were examined. The trajectory of the predicted handover was compared with the measured trajectory for the two cases, showing that tracking was not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. In this paper, the flight results are examined from a test where the HGAs are following the path of a nominal handover with stagger stepping on and HMI NSRs enabled. In this case, the reaction wheels are moving at low speed and the instruments are taking pictures in their standard sequence. The flight data shows the level of jitter that the instruments see when their shutters are open. The HGA-induced jitter is well within the jitter requirement when the stagger step and NSR mitigation options are enabled. The SDO HGA pointing algorithm was designed to achieve nominal antenna pointing at the ground station, perform slews during handover season, and provide three HGA-induced jitter mitigation options without compromising pointing objectives.
During the commissioning phase, flight data sets were collected to verify the HGA pointing algorithm and demonstrate its jitter mitigation capabilities.
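Two of the mitigation techniques described above lend themselves to a compact sketch (hypothetical code, not the SDO flight software): a pseudo-random delay before the first gimbal step in an ACS cycle, subject to the one-step-per-three-GCE-cycles hardware limit, and a cycle-level stagger rule that defers one antenna's steps whenever the other antenna is stepping.

```python
import random

SLOTS = 200 // 5        # 40 GCE step opportunities per 5 Hz ACS cycle
SPACING = 3             # hardware limit: at least 3 GCE cycles between steps

def schedule_steps(n_steps, rng):
    """Slot indices for n_steps in one ACS cycle, with the first step delayed
    by a pseudo-random number of slots (the random step delay mitigation)."""
    span = (n_steps - 1) * SPACING
    assert span < SLOTS, "too many steps for one ACS cycle"
    delay = rng.randrange(SLOTS - span)
    return [delay + i * SPACING for i in range(n_steps)]

def stagger(requests_a, requests_b):
    """Stagger stepping: never step both antennas in the same ACS cycle.
    Antenna B's steps are deferred to the next cycle when A is stepping."""
    out, carry = [], 0
    for a, b in zip(requests_a, requests_b):
        b += carry
        carry = 0
        if a and b:
            carry, b = b, 0      # push B's steps to a later cycle
        out.append((a, b))
    return out
```

Randomising the first-step delay breaks up any fixed cadence that could excite a jitter-critical structural mode, while the stagger rule prevents the two antennas' step disturbances from adding constructively.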
Wang, Xiumei; Qin, Xiaoli; Li, Daoming; Yang, Bo; Wang, Yonghua
2017-07-01
This study reported a novel immobilized MAS1 lipase from marine Streptomyces sp. strain W007 for synthesizing high-yield biodiesel from waste cooking oils (WCO) with one-step addition of methanol in a solvent-free system. Immobilized MAS1 lipase was selected for the transesterification reactions with one-step addition of methanol due to its much higher biodiesel yield (89.50%) compared with the other three commercial immobilized lipases (<10%). The highest biodiesel yield (95.45%) was acquired with one-step addition of methanol under the optimized conditions. Moreover, it was observed that immobilized MAS1 lipase retained approximately 70% of its initial activity after being used for four batch cycles. Finally, the obtained biodiesel was further characterized using FT-IR and 1H and 13C NMR spectroscopy. These findings indicated that immobilized MAS1 lipase is a promising catalyst for biodiesel production from WCO with one-step addition of methanol at high methanol concentration. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tangential blowing for control of strong normal shock - Boundary layer interactions on inlet ramps
NASA Technical Reports Server (NTRS)
Schwendemann, M. F.; Sanders, B. W.
1982-01-01
The use of tangential blowing from a row of holes in an aft-facing step is found to provide good control of the ramp boundary layer, normal shock interaction on a fixed geometry inlet over a wide range of inlet mass flow ratios. Ramp Mach numbers of 1.36 and 1.96 are investigated. The blowing geometry is found to have a significant effect on system performance at the highest Mach number. The use of high-temperature air in the blowing system, however, has only a slight effect on performance. The required blowing rates are substantial for the most severe test conditions. In addition, the required blowing coefficient is found to be proportional to the normal shock pressure rise.
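Since the abstract reports that the required blowing coefficient scales with the normal shock pressure rise, it is useful to see the size of that driving quantity at the two ramp Mach numbers tested. A minimal sketch using the standard ideal-gas normal-shock relation (the function and γ value are textbook gas dynamics, not taken from this report):

```python
# Static pressure rise across a normal shock (ideal gas), the quantity
# the required blowing coefficient is reported to scale with.
GAMMA = 1.4  # ratio of specific heats for air (assumed)

def normal_shock_pressure_ratio(mach: float) -> float:
    """Rankine-Hugoniot static pressure ratio p2/p1 across a normal shock."""
    if mach <= 1.0:
        raise ValueError("a normal shock requires a supersonic upstream Mach number")
    return 1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (mach**2 - 1.0)

for m in (1.36, 1.96):  # the two ramp Mach numbers investigated
    print(f"M = {m}: p2/p1 = {normal_shock_pressure_ratio(m):.2f}")
```

At M = 1.96 the pressure rise is roughly twice that at M = 1.36, consistent with the observation that the most severe test conditions demand the highest blowing rates.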
A student guide to proofreading and writing in science.
Hyatt, Jon-Philippe K; Bienenstock, Elisa Jayne; Tilan, Jason U
2017-09-01
Scientific writing requires a distinct style and tone, whether the writing is intended for an undergraduate assignment or publication in a peer-reviewed journal. From the first to the final draft, scientific writing is an iterative process requiring practice, substantial feedback from peers and instructors, and comprehensive proofreading on the part of the writer. Teaching writing or proofreading is not common in university settings. Here, we present a collection of common undergraduate student writing mistakes and put forth suggestions for corrections as a first step toward proofreading and enhancing readability in subsequent draft versions. Additionally, we propose specific strategies pertaining to word choice, structure, and approach to make products more fluid and focused for an appropriate target audience. Copyright © 2017 the American Physiological Society.
Real-Time Aerodynamic Flow and Data Visualization in an Interactive Virtual Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2005-01-01
Significant advances have been made in non-intrusive flow field diagnostics in the past decade. Camera-based techniques are now capable of determining physical qualities such as surface deformation, surface pressure and temperature, flow velocities, and molecular species concentration. In each case, extracting the pertinent information from the large volume of acquired data requires powerful and efficient data visualization tools. The additional requirement for real-time visualization is fueled by an increased emphasis on minimizing test time in expensive facilities. This paper will address a capability titled LiveView3D, which is the first step in the development of an in-depth, real-time data visualization and analysis tool for use in aerospace testing facilities.
Lunar and Martian environmental interactions with nuclear power system radiators
NASA Technical Reports Server (NTRS)
Perez-Davis, Marla E.; Gaier, James R.; Katzan, Cynthia M.
1992-01-01
Future NASA space missions include a permanent manned presence on the moon and an expedition to the planet Mars. Such steps will require careful consideration of environmental interactions in the selection and design of required power systems. Several environmental constituents may be hazardous to performance integrity. Potential threats common to both the moon and Mars are low ambient temperatures, wide daily temperature swings, solar flux, and large quantities of dust. The surface of Mars provides the additional challenges of dust storms, winds, and a carbon dioxide atmosphere. In this review, the anticipated environmental interactions with surface power system radiators are described, as well as the impacts of these interactions on radiator durability, which were identified at NASA Lewis Research Center.
Watson, Alice; Bickmore, Timothy; Cange, Abby; Kulshreshtha, Ambar; Kvedar, Joseph
2012-01-26
Addressing the obesity epidemic requires the development of effective, scalable interventions. Pedometers and Web-based programs are beneficial in increasing activity levels but might be enhanced by the addition of nonhuman coaching. We hypothesized that a virtual coach would increase activity levels, via step count, in overweight or obese individuals beyond the effect observed using a pedometer and website alone. We recruited 70 participants with a body mass index (BMI) between 25 and 35 kg/m(2) from the Boston metropolitan area. Participants were assigned to one of two study arms and asked to wear a pedometer and access a website to view step counts. Intervention participants also met with a virtual coach, an automated, animated computer agent that ran on their home computers, set goals, and provided personalized feedback. Data were collected and analyzed in 2008. The primary outcome measure was change in activity level (percentage change in step count) over the 12-week study, split into four 3-week time periods. Major secondary outcomes were change in BMI and participants' satisfaction. The mean age of participants was 42 years; the majority of participants were female (59/70, 84%), white (53/70, 76%), and college educated (68/70, 97%). Of the initial 70 participants, 62 completed the study. Step counts were maintained in intervention participants but declined in controls. The percentage change in step count between those in the intervention and control arms, from the start to the end, did not reach the threshold for significance (2.9% vs -12.8% respectively, P = .07). However, repeated measures analysis showed a significant difference when comparing percentage changes in step counts between control and intervention participants over all time points (analysis of variance, P = .02). There were no significant changes in secondary outcome measures. The virtual coach was beneficial in maintaining activity level. 
The long-term benefits and additional applications of this technology warrant further study. ClinicalTrials.gov NCT00792207; http://clinicaltrials.gov/ct2/show/NCT00792207 (Archived by WebCite at http://www.webcitation.org/63sm9mXUD).
Enriching step-based product information models to support product life-cycle activities
NASA Astrophysics Data System (ADS)
Sarigecili, Mehmet Ilteris
The representation and management of product information in its life-cycle requires standardized data exchange protocols. Standard for Exchange of Product Model Data (STEP) is such a standard that has been used widely by the industries. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because they are too big and disorganized. Data exchange specifications (DEXs) and templates provide re-organized information models required in data exchange of specific activities for various businesses. DEXs show that it is possible to organize STEP-based product models in order to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires a fast, unambiguous and complete product information exchange between the members of a supply chain. Tolerance analysis activity, on the other hand, is used to verify the functional requirements of an assembly considering the worst case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases for various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret the GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.
A continuous damage model based on stepwise-stress creep rupture tests
NASA Technical Reports Server (NTRS)
Robinson, D. N.
1985-01-01
A creep damage accumulation model is presented that makes use of the Kachanov damage rate concept with a provision accounting for damage that results from a variable stress history. This is accomplished through the introduction of an additional term in the Kachanov rate equation that is linear in the stress rate. Specification of the material functions and parameters in the model requires a data base constituted of two types of tests: (1) standard constant-stress creep rupture tests, and (2) a sequence of two-step creep rupture tests.
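As a sketch, the modified rate equation described above might take the following form; the classical Kachanov power-law kinetics and the exponents r, q and coefficients A, B are assumptions for illustration, not Robinson's fitted material functions:

```latex
% Assumed form: classical Kachanov damage kinetics plus an added
% term linear in the stress rate \dot{\sigma}.
\frac{d\omega}{dt}
  = A\left(\frac{\sigma}{1-\omega}\right)^{r}
  + B\left(\frac{\sigma}{1-\omega}\right)^{q}\dot{\sigma},
\qquad 0 \le \omega < 1 .
```

This form makes the role of the two test types clear: constant-stress creep rupture tests (where the stress rate vanishes) isolate the first term, while the two-step tests exercise the stress-rate term needed to characterize variable stress histories.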
Rep. Sanford, Mark [R-SC-1
2013-10-30
House - 10/30/2013 Referred to the Committee on Intelligence (Permanent Select), and in addition to the Committee on Oversight and Government Reform, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the... Tracker: This bill has the status Introduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogucz, Edward A.
Healthy buildings provide high indoor environmental quality for occupants while simultaneously reducing energy consumption. This project advanced the development and marketability of envisioned healthy, energy-efficient buildings through studies that evaluated the use of emerging technologies in commercial and residential buildings. The project also provided resources required for homebuilders to participate in DOE’s Builders Challenge, concomitant with the goal to reduce energy consumption in homes by at least 30% as a first step toward achieving envisioned widespread availability of net-zero energy homes by 2030. In addition, the project included outreach and education concerning energy efficiency in buildings.
OCLC for the hospital library: the justification plan for hospital administration.
Allen, C W; Branson, J R
1982-07-01
This paper delineates the necessary steps to provide hospital administrators with the information needed to evaluate an automated system, OCLC, for addition to the medical library. Based on experience at the Norton-Children's Hospitals, included are: (1) cost analyses of present technical processing systems and cost comparisons with OCLC; (2) delineation of start-up costs for installing OCLC; (3) budgetary requirements for 1981; (4) the impact of automation on library systems, personnel, and services; (5) potential as a shared service; and (6) preparation of the proposal for administrative review.
Financing and cash flow management for the medical group practice.
Bert, Andrew J
2008-01-01
The expansion of a medical group practice and the addition of ancillary services require a substantial cash outlay. Obtaining proper financing to complete a successful expansion is a process that takes time, and there are critical steps that must be followed. The group's business objectives must be presented properly by developing a business plan detailing the practice and goals associated with the desired expansion. This article discusses some of the key elements that are essential in creating an overall effective business plan for the group medical practice.
Rep. Foster, Bill [D-IL-11
2018-05-23
House - 05/23/2018 Referred to the Committee on Armed Services, and in addition to the Committees on Foreign Affairs, and Intelligence (Permanent Select), for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the... Tracker: This bill has the status Introduced.
Contamination control program for the Extreme Ultraviolet Explorer instruments
NASA Technical Reports Server (NTRS)
Ray, David C.; Malina, Roger F.; Welsh, Barry Y.; Austin, James D.; Teti, Bonnie Gray
1989-01-01
A contamination-control program has been instituted for the optical components of the EUV Explorer satellite, whose 80-900 A range performance is easily degraded by particulate and molecular contamination. Cleanliness requirements have been formulated for the design, fabrication, and test phases of these instruments; in addition, contamination-control steps have been taken which prominently include the isolation of sensitive components in a sealed optics cavity. Prelaunch monitoring systems encompass the use of quartz crystal microbalances, particle witness plates, direct flight hardware sampling, and optical witness sampling of EUV scattering and reflectivity.
Shorofsky, Stephen R; Peters, Robert W; Rashba, Eric J; Gold, Michael R
2004-02-01
Determination of the defibrillation threshold (DFT) is an integral part of implantable cardioverter defibrillator (ICD) implantation. Two commonly used methods of DFT determination, the step-down method and the binary search method, were compared in 44 patients undergoing ICD testing for standard clinical indications. The step-down protocol used an initial shock of 18 J. The binary search method began with a shock energy of 9 J, and successive shock energies were increased or decreased depending on the success of the previous shock. The DFT was defined as the lowest energy that successfully terminated ventricular fibrillation. The binary search method has the advantage of requiring a predetermined number of shocks, but some have questioned its accuracy. The study found that the mean DFT obtained by the step-down method was 8.2 +/- 5.0 J, whereas by the binary search method it was 8.1 +/- 0.7 J (P = NS). The DFT differed by no more than one step between methods in 32 (71%) of patients. The number of shocks required to determine the DFT by the step-down method was 4.6 +/- 1.4, whereas, by definition, the binary search method always required three shocks. In conclusion, the binary search method is preferable because it is of comparable efficacy and requires fewer shocks.
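The two search protocols compared above can be sketched as follows. The 3 J decrement, the interval-halving rule, and the example threshold are illustrative assumptions, not the exact clinical protocols used in the study:

```python
# Sketch of the two DFT search strategies. `defibrillates(e)` stands in
# for a test shock: True if energy e terminates ventricular fibrillation.

def step_down_dft(defibrillates, start=18.0, step=3.0):
    """Shock at decreasing energies until the first failure.
    Returns (DFT = lowest successful energy, number of shocks)."""
    energy, lowest_success, shocks = start, None, 0
    while energy > 0:
        shocks += 1
        if defibrillates(energy):
            lowest_success = energy
            energy -= step
        else:
            break
    return lowest_success, shocks

def binary_search_dft(defibrillates, first=9.0):
    """Exactly three shocks; the adjustment halves after each shock,
    stepping down after a success and up after a failure."""
    energy, adjust, lowest_success, shocks = first, first / 2.0, None, 0
    for _ in range(3):
        shocks += 1
        if defibrillates(energy):
            lowest_success = energy
            energy -= adjust
        else:
            energy += adjust
        adjust /= 2.0
    return lowest_success, shocks

# A hypothetical patient whose true threshold is 8 J:
true_threshold = lambda e: e >= 8.0
print(step_down_dft(true_threshold))      # DFT 9 J after 5 shocks
print(binary_search_dft(true_threshold))  # DFT 9 J after 3 shocks
```

The sketch reproduces the paper's main point: both searches converge to a comparable threshold, but the binary search does so in a fixed, smaller number of shocks.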
Schrock, John B; Kraeutler, Matthew J; Dayton, Michael R; McCarty, Eric C
2017-06-01
The purpose of this study was to analyze how program directors (PDs) of orthopaedic surgery residency programs use United States Medical Licensing Examination (USMLE) Step 1 and 2 scores in screening residency applicants. A survey was sent to each allopathic orthopaedic surgery residency PD. PDs were asked if they currently use minimum Step 1 and/or 2 scores in screening residency applicants and if these criteria have changed in recent years. Responses were received from 113 of 151 PDs (75%). One program did not have the requested information and five declined participation, leaving 107 responses analyzed. Eighty-nine programs used a minimum USMLE Step 1 score (83%). Eighty-three programs (78%) required a Step 1 score ≥210, 80 (75%) required a score ≥220, 57 (53%) required a score ≥230, and 22 (21%) required a score ≥240. Multiple PDs mentioned the high volume of applications as a reason for using a minimum score and for increasing the minimum score in recent years. A large proportion of orthopaedic surgery residency PDs use a USMLE Step 1 minimum score when screening applications in an effort to reduce the number of applications to be reviewed.
S-World: A high resolution global soil database for simulation modelling (Invited)
NASA Astrophysics Data System (ADS)
Stoorvogel, J. J.
2013-12-01
There is an increasing call for high resolution soil information at the global level. A good example of such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts do exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess e.g., global soil carbon stocks. However, they do not allow for simulation runs with e.g., crop growth simulation models as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using e.g., drainage, salinization, and pedogenesis. Using the topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database.
Step 3: A model of soil formation is developed that focuses on the basic conceptual question of where we are within the range of a particular soil property at a particular location, given a specific soil type. The soil properties are predicted for each grid cell based on the soil type, the corresponding ranges of soil properties, and the co-variables. Step 4: Standard depth profiles are developed for each of the soil types using the diagnostic criteria of the soil types and soil profile information from the ISRIC-WISE database. The standard soil profiles are combined with the predicted values for the topsoil and subsoil, yielding unique soil profiles at each location. Step 5: In a final step, additional soil properties are added to the database using averages for the soil types and pedo-transfer functions. The methodology, denominated S-World (Soils of the World), results in readily available global maps with quantitative pedon data for modelling purposes. It forms the basis for the Global Gridded Crop Model Intercomparison carried out within AgMIP.
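The core of Step 3 can be sketched as placing each grid cell within its soil type's property range using a normalized co-variable score. The linear interpolation and the example numbers below are assumptions for illustration, not the actual S-World soil formation model:

```python
# Minimal sketch of Step 3: position a grid cell within the
# ISRIC-WISE-derived property range of its soil type.

def predict_property(prop_min: float, prop_max: float, score: float) -> float:
    """score in [0, 1], derived from co-variables (topography, climate,
    land cover), places the cell inside the soil type's property range."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("co-variable score must be normalized to [0, 1]")
    return prop_min + score * (prop_max - prop_min)

# e.g. a hypothetical topsoil organic carbon range of 5-20 g/kg
# for one soil type, at a cell with co-variable score 0.4:
print(predict_property(5.0, 20.0, 0.4))  # -> 11.0
```

The point of the design is that the soil type fixes the range while the co-variables fix the position within it, so cells sharing a soil type still receive spatially varying property values.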
Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong
2015-02-01
The purpose of this study was to establish a depth-control method in enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three-axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three-dimensional ablation to produce 10 cavities each, with the additive-pulse layer and single-step size set to corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step. These were used to determine minimum-difference values for both the additive-pulse layer (n) and single-step size (d). When the additive-pulse layer and the single-step size were set to 5 and 45, respectively, the depth error had a minimum of 2.25 μm, and 450 μm deep enamel cavities were produced. When performing three-dimensional ablation of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows for control of the ablation-depth error to the order of micrometers.
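The depth bookkeeping implied above is simple: the target cavity depth is the number of focal-plane steps times the single-step size, and the depth error is the offset from the measured depth. The numbers below use the reported optimum (single-step size 45, 450 μm cavity, minimum error 2.25 μm); the measured value is a hypothetical illustration:

```python
# Bookkeeping sketch of the depth-control scheme: target depth from
# step count and single-step size, error against a measured depth.

def theoretical_depth_um(n_steps: int, step_size_um: float) -> float:
    """Target cavity depth for n focal-plane-normal steps."""
    return n_steps * step_size_um

target = theoretical_depth_um(10, 45.0)  # 450 um cavity, as reported
measured = 452.25                        # hypothetical measurement
print(target, abs(measured - target))    # depth and error, both in um
```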
Ohrt, Thomas; Odenwälder, Peter; Dannenberg, Julia; Prior, Mira; Warkocki, Zbigniew; Schmitzová, Jana; Karaduman, Ramazan; Gregor, Ingo; Enderlein, Jörg; Fabrizio, Patrizia; Lührmann, Reinhard
2013-01-01
Step 2 catalysis of pre-mRNA splicing entails the excision of the intron and ligation of the 5′ and 3′ exons. The tasks of the splicing factors Prp16, Slu7, Prp18, and Prp22 in the formation of the step 2 active site of the spliceosome and in exon ligation, and the timing of their recruitment, remain poorly understood. Using a purified yeast in vitro splicing system, we show that only the DEAH-box ATPase Prp16 is required for formation of a functional step 2 active site and for exon ligation. Efficient docking of the 3′ splice site (3′SS) to the active site requires only Slu7/Prp18 but not Prp22. Spliceosome remodeling by Prp16 appears to be subtle as only the step 1 factor Cwc25 is dissociated prior to step 2 catalysis, with its release dependent on docking of the 3′SS to the active site and Prp16 action. We show by fluorescence cross-correlation spectroscopy that Slu7/Prp18 and Prp16 bind early to distinct, low-affinity binding sites on the step-1-activated B* spliceosome, which are subsequently converted into high-affinity sites. Our results shed new light on the factor requirements for step 2 catalysis and the dynamics of step 1 and 2 factors during the catalytic steps of splicing. PMID:23685439
Systems Maintenance Automated Repair Tasks (SMART)
NASA Technical Reports Server (NTRS)
Schuh, Joseph; Mitchell, Brent; Locklear, Louis; Belson, Martin A.; Al-Shihabi, Mary Jo Y.; King, Nadean; Norena, Elkin; Hardin, Derek
2010-01-01
SMART is a uniform automated discrepancy analysis and repair-authoring platform that improves technical accuracy and timely delivery of repair procedures for a given discrepancy (see figure a). SMART will minimize data errors, create uniform repair processes, and enhance the existing knowledge base of engineering repair processes. This innovation is the first tool developed that links the hardware specification requirements with the actual repair methods, sequences, and required equipment. SMART is flexibly designed to be usable by multiple engineering groups requiring decision analysis, and by any work authorization and disposition platform (see figure b). The organizational logic creates the link between specification requirements of the hardware and the specific procedures required to repair discrepancies. The first segment in the SMART process uses a decision analysis tree to define all the permutations between component/subcomponent/discrepancy/repair on the hardware. The second segment uses a repair matrix to define what the steps and sequences are for any repair defined in the decision tree. This segment also allows for the selection of specific steps from multivariable steps. SMART will also be able to interface with outside databases and to store information from them to be inserted into the repair-procedure document. Some of the steps will be identified as optional, and would only be used based on the location and the current configuration of the hardware. The output from this analysis would be sent to a work authoring system in the form of a predefined sequence of steps containing required actions, tools, parts, materials, certifications, and specific requirements controlling quality, functional requirements, and limitations.
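The two SMART segments described above can be sketched as a decision tree enumerating component/subcomponent/discrepancy/repair permutations plus a repair matrix giving the step sequence for each repair. All component names, discrepancies, and steps here are hypothetical examples, not SMART's actual data:

```python
# Segment 1: decision tree from (component, subcomponent) through
# discrepancy to the set of authorized repairs.
decision_tree = {
    ("panel", "fastener"): {
        "corrosion": ["blend", "replace"],
        "crack": ["replace"],
    },
}

# Segment 2: repair matrix mapping each repair to its ordered steps.
repair_matrix = {
    "blend": ["clean area", "blend to limits", "inspect"],
    "replace": ["remove fastener", "install new fastener", "torque check"],
}

def repair_options(component, subcomponent, discrepancy):
    """All (repair, ordered steps) pairs authorized for a discrepancy."""
    repairs = decision_tree[(component, subcomponent)][discrepancy]
    return [(r, repair_matrix[r]) for r in repairs]

for repair, steps in repair_options("panel", "fastener", "corrosion"):
    print(repair, "->", " / ".join(steps))
```

Keeping the tree and the matrix separate mirrors the design described in the abstract: the permutation logic can grow without duplicating step sequences, and a step sequence can be revised in one place for every discrepancy that uses it.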
Paquette, Maxime R; Fuller, Jason R; Adkin, Allan L; Vallis, Lori Ann
2008-09-01
This study investigated the effects of altering the base of support (BOS) at the turn point on anticipatory locomotor adjustments during voluntary changes in travel direction in healthy young and older adults. Participants were required to walk at their preferred pace along a 3-m straight travel path and continue to walk straight ahead or turn 40 degrees to the left or right for an additional 2-m. The starting foot and occasionally the gait starting point were adjusted so that participants had to execute the turn using a cross-over step with a narrow BOS or a lead-out step with a wide BOS. Spatial and temporal gait variables, magnitudes of angular segmental movement, and timing and sequencing of body segment reorientation were similar despite executing the turn with a narrow or wide BOS. A narrow BOS during turning generated an increased step width in the step prior to the turn for both young and older adults. Age-related changes when turning included reduced step velocity and step length for older compared to young adults. Age-related changes in the timing and sequencing of body segment reorientation prior to the turn point were also observed. A reduction in walking speed and an increase in step width just prior to the turn, combined with a delay in motion of the center of mass suggests that older adults used a more cautious combined foot placement and hip strategy to execute changes in travel direction compared to young adults. The results of this study provide insight into mobility constraints during a common locomotor task in older adults.
Validation of Helicopter Gear Condition Indicators Using Seeded Fault Tests
NASA Technical Reports Server (NTRS)
Dempsey, Paula; Brandon, E. Bruce
2013-01-01
A "seeded fault test" in support of a rotorcraft condition based maintenance program (CBM), is an experiment in which a component is tested with a known fault while health monitoring data is collected. These tests are performed at operating conditions comparable to operating conditions the component would be exposed to while installed on the aircraft. Performance of seeded fault tests is one method used to provide evidence that a Health Usage Monitoring System (HUMS) can replace current maintenance practices required for aircraft airworthiness. Actual in-service experience of the HUMS detecting a component fault is another validation method. This paper will discuss a hybrid validation approach that combines in service-data with seeded fault tests. For this approach, existing in-service HUMS flight data from a naturally occurring component fault will be used to define a component seeded fault test. An example, using spiral bevel gears as the targeted component, will be presented. Since the U.S. Army has begun to develop standards for using seeded fault tests for HUMS validation, the hybrid approach will be mapped to the steps defined within their Aeronautical Design Standard Handbook for CBM. This paper will step through their defined processes, and identify additional steps that may be required when using component test rig fault tests to demonstrate helicopter CI performance. The discussion within this paper will provide the reader with a better appreciation for the challenges faced when defining a seeded fault test for HUMS validation.
NASA Astrophysics Data System (ADS)
Seyfried, Daniel; Schubert, Karsten; Schoebel, Joerg
2014-12-01
Employing a continuous-wave radar system, with the stepped-frequency radar being one type of this class, all reflections from the environment are present continuously and simultaneously at the receiver. When utilizing such a radar system for Ground Penetrating Radar purposes, antenna cross-talk and the ground bounce reflection form an overall dominant signal contribution, while reflections from objects buried in the ground have quite weak amplitudes due to attenuation in the ground. This requires a large dynamic range of the receiver, which in turn requires high sensitivity of the radar system. In this paper we analyze the sensitivity of our vector network analyzer utilized as a stepped-frequency radar system for GPR pipe detection. We furthermore investigate the performance of increasing the sensitivity of the radar by means of appropriate averaging and low-noise pre-amplification of the received signal. It turns out that the improvement in sensitivity actually achievable may differ significantly from theoretical expectations. In addition, we give a descriptive explanation of why our experiments demonstrate that the sensitivity of the receiver is independent of the distance between the target object and the source of the dominant signal contribution. Finally, our investigations presented in this paper lead to a preferred setting of operation for our vector network analyzer in order to achieve the best detection capability for weak reflection amplitudes, hence making the radar system applicable for Ground Penetrating Radar purposes.
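The theoretical benchmark against which the achieved sensitivity improvement is compared can be stated in one line: averaging N sweeps reduces uncorrelated noise power by a factor of N, i.e. an SNR gain of 10·log10(N) dB. A minimal sketch of that textbook relation (the sweep counts are arbitrary examples):

```python
import math

# Theoretical SNR gain from averaging N sweeps with uncorrelated noise.
# As the paper notes, the gain actually achieved can fall short of this.

def averaging_gain_db(n_sweeps: int) -> float:
    if n_sweeps < 1:
        raise ValueError("need at least one sweep")
    return 10.0 * math.log10(n_sweeps)

for n in (10, 100, 1000):
    print(f"{n:5d} sweeps -> +{averaging_gain_db(n):.1f} dB")
```

Each factor of ten in averaging buys 10 dB in theory, which is exactly why measured shortfalls from this line are worth the descriptive explanation the paper provides.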
Development of DKB ETL module in case of data conversion
NASA Astrophysics Data System (ADS)
Kaida, A. Y.; Golosova, M. V.; Grigorieva, M. A.; Gubin, M. Y.
2018-05-01
Modern scientific experiments produce huge volumes of data that require new approaches to data processing and storage. These data, as well as their processing and storage, are accompanied by a valuable amount of additional information, called metadata, distributed over multiple informational systems and repositories and having a complicated, heterogeneous structure. Gathering these metadata for experiments in the field of high energy nuclear physics (HENP) is a complex issue requiring novel solutions. One of the tasks is to integrate metadata from different repositories into some kind of central storage. During the integration process, metadata taken from the original source repositories go through several processing steps: metadata aggregation, transformation according to the current data model, and loading into the general storage in a standardized form. Data Knowledge Base (DKB), an R&D project of the ATLAS experiment at the LHC, aims to provide fast and easy access to significant information about LHC experiments for the scientific community. The data integration subsystem being developed for the DKB project can be represented as a number of particular pipelines arranging data flow from data sources to the main DKB storage. The data transformation process, represented by a single pipeline, can be considered as a number of successive data transformation steps, where each step is implemented as an individual program module. This article outlines the specifics of the program modules used in the dataflow and describes one of the modules developed and integrated into the data integration subsystem of DKB.
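The pipeline structure described above, aggregation, transformation to the target data model, and loading into storage, with each step an individual module, can be sketched as follows. The record layout and stage bodies are hypothetical stand-ins, not actual DKB code:

```python
# Structural sketch of a single ETL pipeline: each processing step is
# an individual module, composed into one dataflow.

def aggregate(sources):
    """Gather raw metadata records from several source repositories."""
    return [record for repo in sources for record in repo]

def transform(records):
    """Normalize records to the (assumed) target data model."""
    return [{"id": r["id"], "payload": str(r.get("data", ""))} for r in records]

def load(records, storage):
    """Append standardized records to the general storage."""
    storage.extend(records)
    return storage

def run_pipeline(sources, storage):
    return load(transform(aggregate(sources)), storage)

store = run_pipeline([[{"id": 1, "data": 42}]], [])
print(store)
```

Because each stage is a separate function with a plain list-of-records interface, a stage can be replaced or a new transformation step spliced in without touching the rest of the dataflow, which is the property the module-per-step design is after.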
Contrast enhanced imaging with a stationary digital breast tomosynthesis system
NASA Astrophysics Data System (ADS)
Puett, Connor; Calliste, Jabari; Wu, Gongting; Inscoe, Christina R.; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping
2017-03-01
Digital breast tomosynthesis (DBT) captures some depth information and thereby improves the conspicuity of breast lesions compared to standard mammography. Using contrast during DBT may also help distinguish malignant from benign sites. However, adequate visualization of the low iodine signal requires a subtraction step to remove background signal and increase lesion contrast. Additionally, attention to factors that limit contrast, including scatter, noise, and artifact, is important during the image acquisition and post-acquisition processing steps. Stationary DBT (sDBT) is an emerging technology that offers higher spatial and temporal resolution than conventional DBT. This phantom-based study explored contrast-enhanced sDBT (CE sDBT) across a range of clinically appropriate iodine concentrations, lesion sizes, and breast thicknesses. The protocol included an effective scatter correction method and an iterative reconstruction technique that is unique to the sDBT system. The study demonstrated the ability of this CE sDBT system to collect projection images adequate for both temporal subtraction (TS) and dual-energy subtraction (DES). Additionally, the reconstruction approach preserved the improved contrast-to-noise ratio (CNR) achieved in the subtraction step. Finally, scatter correction increased the iodine signal and the CNR of iodine-containing regions in projection views and reconstructed image slices during both TS and DES. These findings support the ongoing study of sDBT as a potentially useful tool for contrast-enhanced breast imaging and also highlight the significant effect that scatter has on image quality during DBT.
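The dual-energy subtraction (DES) step mentioned above is commonly implemented as a weighted log subtraction of high- and low-energy projections, chosen so that background tissue cancels while the iodine signal survives. A minimal sketch under that assumption; the weight and the pixel values are illustrative, not the study's calibration:

```python
import math

# Minimal DES sketch: weighted log subtraction of high-energy (HE)
# and low-energy (LE) projection pixel intensities.

def dual_energy_subtract(high, low, w):
    """Pixel-wise weighted log subtraction: ln(HE) - w * ln(LE)."""
    return [math.log(h) - w * math.log(l) for h, l in zip(high, low)]

# With w = 1 here, a background pixel that matches at both energies
# cancels to zero, while a pixel where iodine attenuates the
# low-energy beam more strongly retains a positive signal:
print(dual_energy_subtract([100.0, 80.0], [100.0, 50.0], w=1.0))
```

In practice the weight is tuned per system so that the chosen background material, rather than only identical pixel values, nulls out; that tuning is part of what the subtraction protocol in the study optimizes.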
A Combined Experimental and Numerical Approach to the Laser Joining of Hybrid Polymer - Metal Parts
NASA Astrophysics Data System (ADS)
Rodríguez-Vidal, E.; Lambarri, J.; Soriano, C.; Sanz, C.; Verhaeghe, G.
A two-step method for the joining of opaque polymer to metal is presented. Firstly, the metal is structured locally on a micro-scale level, to ensure adhesion with the polymeric counterpart. In a second step, the opposite side of the micro-structured metal is irradiated by means of a laser source. The heat thereby created is conducted by the metal and results in the melting of the polymer at the interface. The polymer thereby adheres to the metal and flows into the previously engraved structures, creating an additional mechanical interlock between the two materials. The welding parameters are fine-tuned with the assistance of a finite element model, to ensure the required interface temperature. The method is illustrated using a dual phase steel joined to a fiber-reinforced polyamide. The effect of different microstructures, in particular geometry and cavity aspect ratio, on the joint's tensile-shear mechanical performance is discussed.
Brykala, M; Deptula, A; Rogowski, M; Lada, W; Olczak, T; Wawszczak, D; Smolinski, T; Wojtowicz, P; Modolo, G
A new method for the synthesis of uranium oxide microspheres (diameter <100 μm) has been developed. It is a variant of our patented Complex Sol-Gel Process, which has been used to synthesize high-quality powders of a wide variety of complex oxides. Starting uranyl-nitrate-ascorbate sols were prepared by addition of ascorbic acid to a uranyl nitrate hexahydrate solution and alkalization with aqueous ammonium hydroxide, and then emulsified in 2-ethylhexanol-1 containing 1 v/o SPAN-80. Drops of the emulsion were first gelled by extraction of water by the solvent. Destruction of the microspheres during thermal treatment, owing to highly reactive components in the gels, required modification of the gelation step by a Double Extraction Process (simultaneous extraction of water and nitrates using Primene JMT), which completely eliminates this problem. The final step was calcination of the obtained gel microspheres in air to triuranium octaoxide.
Analytic methods for design of wave cycles for wave rotor core engines
NASA Technical Reports Server (NTRS)
Resler, Edwin L., Jr.; Mocsari, Jeffrey C.; Nalim, M. R.
1993-01-01
A procedure to design a preliminary wave rotor cycle for any application is presented. To complete a cycle with heat addition, there are two separate but related design steps that must be followed. The 'wave' boundary conditions determine the allowable amount of heat added in any given case, and the ensuing wave pattern requires certain pressure discharge conditions for the process to be made cyclic. This procedure, when applied, gives a first estimate of the cycle performance and the necessary information for the next step in the design process, namely the application of a characteristic-based or other appropriate detailed one-dimensional wave calculation that locates the proper porting around the periphery of the wave rotor. Four examples of the design procedure are given to demonstrate its utility and generality. These examples also illustrate the large gains in performance that could be realized with the use of wave-rotor-enhanced propulsion cycles.
Development of Safe and Effective Botanical Dietary Supplements
2015-01-01
Regulated differently than drugs or foods, the market for botanical dietary supplements continues to grow worldwide. The recently implemented U.S. FDA regulation that all botanical dietary supplements must be produced using good manufacturing practice is an important step toward enhancing the safety of these products, but additional safeguards could be implemented, and unlike drugs, there are currently no efficacy requirements. To ensure a safe and effective product, botanical dietary supplements should be developed in a manner analogous to pharmaceuticals that involves identification of mechanisms of action and active constituents, chemical standardization based on the active compounds, biological standardization based on pharmacological activity, preclinical evaluation of toxicity and potential for drug–botanical interactions, metabolism of active compounds, and finally, clinical studies of safety and efficacy. Completing these steps will enable the translation of botanicals from the field to safe human use as dietary supplements. PMID:26125082
Bose, Ranjita K; Lau, Kenneth K S
2010-08-09
In this work, poly(2-hydroxyethyl methacrylate) (PHEMA), a widely used hydrogel, is synthesized using initiated chemical vapor deposition (iCVD), a one-step surface polymerization that does not use any solvents. iCVD synthesis is capable of producing linear stoichiometric polymers that are free from entrained unreacted monomer or solvent and, thus, do not require additional purification steps. The resulting films, therefore, are found to be noncytotoxic and also have low nonspecific protein adsorption. The kinetics of iCVD polymerization are tuned so as to achieve rapid deposition rates (approximately 1.5 μm/min), which in turn yield ultrahigh molecular weight polymer films that are mechanically robust with good water transport and swellability. The films have an extremely high degree of physical chain entanglement giving rise to high tensile modulus and storage modulus without the need for chemical cross-linking that compromises hydrophilicity.
Apparatus for mixing solutions in low gravity environments
NASA Technical Reports Server (NTRS)
Carter, Daniel C. (Inventor); Broom, Mary B. (Inventor)
1990-01-01
An apparatus is disclosed for allowing mixing of solutions in low gravity environments so as to carry out crystallization of proteins and other small molecules or other chemical syntheses, under conditions that maximize crystal growth and minimize disruptive turbulent effects. The apparatus is comprised of a housing, a plurality of chambers, and a cylindrical rotatable valve disposed between at least two of the chambers, said valve having an internal passageway so as to allow fluid movement between the chambers by rotation of the valve. In an alternate embodiment of the invention, a valve is provided having an additional internal passageway so that fluid from a third chamber can be mixed with the fluids of the first two chambers. This alternate embodiment of the invention is particularly desirable when it is necessary to provide a termination step to the crystal growth, or if a second synthetic step is required.
Laser interferometer for space-based mapping of Earth's gravity field
NASA Astrophysics Data System (ADS)
Dehne, Marina; Sheard, Benjamin; Gerberding, Oliver; Mahrdt, Christoph; Heinzel, Gerhard; Danzmann, Karsten
2010-05-01
Laser interferometry will play a key role in the next generation of GRACE-type satellite gravity missions. The measurement concepts for future missions include a heterodyne laser interferometer. Furthermore, it is favourable to use polarising components in the laser interferometer for beam splitting. In a first step, the influence of these components on the interferometer sensitivity has been investigated. Additionally, length stability on the nm scale has been validated. The next step will include a performance test of an interferometric SST system in an active symmetric transponder setup including two lasers and two optical benches. The design and construction of a quasi-monolithic interferometer for comparing the interferometric performance of non-polarising and polarising optics will be discussed. The results of the interferometric readout of a heterodyne configuration together with polarising optics will be presented to fulfil the phase sensitivity requirement of 1 nm/√Hz for a typical SST scenario.
Czajkowski, Robert; Ozymko, Zofia; Lojkowska, Ewa
2016-01-01
This is the first report describing precipitation of bacteriophage particles with zinc chloride as a method of choice to isolate infectious lytic bacteriophages against Pectobacterium spp. and Dickeya spp. from environmental samples. The isolated bacteriophages are ready to use for studying various (ecological) aspects of bacteria-bacteriophage interactions. The method comprises the well-known precipitation of phages from aqueous extracts of the test material by addition of ZnCl2, resuspension of bacteriophage particles in Ringer's buffer to remove the excess ZnCl2, and a soft agar overlay assay with the host bacterium to isolate individual infectious phage plaques. The method requires neither an enrichment step nor other steps (e.g., PEG precipitation, ultrafiltration, or ultracentrifugation) commonly used in other procedures, and results in isolation of active, viable bacteriophage particles.
Trends in Baby-Friendly® Care in the United States: Historical Influences on Contemporary Care.
Salera-Vieira, Jean; Zembo, Cynthia T
2016-01-01
The protection that breast-feeding affords both mother and infant against acute and chronic illness is well documented. Grassroots, public health, and governmental support for breast-feeding has influenced changes in maternal and newborn care. History indicates that additional influence has come in the form of governmental workshops and initiatives, professional organizations, and The Joint Commission. This includes the influence that the Baby-Friendly® Hospital Initiative and the Ten Steps to Successful Breastfeeding have had on infant care throughout the years. The requirements that hospitals must follow to implement all, or some, of the Ten Steps lead to changes in care that not only increase breast-feeding rates but also lead to health improvements. This article reviews how an upward trend in the adoption of Baby-Friendly practices to support breast-feeding impacts infant care.
Sagnella, Sharon M; Gong, Xiaojuan; Moghaddam, Minoo J; Conn, Charlotte E; Kimpton, Kathleen; Waddington, Lynne J; Krodkiewska, Irena; Drummond, Calum J
2011-03-01
We demonstrate that oral delivery of self-assembled nanostructured nanoparticles consisting of 5-fluorouracil (5-FU) lipid prodrugs results in a highly effective, target-activated, chemotherapeutic agent, and offers significantly enhanced efficacy over a commercially available alternative that does not self-assemble. The lipid prodrug nanoparticles have been found to significantly slow the growth of a highly aggressive mouse 4T1 breast tumour, and essentially halt the growth of a human MDA-MB-231 breast tumour in mouse xenografts. Systemic toxicity is avoided as prodrug activation requires a three-step, enzymatic conversion to 5-FU, with the third step occurring preferentially at the tumour site. Additionally, differences in the lipid prodrug chemical structure and internal nanostructure of the nanoparticle dictate the enzymatic conversion rate and can be used to control sustained release profiles. Thus, we have developed novel oral nanomedicines that combine sustained release properties with target-selective activation.
NASA Astrophysics Data System (ADS)
Mansournia, Mohammadreza; Arbabi, Akram
2017-01-01
Shape control of inorganic nanostructures generally requires using surfactants or ligands to passivate certain crystallographic planes. This paper describes a novel additive-free synthesis of cupric oxide nanostructures with different morphologies from aqueous solutions of copper(II) with Cl⁻, NO₃⁻, and SO₄²⁻ as counter ions. Through a one-step approach, CuO nanoleaves, nanoparticles and flower-like microspheres were directly synthesized at 80°C upon exposure to ammonia vapor using a cupric solution as a single precursor. Furthermore, during a two-step process, Cu(OH)2 nanofibers and nanorods were prepared under an ammonia atmosphere, then converted to CuO nanostructures with morphology preservation by heat treatment in air. The as-prepared Cu(OH)2 and CuO nanostructures were characterized using X-ray diffraction, scanning electron microscopy and Fourier transform infrared spectroscopy techniques.
Adaptive evolution of complex innovations through stepwise metabolic niche expansion.
Szappanos, Balázs; Fritzemeier, Jonathan; Csörgő, Bálint; Lázár, Viktória; Lu, Xiaowen; Fekete, Gergely; Bálint, Balázs; Herczeg, Róbert; Nagy, István; Notebaart, Richard A; Lercher, Martin J; Pál, Csaba; Papp, Balázs
2016-05-20
A central challenge in evolutionary biology concerns the mechanisms by which complex metabolic innovations requiring multiple mutations arise. Here, we propose that metabolic innovations accessible through the addition of a single reaction serve as stepping stones towards the later establishment of complex metabolic features in another environment. We demonstrate the feasibility of this hypothesis through three complementary analyses. First, using genome-scale metabolic modelling, we show that complex metabolic innovations in Escherichia coli can arise via changing nutrient conditions. Second, using phylogenetic approaches, we demonstrate that the acquisition patterns of complex metabolic pathways during the evolutionary history of bacterial genomes support the hypothesis. Third, we show how adaptation of laboratory populations of E. coli to one carbon source facilitates the later adaptation to another carbon source. Our work demonstrates how complex innovations can evolve through series of adaptive steps without the need to invoke non-adaptive processes.
ChIA-PET2: a versatile and flexible pipeline for ChIA-PET data analysis
Li, Guipeng; Chen, Yang; Snyder, Michael P.; Zhang, Michael Q.
2017-01-01
ChIA-PET2 is a versatile and flexible pipeline for analyzing different types of ChIA-PET data from raw sequencing reads to chromatin loops. ChIA-PET2 integrates all steps required for ChIA-PET data analysis, including linker trimming, read alignment, duplicate removal, peak calling and chromatin loop calling. It supports different kinds of ChIA-PET data generated from different ChIA-PET protocols and also provides quality controls for different steps of ChIA-PET analysis. In addition, ChIA-PET2 can use phased genotype data to call allele-specific chromatin interactions. We applied ChIA-PET2 to different ChIA-PET datasets, demonstrating its significantly improved performance as well as its ability to easily process ChIA-PET raw data. ChIA-PET2 is available at https://github.com/GuipengLi/ChIA-PET2. PMID:27625391
Steps Toward Understanding Mitochondrial Fe/S Cluster Biogenesis.
Melber, Andrew; Winge, Dennis R
2018-01-01
Iron-sulfur clusters (Fe/S clusters) are essential cofactors required throughout the clades of biology for performing a myriad of unique functions including nitrogen fixation, ribosome assembly, DNA repair, mitochondrial respiration, and metabolite catabolism. Although Fe/S clusters can be synthesized in vitro and transferred to a client protein without enzymatic assistance, biology has evolved intricate mechanisms to assemble and transfer Fe/S clusters within the cellular environment. In eukaryotes, the foundation of all cellular clusters starts within the mitochondria. The focus of this review is to detail the mitochondrial Fe/S biogenesis (ISC) pathway along with the Fe/S cluster transfer steps necessary to mature Fe/S proteins. New advances in our understanding of the mitochondrial Fe/S biogenesis machinery will be highlighted. Additionally, we will address various experimental approaches that have been successful in the identification and characterization of components of the ISC pathway. © 2018 Elsevier Inc. All rights reserved.
van Boven, Job F. M.; Maguire, Terence; Goyal, Pankaj; Altman, Pablo
2016-01-01
The aim of this paper was to propose key steps for community pharmacist integration into a patient care pathway for chronic obstructive pulmonary disease (COPD) management. A literature search was conducted to identify publications focusing on the role of the community pharmacist in identification and management of COPD. The literature search highlighted evidence supporting an important role for pharmacists at each of the four key steps in the patient care pathway for COPD management. Step 1 (primary prevention): pharmacists are ideally placed to provide information on disease awareness and risk prevention campaigns, and to encourage lifestyle interventions, including smoking cessation. Step 2 (early detection/case finding): pharmacists are often the first point of contact between the patient and the healthcare system and can therefore play an important role in the early identification of patients with COPD. Step 3 (management and ongoing support): pharmacists can assist patients by providing advice and education on dosage, inhaler technique, treatment expectations and the importance of adherence, and by supporting self‐management, including recognition and treatment of COPD exacerbations. Step 4 (review and follow‐up): pharmacists can play an important role in monitoring adherence and ongoing inhaler technique in patients with COPD. In summary, pharmacists are ideally positioned to play a vital role in all key stages of an integrated COPD patient care pathway from early disease detection to the support of management plans, including advice and counselling regarding medications, inhaler technique and treatment adherence. Areas requiring additional consideration include pharmacist training, increasing awareness of the pharmacist role, administration and reimbursement, and increasing physician–pharmacist collaboration. PMID:27510273
Pedometer determined physical activity tracks in African American adults: The Jackson Heart Study
2012-01-01
Background This study investigated the number of pedometer assessment occasions required to establish habitual physical activity in African American adults. Methods African American adults (mean age 59.9 ± 0.60 years; 59% female) enrolled in the Diet and Physical Activity Substudy of the Jackson Heart Study wore Yamax pedometers during 3-day monitoring periods, assessed on two to three distinct occasions, each separated by approximately one month. The stability of pedometer-measured PA was described as differences in mean steps/day across time, as intraclass correlation coefficients (ICC) by sex, age, and body mass index (BMI) category, and as the percent of participants changing steps/day quartiles across time. Results Valid data were obtained for 270 participants on either two or three different assessment occasions. Mean steps/day were not significantly different across assessment occasions (p values > 0.456). The overall ICCs for steps/day assessed on either two or three occasions were 0.57 and 0.76, respectively. In addition, 85% (two assessment occasions) and 76% (three assessment occasions) of all participants remained in the same steps/day quartile or changed one quartile over time. Conclusion The current study shows that an overall mean steps/day estimate based on a 3-day monitoring period did not differ significantly over 4-6 months. The findings were robust to differences in sex, age, and BMI categories. A single 3-day monitoring period is sufficient to capture habitual physical activity in African American adults. PMID:22512833
Exporters for Production of Amino Acids and Other Small Molecules.
Eggeling, Lothar
Microbes are talented catalysts for synthesizing valuable small molecules in their cytosol. However, to make full use of their skills, and those of metabolic engineers, the export of intracellularly synthesized molecules to the culture medium has to be considered. This step is as essential as each step in the synthesis of the metabolic engineer's favorite molecule, but is frequently not taken into account. To export small molecules across the microbial cell envelope, a range of different types of carrier proteins is recognized to be involved: primary active carriers, secondary active carriers, or proteins increasing diffusion. Relevant export may require just one carrier, as is the case with L-lysine export by Corynebacterium glutamicum, or involve up to four carriers, as known for L-cysteine excretion by Escherichia coli. Carriers for a number of small molecules of biotechnological interest are now recognized, for example for the production of peptides, nucleosides, diamines, organic acids, or biofuels. In addition to carriers involved in amino acid excretion, such carriers and their impact on product formation are described, as well as the relatedness of export carriers, which may serve as a hint for identifying further carriers required to improve product formation by engineering export.
Rapid non-enzymatic extraction method for isolating PCR-quality camelpox virus DNA from skin.
Yousif, A Ausama; Al-Naeem, A Abdelmohsen; Al-Ali, M Ahmad
2010-10-01
Molecular diagnostic investigations of orthopoxvirus (OPV) infections are performed using a variety of clinical samples including skin lesions, tissues from internal organs, blood and secretions. Skin samples are particularly convenient for rapid diagnosis and molecular epidemiological investigations of camelpox virus (CMLV). Classical extraction procedures and commercial spin-column-based kits are time consuming, relatively expensive, and require multiple extraction and purification steps in addition to proteinase K digestion. A rapid non-enzymatic procedure for extracting CMLV DNA from dried scabs or pox lesions was developed to overcome some of the limitations of the available DNA extraction techniques. The procedure requires as little as 10 mg of tissue and produces highly purified DNA [OD(260)/OD(280) ratios between 1.47 and 1.79] with concentrations ranging from 6.5 to 16 μg/mL. The extracted CMLV DNA was proven suitable for virus-specific qualitative and semi-quantitative PCR applications. Compared to spin-column and conventional viral DNA extraction techniques, the two-step extraction procedure saves money and time, and retains the potential for automation without compromising CMLV PCR sensitivity. Copyright (c) 2010 Elsevier B.V. All rights reserved.
ASIS v1.0: an adaptive solver for the simulation of atmospheric chemistry
NASA Astrophysics Data System (ADS)
Cariolle, Daniel; Moinat, Philippe; Teyssèdre, Hubert; Giraud, Luc; Josse, Béatrice; Lefèvre, Franck
2017-04-01
This article reports on the development and testing of the adaptive semi-implicit scheme (ASIS) solver for the simulation of atmospheric chemistry. To solve the ordinary differential equation systems associated with the time evolution of species concentrations, ASIS adopts a one-step linearized implicit scheme with specific treatment of the Jacobian of the chemical fluxes. It conserves mass and has a time-stepping module to control the accuracy of the numerical solution. In idealized box-model simulations, ASIS gives results similar to the higher-order implicit schemes derived from the Rosenbrock and Gear methods and requires less computation and run time at the moderate precision required for atmospheric applications. When implemented in the MOCAGE chemical transport model and the Laboratoire de Météorologie Dynamique Mars general circulation model, the ASIS solver performs well and reveals weaknesses and limitations of the original semi-implicit solvers used by these two models. ASIS can be easily adapted to various chemical schemes, and further developments are foreseen to increase its computational efficiency and to include the computation of species concentrations in the aqueous phase in addition to gas-phase chemistry.
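The one-step linearized implicit update at the heart of this family of schemes can be sketched in a few lines: y_{n+1} = y_n + Δt (I − Δt J)⁻¹ f(y_n), where J is the Jacobian of the chemical fluxes. The sketch below is a generic semi-implicit Euler step applied to an illustrative stiff two-species system, not the ASIS code itself; all names and rate constants are assumptions:

```python
import numpy as np

def semi_implicit_step(y, dt, f, jac):
    """One linearized implicit step: y + dt * (I - dt*J)^{-1} f(y)."""
    n = len(y)
    A = np.eye(n) - dt * jac(y)          # I - dt*J
    return y + dt * np.linalg.solve(A, f(y))

# Illustrative stiff linear "chemistry": fast loss of species 0 feeding species 1.
k = 1e4
f = lambda y: np.array([-k * y[0], k * y[0] - y[1]])
jac = lambda y: np.array([[-k, 0.0], [k, -1.0]])

y = np.array([1.0, 0.0])
dt = 0.01      # far larger than the fast timescale 1/k; explicit Euler would blow up
for _ in range(100):
    y = semi_implicit_step(y, dt, f, jac)
print(y)       # species 0 is driven to ~0; species 1 decays on its slow timescale
```

For a linear system this update coincides with backward Euler, so it remains stable at step sizes far exceeding the fastest chemical timescale, which is the property that makes such schemes attractive for atmospheric chemistry.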
NASA Astrophysics Data System (ADS)
Picot, Joris; Glockner, Stéphane
2018-07-01
We present an analytical study of discretization stencils for the Poisson problem and the incompressible Navier-Stokes problem when used with some direct forcing immersed boundary methods. This study uses, but is not limited to, second-order discretization and Ghost-Cell Finite-Difference methods. We show that the stencil size increases with the aspect ratio of rectangular cells, which is undesirable as it breaks assumptions of some linear system solvers. To circumvent this drawback, a modification of the Ghost-Cell Finite-Difference methods is proposed to reduce the size of the discretization stencil to the one observed for square cells, i.e. with an aspect ratio equal to one. Numerical results validate this proposed method in terms of accuracy and convergence, for the Poisson problem and both Dirichlet and Neumann boundary conditions. An improvement on error levels is also observed. In addition, we show that the application of the chosen Ghost-Cell Finite-Difference methods to the Navier-Stokes problem, discretized by a pressure-correction method, requires an additional interpolation step. This extra step is implemented and validated through well known test cases of the Navier-Stokes equations.
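The ghost-cell idea discussed above can be illustrated in one dimension: a fictitious node outside the boundary is assigned a linearly extrapolated value so that the Dirichlet condition holds exactly at the boundary face. The toy sketch below (setup and names are mine, not the authors' 2D/3D implementation) applies this closure to a cell-centered 1D Poisson problem:

```python
import numpy as np

# Toy 1D ghost-cell Finite-Difference sketch: solve u'' = f on (0,1) with
# Dirichlet values enforced through ghost nodes. Exact solution: sin(pi*x).
n = 64
h = 1.0 / n
x = (np.arange(n) + 0.5) * h              # cell-centered nodes
f = -np.pi**2 * np.sin(np.pi * x)         # u'' = f
ub0 = ub1 = 0.0                           # Dirichlet values at x=0 and x=1

# Tridiagonal Laplacian with ghost-cell closure: the ghost value
# u_g = 2*u_b - u_first (linear extrapolation through the boundary face)
# modifies the boundary rows' diagonal and the right-hand side.
A = np.zeros((n, n))
b = f * h**2
for i in range(n):
    A[i, i] = -2.0
    if i > 0:
        A[i, i - 1] = 1.0
    if i < n - 1:
        A[i, i + 1] = 1.0
A[0, 0] -= 1.0;  b[0] -= 2.0 * ub0
A[-1, -1] -= 1.0; b[-1] -= 2.0 * ub1

u = np.linalg.solve(A, b)
print(np.abs(u - np.sin(np.pi * x)).max())  # small discretization error
```

Note that the ghost closure only touches the two boundary rows, so the stencil stays three-point everywhere; the paper's concern is precisely that on high-aspect-ratio cells a naive multi-dimensional closure widens this stencil.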
Leidl, R; Jacobi, E; Knab, J; Schweikert, B
2006-04-01
This study provides an economic assessment of an additional psychological intervention in the rehabilitation of patients with chronic low-back pain and an evaluation of the results by decision makers. A piggy-back cost-utility analysis of a randomised clinical trial was performed, including a bootstrap analysis. Costs were measured using the cost accounting systems of the rehabilitation clinics and by surveying patients. Health-related quality of life was measured using the EQ-5D. Implications of different representations of the decision problem and corresponding decision rules concerning the cost-effectiveness plane are discussed. Compared with the 126 patients of the control arm, the 98 patients in the intervention arm gained on average 3.5 days in perfect health and saved 1219 euros in costs. However, because of the uncertainty involved, the results of the bootstrap analysis cover all quadrants of the cost-effectiveness plane. Using maximum willingness-to-pay per effect unit gained, decision rules can be defined for parts of the cost-effectiveness plane. These have to be aggregated in a further valuation step. The study results show that decisions on stochastic economic evaluation results may require an additional valuation step aggregating the various parts of the cost-effectiveness plane.
Measure Guideline: Buried and/or Encapsulated Ducts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, C.; Zoeller, W.; Mantha, P.
2013-08-01
Buried and/or encapsulated ducts (BEDs) are a class of advanced, energy-efficiency strategies intended to address the significant ductwork thermal losses associated with ducts installed in unconditioned attics. BEDs are ducts installed in unconditioned attics that are covered in loose-fill insulation and/or encapsulated in closed cell polyurethane spray foam insulation. This Measure Guideline covers the technical aspects of BEDs as well as the advantages, disadvantages, and risks of BEDs compared to other alternative strategies. This guideline also provides detailed guidance on installation of BEDs strategies in new and existing homes through step-by-step installation procedures. This Building America Measure Guideline synthesizes previously published research on BEDs and provides practical information to builders, contractors, homeowners, policy analysts, building professionals, and building scientists. Some of the procedures presented here, however, require specialized equipment or expertise. In addition, some alterations to duct systems may require a specialized license. Persons implementing duct system improvements should not go beyond their expertise or qualifications. This guideline provides valuable information for a building industry that has struggled to address ductwork thermal losses in new and existing homes. As building codes strengthen requirements for duct air sealing and insulation, flexibility is needed to address energy efficiency goals. While ductwork in conditioned spaces has been promoted as the panacea for addressing ductwork thermal losses, BEDs installations approach, and sometimes exceed, the performance of ductwork in conditioned spaces.
Hajian, Reza; Mousavi, Esmat; Shams, Nafiseh
2013-06-01
The net analyte signal standard addition method has been used for the simultaneous determination of sulphadiazine and trimethoprim by spectrophotometry in bovine milk and veterinary medicines. The method combines the advantages of the standard addition method with the net analyte signal concept, which enables the extraction of information concerning a certain analyte from spectra of multi-component mixtures. The method has several advantages, such as the use of the full spectrum: it does not require a separate calibration and prediction step, and only a few measurements are required for the determination. Cloud point extraction, based on the phenomenon of solubilisation, was used for the extraction of sulphadiazine and trimethoprim from bovine milk. It is based on the induction of micellar organised media using Triton X-100 as the extraction solvent. At the optimum conditions, the norm of the NAS vectors increased linearly with concentration in the range of 1.0-150.0 μmol L(-1) for both sulphadiazine and trimethoprim. The limits of detection (LOD) for sulphadiazine and trimethoprim were 0.86 and 0.92 μmol L(-1), respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.
DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR BENCH-SCALE REFORMER TREATABILITY STUDIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
BANNING DL
2011-02-11
This document describes the data quality objectives for selecting archived samples located at the 222-S Laboratory for Bench-Scale Reforming testing. The type, quantity, and quality of the data required to select the samples for fluidized bed steam reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluidized bed steam reformer. A determination of the adequacy of the fluidized bed steam reformer process to treat Hanford tank waste is required. The initial step in determining the adequacy of the fluidized bed steam reformer process is to select archived waste samples from the 222-S Laboratory that will be used in bench-scale tests. Analyses of the selected samples will be required to confirm that the samples meet the shipping requirements and for comparison to the bench-scale reformer (BSR) test sample selection requirements.
Coupling in the absence of tertiary amines.
Bodanszky, M; Bednarek, M A; Bodanszky, A
1982-10-01
In order to avoid base-catalyzed side reactions during coupling, attempts were made to render the addition of tertiary amines to the reaction mixture superfluous. Weak acids were applied for the removal of acid-labile protecting groups. Acetic acid and other carboxylic acids were considered unsuitable for this purpose in the coupling step. Pentachlorophenol and 2,4-dinitrophenol cleaved the Bpoc, Nps and Trt groups, but more practical rates were reached with solutions of 1-hydroxybenzotriazole (HOBt) in trifluoroethanol, in acetic acid, or in a mixture of phenol and p-cresol. In addition to acidolysis, HOBt salts of amino components could also be obtained through hydrogenolysis of the Z group or thiolysis of the Nps group in the presence of HOBt, or by displacement of acetic acid from acetate salts with HOBt. Acylation of HOBt salts of amino components with symmetrical or mixed anhydrides or with active esters did not require the addition of a tertiary amine.
Using cognitive task analysis to create a teaching protocol for bovine dystocia.
Read, Emma K; Baillie, Sarah
2013-01-01
When learning skilled techniques and procedures, students face many challenges. Learning is easier when detailed instructions are available, but experts often find it difficult to articulate all of the steps involved in a task or relate to the learner as a novice. This problem is further compounded when the technique is internal and unsighted (e.g., obstetrical procedures). Using expert bovine practitioners and a life-size model cow and calf, the steps and decision making involved in performing correction of two different dystocia presentations (anterior leg back and breech) were deconstructed using cognitive task analysis (CTA). Video cameras were positioned to capture movement inside and outside the cow model while the experts were asked to first perform the technique as they would in a real situation and then perform the procedure again as if articulating the steps to a novice learner. The audio segments were transcribed and, together with the video components, analyzed to create a list of steps for each expert. Consensus was achieved between experts during individual interviews followed by a group discussion. A "gold standard" list or teaching protocol was created for each malpresentation. CTA was useful in defining the technical and cognitive steps required to both perform and teach the tasks effectively. Differences between experts highlight the need for consensus before teaching the skill. In addition, the study identified several different, yet effective, techniques and provided information that could allow experts to consider other approaches they might use when their own technique fails.
Bahira, Meriem; McCauley, Micah J; Almaqwashi, Ali A; Lincoln, Per; Westerlund, Fredrik; Rouzina, Ioulia; Williams, Mark C
2015-10-15
Several multi-component DNA intercalating small molecules have been designed around ruthenium-based intercalating monomers to optimize DNA binding properties for therapeutic use. Here we probe the DNA binding ligand [μ-C4(cpdppz)2(phen)4Ru2](4+), which consists of two Ru(phen)2dppz(2+) moieties joined by a flexible linker. To quantify ligand binding, double-stranded DNA is stretched with optical tweezers and exposed to ligand under constant applied force. In contrast to other bis-intercalators, we find that ligand association is described by a two-step process, which consists of fast bimolecular intercalation of the first dppz moiety followed by ∼10-fold slower intercalation of the second dppz moiety. The second step is rate-limited by the requirement for a DNA-ligand conformational change that allows the flexible linker to pass through the DNA duplex. Based on our measured force-dependent binding rates and ligand-induced DNA elongation measurements, we are able to map out the energy landscape and structural dynamics for both ligand binding steps. In addition, we find that at zero force the overall binding process involves fast association (∼10 s), slow dissociation (∼300 s), and very high affinity (Kd ∼10 nM). The methodology developed in this work will be useful for studying the mechanism of DNA binding by other multi-step intercalating ligands and proteins. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
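As a consistency check on the zero-force figures quoted above, the slow dissociation (~300 s) and the ~10 nM affinity together imply a bimolecular association rate constant through the standard relation Kd = k_off/k_on. The numbers below are taken from the abstract; the calculation itself, assuming simple one-step kinetics, is only a back-of-envelope sketch.

```python
# Back-of-envelope kinetics from the figures quoted in the abstract:
# dissociation time ~300 s and Kd ~10 nM. Assumes simple one-step
# kinetics, with Kd = k_off / k_on.
k_off = 1.0 / 300.0   # s^-1, from the ~300 s dissociation time
K_d = 10e-9           # M, the ~10 nM affinity
k_on = k_off / K_d    # M^-1 s^-1, implied association rate constant
print(f"implied k_on = {k_on:.2e} M^-1 s^-1")
```

The implied k_on of a few times 10^5 M^-1 s^-1 is consistent with a diffusion-influenced but sterically constrained intercalation step.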
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wynne, Adam S.
2011-05-05
In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation from other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing requires suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements.
The above summarizes the reasons we created the MeDICi Integration Framework (MIF) that is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
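A linear pipeline of the kind described above can be sketched in a few lines of Python. The stages mirror the Web-crawler example from the abstract, but the function names are illustrative placeholders and not part of the MIF API.

```python
import re

# Illustrative pipeline stages mirroring the Web-crawler example;
# names are hypothetical, not the MIF API.

def strip_tags(doc):
    """Stage 1: remove HTML tags, leaving only raw text (crude sketch)."""
    return re.sub(r"<[^>]+>", "", doc)

def tokenize(text):
    """Stage 2: break the raw text into constituent tokens."""
    return text.split()

def find_names(tokens):
    """Stage 3: keep capitalized tokens as candidate names/places."""
    return [t for t in tokens if t[:1].isupper()]

def run_pipeline(data, stages):
    """Feed one discrete unit of data through the stages in order."""
    for stage in stages:
        data = stage(data)
    return data

result = run_pipeline("<p>Alice met Bob in Paris</p>",
                      [strip_tags, tokenize, find_names])
```

This is exactly the "minimal infrastructure" case the abstract contrasts with MIF: once forks, joins, queuing, and distribution enter the picture, a run-time framework replaces the simple loop.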
29 CFR 1952.232 - Completion of developmental steps and certification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Completion of developmental steps and certification. 1952... § 1952.232 Completion of developmental steps and certification. (a) In accordance with the requirements... plan received certification, effective February 8, 1980, as having completed all developmental steps...
Factors influencing the commercialisation of cloning in the pork industry.
Pratt, S L; Sherrer, E S; Reeves, D E; Stice, S L
2006-01-01
Production of cloned pigs using somatic cell nuclear transfer (SCNT) is a repeatable and predictable procedure and multiple labs around the world have generated cloned pigs and genetically modified cloned pigs. Due to the integrated nature of the pork production industry, pork producers are the most likely to benefit and are in the best position to introduce cloning into production systems. Cloning can be used to amplify superior genetics or be used in conjunction with genetic modifications to produce animals with superior economic traits. Though unproven, cloning could add value by reducing pig-to-pig variability in economically significant traits such as growth rate, feed efficiency, and carcass characteristics. However, cloning efficiencies using SCNT are low, but predictable. The inefficiencies are due to the intrusive nature of the procedure, the quality of oocytes and/or the somatic cells used in the procedure, the quality of the nuclear transfer embryos transferred into recipients, pregnancy rates of the recipients, and neonatal survival of the clones. Furthermore, in commercial animal agriculture, clones produced must be able to grow and thrive under normal management conditions, which include attainment of puberty and subsequent capability to reproduce. To integrate SCNT into the pork industry, inefficiencies at each step of the procedure must be overcome. In addition, it is likely that non-surgical embryo transfer will be required to deliver cloned embryos, and/or additional methods to generate high health clones will need to be developed. This review will focus on the state-of-the-art for SCNT in pigs and the steps required for practical implementation of pig cloning in animal agriculture.
Decker, Leslie M; Cignetti, Fabien; Hunt, Nathaniel; Potter, Jane F; Stergiou, Nicholas; Studenski, Stephanie A
2016-08-01
A U-shaped relationship between cognitive demand and gait control may exist in dual-task situations, reflecting opposing effects of external focus of attention and attentional resource competition. The purpose of the study was twofold: to examine whether gait control, as evaluated from step-to-step variability, is related to cognitive task difficulty in a U-shaped manner and to determine whether age modifies this relationship. Young and older adults walked on a treadmill without attentional requirement and while performing a dichotic listening task under three attention conditions: non-forced (NF), forced-right (FR), and forced-left (FL). The conditions increased in their attentional demand and requirement for inhibitory control. Gait control was evaluated by the variability of step parameters related to balance control (step width) and rhythmic stepping pattern (step length and step time). A U-shaped relationship was found for step width variability in both young and older adults and for step time variability in older adults only. Cognitive performance during dual tasking was maintained in both young and older adults. The U-shaped relationship, which presumably results from a trade-off between an external focus of attention and competition for attentional resources, implies that higher-level cognitive processes are involved in walking in young and older adults. Specifically, while these processes are initially involved only in the control of (lateral) balance during gait, they become necessary for the control of (fore-aft) rhythmic stepping pattern in older adults, suggesting that attentional resources turn out to be needed in all facets of walking with aging. Finally, despite the cognitive resources required by walking, both young and older adults spontaneously adopted a "posture second" strategy, prioritizing the cognitive task over the gait task.
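The step-to-step variability measures used above are simple dispersion statistics over a series of step parameters. A minimal sketch follows; the step widths are synthetic illustration values, not data from the study.

```python
from statistics import mean, stdev

# Synthetic series of step widths (cm), one value per step;
# illustrative only, not data from the study.
step_width_cm = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 9.7]

variability = stdev(step_width_cm)                 # SD across steps
cv_percent = 100.0 * variability / mean(step_width_cm)  # coefficient of variation
```

The same computation applies unchanged to step length and step time, the two rhythmic-stepping parameters analyzed in the study.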
Accessing FMS Functionality: The Impact of Design on Learning
NASA Technical Reports Server (NTRS)
Fennell, Karl; Sherry, Lance; Roberts, Ralph, Jr.
2004-01-01
In modern commercial and military aircraft, the Flight Management System (FMS) lies at the heart of the functionality of the airplane. The nature of the FMS has also caused great difficulties in learning and accessing this functionality. This study examines actual Air Force pilots who were qualified on the newly introduced advanced FMS and shows that the design of the system itself is a primary source of difficulty in learning the system. Twenty representative tasks were selected which the pilots could be expected to accomplish on an actual flight. These tasks were analyzed using the RAFIV stage model (Sherry, Polson, et al. 2002). This analysis demonstrates that a great burden is placed on remembering complex reformulation of the task-to-function mapping. 65% of the tasks required retaining one access step in memory to accomplish the task, 20% required two memorized access steps, and 15% required zero memorized access steps. The probability that a participant would make an access error on the tasks was: two memorized access steps - 74%, one memorized access step - 13%, and zero memorized access steps - 6%. Other factors were analyzed as well, including experience with the system and frequency of use. These results complete the picture of a system whose many memorized steps cause difficulty, especially when trying to find where to access the correct function.
Teaching Plate Tectonic Concepts using GeoMapApp Learning Activities
NASA Astrophysics Data System (ADS)
Goodwillie, A. M.; Kluge, S.
2012-12-01
GeoMapApp Learning Activities ( http://serc.carleton.edu/geomapapp/collection.html ) can help educators to expose undergraduate students to a range of earth science concepts using high-quality data sets in an easy-to-use map-based interface called GeoMapApp. GeoMapApp Learning Activities require students to interact with and analyse research-quality geoscience data as a means to explore and enhance their understanding of underlying content and concepts. Each activity is freely available through the SERC-Carleton web site and offers step-by-step student instructions and answer sheets. Also provided are annotated educator versions of the worksheets that include teaching tips, additional content and suggestions for further work. The activities can be used "off-the-shelf". Or, since the educator may require flexibility to tailor the activities, the documents are provided in Word format for easy modification. Examples of activities include one on the concept of seafloor spreading that requires students to analyse global seafloor crustal age data to calculate spreading rates in different ocean basins. Another activity has students explore hot spots using radiometric age dating of rocks along the Hawaiian-Emperor seamount chain. A third focusses upon the interactive use of contours and profiles to help students visualise 3-D topography on 2-D computer screens. A fourth activity provides a study of mass wasting as revealed through geomorphological evidence. The step-by-step instructions and guided inquiry approach reduce the need for teacher intervention whilst boosting the time that students can spend on productive exploration and learning. The activities can be used, for example, in a classroom lab with the educator present and as self-paced assignments in an out-of-class setting. GeoMapApp Learning Activities are funded through the NSF GeoEd program and are aimed at students in the introductory undergraduate, community college and high school levels. 
The activities are based upon GeoMapApp (http://www.geomapapp.org), a free map-based data exploration and visualisation tool that allows students to access a wide range of geoscience data in a virtual lab-like environment.
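The spreading-rate calculation at the heart of the first activity reduces to distance from the ridge axis divided by crustal age. A minimal sketch with illustrative numbers (not from the activity's answer key) exploits the convenient fact that km/Myr and mm/yr are the same unit.

```python
# Half spreading rate from crustal age data, as in the seafloor
# spreading activity. Numbers are illustrative, not from the
# activity's answer key.
def half_spreading_rate_mm_per_yr(distance_km, age_myr):
    """Half rate = distance from ridge axis / crustal age.
    1 km/Myr = 1e6 mm / 1e6 yr = 1 mm/yr, so the ratio is direct."""
    return distance_km * 1e6 / (age_myr * 1e6)

# Crust 100 km from the axis that is 5 Myr old:
rate = half_spreading_rate_mm_per_yr(distance_km=100.0, age_myr=5.0)
```

Doubling the half rate gives the full spreading rate between the two diverging plates.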
29 CFR 1952.112 - Completion of developmental steps and certification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Completion of developmental steps and certification. 1952... § 1952.112 Completion of developmental steps and certification. (a) In accordance with the requirements... of publication on November 19, 1976, as having completed all developmental steps specified in the...
A Coordinated Initialization Process for the Distributed Space Exploration Simulation
NASA Technical Reports Server (NTRS)
Crues, Edwin Z.; Phillips, Robert G.; Dexter, Dan; Hasan, David
2007-01-01
A viewgraph presentation on the federate initialization process for the Distributed Space Exploration Simulation (DSES) is described. The topics include: 1) Background: DSES; 2) Simulation requirements; 3) Nine Step Initialization; 4) Step 1: Create the Federation; 5) Step 2: Publish and Subscribe; 6) Step 3: Create Object Instances; 7) Step 4: Confirm All Federates Have Joined; 8) Step 5: Achieve initialize Synchronization Point; 9) Step 6: Update Object Instances With Initial Data; 10) Step 7: Wait for Object Reflections; 11) Step 8: Set Up Time Management; 12) Step 9: Achieve startup Synchronization Point; and 13) Conclusions
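The nine initialization steps listed above form a strict sequence, which can be sketched as data plus a trivial driver. The step names follow the viewgraph outline; the driver itself is illustrative, not part of DSES.

```python
# The nine DSES federate initialization steps, in order, per the
# viewgraph outline. run_init() is an illustrative driver only.
STEPS = [
    "Create the Federation",
    "Publish and Subscribe",
    "Create Object Instances",
    "Confirm All Federates Have Joined",
    "Achieve 'initialize' Synchronization Point",
    "Update Object Instances With Initial Data",
    "Wait for Object Reflections",
    "Set Up Time Management",
    "Achieve 'startup' Synchronization Point",
]

def run_init(execute):
    """Run each initialization step in order; execute() performs one step."""
    for number, name in enumerate(STEPS, start=1):
        execute(number, name)

log = []
run_init(lambda n, name: log.append(f"Step {n}: {name}"))
```

The two synchronization-point steps are what make the process coordinated: no federate proceeds past them until all federates have reached them.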
A Four-step Approach for Evaluation of Dose Additivity
A four step approach was developed for evaluating toxicity data on a chemical mixture for consistency with dose addition. Following the concepts in the U.S. EPA mixture guidance (EPA 2000), toxicologic interaction for a defined mixture (all components known) is departure from a c...
Rep. Cassidy, Bill [R-LA-6
2014-09-11
House - 09/11/2014 Referred to the Committee on Ethics, and in addition to the Committee on House Administration, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (All Actions) Tracker: This bill has the status Introduced.
NASA Technical Reports Server (NTRS)
Maddalon, J. M.; Hayhurst, K. J.; Neogi, N. A.; Verstynen, H. A.; Clothier, R. A.
2016-01-01
One of the key challenges to the development of a commercial Unmanned Aircraft System (UAS) market is the lack of explicit consideration of UAS in the current regulatory framework. Despite recent progress, additional steps are needed to enable broad UAS types and operational models. This paper discusses recent research that examines how a risk-based approach for safety might change the process and substance of airworthiness requirements for UAS. The project proposed risk-centric airworthiness requirements for a midsize unmanned rotorcraft used for agricultural spraying and also identified factors that may contribute to distinguishing safety risk among different UAS types and operational concepts. Lessons learned regarding how a risk-based approach can expand the envelope of UAS certification are discussed.
Flexible Power Distribution Based on Point of Load Converters
NASA Astrophysics Data System (ADS)
Dhallewin, G.; Galiana, D.; Mollard, J. M.; Schaper, W.; Strixner, E.; Tonicello, F.; Triggianese, M.
2014-08-01
Present digital electronic loads require low voltages and suffer from high currents. In addition, they need several different voltage levels to supply the different parts of digital devices like the core, the input/output I/F, etc. Distributed Power Architectures (DPA) with point-of-load (POL) converters (synchronous buck type) offer excellent performance in terms of efficiency and load step behaviour. They occupy little PCB area and are well suited for very low voltage (VLV) DC conversion (1V to 3.3V). The paper presents approaches to architectural design of POL based supplies including redundancy and protection as well as the requirements on a European hardware implementation. The main driver of the analysis is the flexibility of each element (DC/DC converter, protection, POL core) to cover a wide range of space applications.
Mohamed, E E H Hussein
2003-06-01
According to the CT and MRI appearances, 39 chronic subdural haematoma (CSDH) patients were suspected of having solid clots and/or a high likelihood of loculation. Craniotomy was planned from the start. Besides providing better exposure, excision of the dura and outer membrane, assumed to be the source of haematoma fluid, is an additional step to minimize the incidence of significant recollection. There were no additional operative or postoperative cranial and/or systemic complications when compared with other minor procedures. Two patients (5%) required percutaneous tapping and aspiration once. Accordingly, if a case is considered to be better managed with craniotomy, durectomy and outer membranectomy, this is an easy and safe technique with minimal incidence of recollection, morbidity and mortality.
Coherent concepts are computed in the anterior temporal lobes.
Lambon Ralph, Matthew A; Sage, Karen; Jones, Roy W; Mayberry, Emily J
2010-02-09
In his Philosophical Investigations, Wittgenstein famously noted that the formation of semantic representations requires more than a simple combination of verbal and nonverbal features to generate conceptually based similarities and differences. Classical and contemporary neuroscience has tended to focus upon how different neocortical regions contribute to conceptualization through the summation of modality-specific information. The additional yet critical step of computing coherent concepts has received little attention. Some computational models of semantic memory are able to generate such concepts by the addition of modality-invariant information coded in a multidimensional semantic space. By studying patients with semantic dementia, we demonstrate that this aspect of semantic memory becomes compromised following atrophy of the anterior temporal lobes and, as a result, the patients become increasingly influenced by superficial rather than conceptual similarities.
Chemical carcinogens and inhibitors of carcinogenesis in the human diet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, B.I.
1985-01-01
The induction of cancer by chemicals as presently understood involves a series of steps, some of which require the passage of time. Many substances that are potent carcinogens in experimental animals are known to exist in nature and occur as part of the human diet. In addition, many of the substances that are known to inhibit experimental carcinogenesis also exist in the human diet. Thus, in addition to industrially produced carcinogens, humans can be presumed to have evolved in an environment that contains both carcinogens and anti-carcinogens. There is also a great deal of experimental and human epidemiologic data on the influence of lipids, proteins and carbohydrates on cancer incidence rates; however, much of those data are confusing and conflicting.
Hughes, Douglas A.
2006-04-04
A method and system are provided for determining the torque required to launch a vehicle having a hybrid drive-train that includes at least two independently operable prime movers. The method includes the steps of determining the value of at least one control parameter indicative of a vehicle operating condition, determining the torque required to launch the vehicle from the at least one determined control parameter, comparing the torque available from the prime movers to the torque required to launch the vehicle, and controlling operation of the prime movers to launch the vehicle in response to the comparing step. The system of the present invention includes a control unit configured to perform the steps of the method outlined above.
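The claimed control sequence (determine a control parameter, derive the required launch torque, compare with available torque, command the prime movers) can be sketched as below. The grade-resistance torque model and all names are hypothetical placeholders for illustration, not the patented method.

```python
# Hedged sketch of the launch-control steps described above. The
# torque model (grade resistance only) and the even split between
# the two prime movers are illustrative assumptions.
def control_launch(grade_pct, vehicle_mass_kg, available_torque_nm):
    # Steps 1-2: derive required torque from a vehicle operating
    # condition (here road grade): F = m * g * grade, T = F * r.
    wheel_radius_m = 0.4
    required = vehicle_mass_kg * 9.81 * (grade_pct / 100.0) * wheel_radius_m
    # Step 3: compare available torque against required torque.
    # Step 4: command the prime movers accordingly (even split here).
    if available_torque_nm >= required:
        return {"launch": True, "per_mover_nm": required / 2.0}
    return {"launch": False, "per_mover_nm": 0.0}

decision = control_launch(grade_pct=5.0, vehicle_mass_kg=2000.0,
                          available_torque_nm=1000.0)
```

A real controller would fold in rolling resistance, driveline ratios, and per-mover torque limits; the point here is only the compare-then-command structure of the method.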
Balancing Chemical Equations: The Role of Developmental Level and Mental Capacity.
ERIC Educational Resources Information Center
Niaz, Mansoor; Lawson, Anton E.
1985-01-01
Tested two hypotheses: (1) formal reasoning is required to balance simple one-step equations; and (2) formal reasoning plus sufficient mental capacity are required to balance many-step equations. Independent variables included intellectual development, mental capacity, and degree of field dependence/independence. With 25 subjects, significance was…
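The task itself (as distinct from the reasoning the study measures) is easy to state in code: balancing a chemical equation means finding integer coefficients that conserve each element. A brute-force sketch for H2 + O2 -> H2O:

```python
from itertools import product

# Balancing a*H2 + b*O2 -> c*H2O by brute-force search over small
# integer coefficients. This illustrates the task, not the mental
# steps the study analyzes.
def balanced(a, b, c):
    # Hydrogen atoms: 2a on the left, 2c on the right.
    # Oxygen atoms:   2b on the left,  c on the right.
    return 2 * a == 2 * c and 2 * b == c

solution = next((a, b, c)
                for a, b, c in product(range(1, 5), repeat=3)
                if balanced(a, b, c))
```

A one-step item lets the solver fix one coefficient and read off the rest; a many-step item forces several intermediate results to be held in mind, which is where the study locates the mental-capacity demand.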
NASA Technical Reports Server (NTRS)
Cotton, William B.; Hilb, Robert; Koczo, Stefan, Jr.; Wing, David J.
2016-01-01
A set of five developmental steps building from the NASA TASAR (Traffic Aware Strategic Aircrew Requests) concept are described, each providing incrementally more efficiency and capacity benefits to airspace system users and service providers, culminating in a Full Airborne Trajectory Management capability. For each of these steps, the incremental Operational Hazards and Safety Requirements are identified for later use in future formal safety assessments intended to lead to certification and operational approval of the equipment and the associated procedures. Two established safety assessment methodologies that are compliant with the FAA's Safety Management System were used leading to Failure Effects Classifications (FEC) for each of the steps. The most likely FEC for the first three steps, Basic TASAR, Digital TASAR, and 4D TASAR, is "No effect". For step four, Strategic Airborne Trajectory Management, the likely FEC is "Minor". For Full Airborne Trajectory Management (Step 5), the most likely FEC is "Major".
A quick response four decade logarithmic high-voltage stepping supply
NASA Technical Reports Server (NTRS)
Doong, H.
1978-01-01
An improved high-voltage stepping supply for space instrumentation is described, where low power consumption and fast settling time between steps are required. The high-voltage stepping supply, utilizing an average power of 750 milliwatts, delivers a pair of mirror images with 64-level logarithmic outputs. It covers a four-decade range of ±2500 to ±0.29 volts, with an output stability of ±0.5 percent or ±20 millivolts for all line, load, and temperature variations. The supply provides a typical step settling time of 1 millisecond, with 100 microseconds for the lower two decades. The versatile design features of the high-voltage stepping supply provide a quick-response staircase generator as described, or a fixed voltage with the option to change levels as required over large dynamic ranges without circuit modifications. The concept can be implemented up to ±5000 volts. With these design features, the high-voltage stepping supply should find numerous applications where charged particle detection, electro-optical systems, and high-voltage scientific instruments are used.
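The 64-level, four-decade logarithmic staircase can be reproduced numerically: levels spaced geometrically from 2500 V down to 0.29 V, with the mirror image being the negated list. The step ratio below is derived from the endpoints, not a quoted design parameter.

```python
# Geometric (logarithmic) staircase from 2500 V down to 0.29 V in
# 64 levels, matching the supply's stated range. The constant ratio
# between adjacent levels is derived from the endpoints.
V_MAX, V_MIN, N = 2500.0, 0.29, 64
ratio = (V_MIN / V_MAX) ** (1.0 / (N - 1))   # ~0.866 per step
levels = [V_MAX * ratio**n for n in range(N)]
mirror = [-v for v in levels]                # the negative-polarity image
```

Sixty-three equal ratios spanning roughly 3.94 decades gives about 16 levels per decade, which is what makes a single staircase useful across such a large dynamic range.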
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, Haiming; Lin, Yaojun; Seidman, David N.
The preparation of transmission electron microcopy (TEM) samples from powders with particle sizes larger than ~100 nm poses a challenge. The existing methods are complicated and expensive, or have a low probability of success. Herein, we report a modified methodology for preparation of TEM samples from powders, which is efficient, cost-effective, and easy to perform. This method involves mixing powders with an epoxy on a piece of weighing paper, curing the powder–epoxy mixture to form a bulk material, grinding the bulk to obtain a thin foil, punching TEM discs from the foil, dimpling the discs, and ion milling the dimpled discs to electron transparency. Compared with the well-established and robust grinding–dimpling–ion-milling method for TEM sample preparation for bulk materials, our modified approach for preparing TEM samples from powders only requires two additional simple steps. In this article, step-by-step procedures for our methodology are described in detail, and important strategies to ensure success are elucidated. Furthermore, our methodology has been applied successfully for preparing TEM samples with large thin areas and high quality for many different mechanically milled metallic powders.
Continuous-Flow Synthesis of N-Succinimidyl 4-[18F]fluorobenzoate Using a Single Microfluidic Chip
Kimura, Hiroyuki; Tomatsu, Kenji; Saiki, Hidekazu; Arimitsu, Kenji; Ono, Masahiro; Kawashima, Hidekazu; Iwata, Ren; Nakanishi, Hiroaki; Ozeki, Eiichi; Kuge, Yuji; Saji, Hideo
2016-01-01
In the field of positron emission tomography (PET) radiochemistry, compact microreactors provide reliable and reproducible synthesis methods that reduce the use of expensive precursors for radiolabeling and make effective use of the limited space in a hot cell. To develop more compact microreactors for radiosynthesis of 18F-labeled compounds required for the multistep procedure, we attempted radiosynthesis of N-succinimidyl 4-[18F]fluorobenzoate ([18F]SFB) via a three-step procedure using a microreactor. We examined individual steps for [18F]SFB using a batch reactor and microreactor and developed a new continuous-flow synthetic method with a single microfluidic chip to achieve rapid and efficient radiosynthesis of [18F]SFB. In the synthesis of [18F]SFB using this continuous-flow method, the three-step reaction was successfully completed within 6.5 min and the radiochemical yield was 64 ± 2% (n = 5). In addition, it was shown that the quality of [18F]SFB synthesized by this method was equal to that synthesized by conventional methods using a batch reactor in the radiolabeling of bovine serum albumin with [18F]SFB. PMID:27410684
Inverse imaging of the breast with a material classification technique.
Manry, C W; Broschat, S L
1998-03-01
In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.
Roch, Samuel; Brinker, Alexander
2017-04-18
The rising evidence of microplastic pollution impacts on aquatic organisms in both marine and freshwater ecosystems highlights a pressing need for adequate and comparable detection methods. Available tissue digestion protocols are time-consuming (>10 h) and/or require several procedural steps, during which materials can be lost and contaminants introduced. This novel approach comprises an accelerated digestion step using sodium hydroxide and nitric acid in combination to digest all organic material within 1 h plus an additional separation step using sodium iodide which can be used to reduce mineral residues in samples where necessary. This method yielded a microplastic recovery rate of ≥95%, and all tested polymer types were recovered with only minor changes in weight, size, and color with the exception of polyamide. The method was also shown to be effective on field samples from two benthic freshwater fish species, revealing a microplastic burden comparable to that indicated in the literature. As a consequence, the present method saves time, minimizes the loss of material and the risk of contamination, and facilitates the identification of plastic particles and fibers, thus providing an efficient method to detect and quantify microplastics in the gastrointestinal tract of fishes.
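The sodium iodide separation step relies on density: common polymers are less dense than a concentrated NaI solution and therefore float away from mineral residues. A quick check with approximate textbook densities (g/cm^3; values are illustrative, not from the paper):

```python
# Density-based separation sketch: particles less dense than the
# NaI solution float. Polymer densities are approximate textbook
# values (g/cm^3), not measurements from the paper.
NAI_SOLUTION = 1.6   # typical concentrated sodium iodide solution
polymers = {"PE": 0.95, "PP": 0.90, "PS": 1.05, "PET": 1.38, "PVC": 1.40}

floats = [name for name, rho in polymers.items() if rho < NAI_SOLUTION]
```

Since most minerals (e.g. quartz at ~2.65 g/cm^3) sink in the same solution, the float fraction is strongly enriched in plastic, which is what makes the added step worthwhile for benthic fish samples.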
Color Addition and Subtraction Apps
NASA Astrophysics Data System (ADS)
Ruiz, Frances; Ruiz, Michael J.
2015-10-01
Color addition and subtraction apps in HTML5 have been developed for students as an online hands-on experience so that they can more easily master principles introduced through traditional classroom demonstrations. The evolution of the additive RGB color model is traced through the early IBM color adapters so that students can proceed step by step in understanding mathematical representations of RGB color. Finally, color addition and subtraction are presented for the X11 colors from web design to illustrate yet another real-life application of color mixing.
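Additive mixing of the kind the apps demonstrate is channel-wise addition clipped to the 8-bit range. A minimal sketch (an illustrative helper, not the apps' actual code):

```python
# Additive RGB mixing: add channels and clip to the 8-bit range.
# Illustrative helper, not the HTML5 apps' code.
def add_colors(c1, c2):
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
yellow = add_colors(RED, GREEN)    # additive red + green
white = add_colors(yellow, BLUE)   # all three primaries together
```

Subtractive mixing is the complementary operation on filtered light: each pigment removes a primary, so starting from white and subtracting channels produces cyan, magenta, and yellow instead.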
A Four Step Approach to Evaluate Mixtures for Consistency with Dose Addition
We developed a four step approach for evaluating chemical mixture data for consistency with dose addition for use in environmental health risk assessment. Following the concepts in the U.S. EPA mixture risk guidance (EPA 2000a,b), toxicological interaction for a defined mixture (...
Gaze Fluctuations Are Not Additively Decomposable: Reply to Bogartz and Staub
ERIC Educational Resources Information Center
Kelty-Stephen, Damian G.; Mirman, Daniel
2013-01-01
Our previous work interpreted single-lognormal fits to inter-gaze distance (i.e., "gaze steps") histograms as evidence of multiplicativity and hence interactions across scales in visual cognition. Bogartz and Staub (2012) proposed that gaze steps are additively decomposable into fixations and saccades, matching the histograms better and…
Continuous track paths reveal additive evidence integration in multistep decision making.
Buc Calderon, Cristian; Dewulf, Myrtille; Gevers, Wim; Verguts, Tom
2017-10-03
Multistep decision making pervades daily life, but its underlying mechanisms remain obscure. We distinguish four prominent models of multistep decision making, namely serial stage, hierarchical evidence integration, hierarchical leaky competing accumulation (HLCA), and probabilistic evidence integration (PEI). To empirically disentangle these models, we design a two-step reward-based decision paradigm and implement it in a reaching task experiment. In a first step, participants choose between two potential upcoming choices, each associated with two rewards. In a second step, participants choose between the two rewards selected in the first step. Strikingly, as predicted by the HLCA and PEI models, the first-step decision dynamics were initially biased toward the choice representing the highest sum/mean before being redirected toward the choice representing the maximal reward (i.e., initial dip). Only HLCA and PEI predicted this initial dip, suggesting that first-step decision dynamics depend on additive integration of competing second-step choices. Our data suggest that potential future outcomes are progressively unraveled during multistep decision making.
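The leaky competing accumulation dynamics at the core of the HLCA model can be illustrated with a minimal two-accumulator race. All parameters below are illustrative defaults, not the paper's fitted values, and the function is a generic LCA sketch rather than the hierarchical model itself.

```python
import random

def lca_trial(input1, input2, leak=0.1, inhibition=0.2,
              noise=0.05, threshold=1.0, dt=0.01, max_steps=10000, seed=1):
    """Two leaky, mutually inhibiting accumulators race to a threshold.
    Returns (winning choice, decision time)."""
    rng = random.Random(seed)
    x1 = x2 = 0.0
    for step in range(1, max_steps + 1):
        dx1 = (input1 - leak * x1 - inhibition * x2) * dt + noise * rng.gauss(0, dt ** 0.5)
        dx2 = (input2 - leak * x2 - inhibition * x1) * dt + noise * rng.gauss(0, dt ** 0.5)
        x1, x2 = max(0.0, x1 + dx1), max(0.0, x2 + dx2)  # activations stay non-negative
        if x1 >= threshold or x2 >= threshold:
            return (1 if x1 >= x2 else 2), step * dt
    return None, max_steps * dt

choice, rt = lca_trial(1.0, 0.6)  # the stronger input usually wins
```

The mutual-inhibition term is what produces competition between options; in the hierarchical variant, second-step evidence feeds additively into the first-step accumulators.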
Cholesterol effectively blocks entry of flavivirus.
Lee, Chyan-Jang; Lin, Hui-Ru; Liao, Ching-Len; Lin, Yi-Ling
2008-07-01
Japanese encephalitis virus (JEV) and dengue virus serotype 2 (DEN-2) are enveloped flaviviruses that enter cells through receptor-mediated endocytosis and low pH-triggered membrane fusion and then replicate in intracellular membrane structures. Lipid rafts, cholesterol-enriched lipid-ordered membrane domains, are platforms for a variety of cellular functions. In this study, we found that disruption of lipid raft formation by cholesterol depletion with methyl-beta-cyclodextrin or cholesterol chelation with filipin III reduces JEV and DEN-2 infection, mainly at the intracellular replication steps and, to a lesser extent, at viral entry. Using a membrane flotation assay, we found that several flaviviral nonstructural proteins are associated with detergent-resistant membrane structures, indicating that the replication complex of JEV and DEN-2 localizes to the membranes that possess the lipid raft property. Interestingly, we also found that addition of cholesterol readily blocks flaviviral infection, a result that contrasts with previous reports of other viruses, such as Sindbis virus, whose infectivity is enhanced by cholesterol. Cholesterol mainly affected the early step of the flavivirus life cycle, because the presence of cholesterol during viral adsorption greatly blocked JEV and DEN-2 infectivity. Flaviviral entry, probably at the fusion and RNA uncoating steps, was hindered by cholesterol. Our results thus suggest a stringent requirement for membrane components, especially with respect to the amount of cholesterol, in various steps of the flavivirus life cycle.
40 CFR 35.925-8 - Environmental review.
Code of Federal Regulations, 2014 CFR
2014-07-01
... impacts, consistent with the requirements of part 6 of this chapter, as part of facilities planning, in accordance with § 35.917-1(d)(7). The Regional Administrator must insure that an environmental impact... award of step 2 or step 3 grant assistance. (b) The Regional Administrator may not award step 2 or step...
40 CFR 35.925-8 - Environmental review.
Code of Federal Regulations, 2012 CFR
2012-07-01
... impacts, consistent with the requirements of part 6 of this chapter, as part of facilities planning, in accordance with § 35.917-1(d)(7). The Regional Administrator must insure that an environmental impact... award of step 2 or step 3 grant assistance. (b) The Regional Administrator may not award step 2 or step...
29 CFR 1952.162 - Completion of developmental steps and certification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Completion of developmental steps and certification. 1952... § 1952.162 Completion of developmental steps and certification. (a) In accordance with the requirements... certified on September 14, 1976 as having completed all developmental steps in its plan with regard to those...
van der Molen, Thys; van Boven, Job F M; Maguire, Terence; Goyal, Pankaj; Altman, Pablo
2017-01-01
The aim of this paper was to propose key steps for community pharmacist integration into a patient care pathway for chronic obstructive pulmonary disease (COPD) management. A literature search was conducted to identify publications focusing on the role of the community pharmacist in identification and management of COPD. The literature search highlighted evidence supporting an important role for pharmacists at each of the four key steps in the patient care pathway for COPD management. Step 1 (primary prevention): pharmacists are ideally placed to provide information on disease awareness and risk prevention campaigns, and to encourage lifestyle interventions, including smoking cessation. Step 2 (early detection/case finding): pharmacists are often the first point of contact between the patient and the healthcare system and can therefore play an important role in the early identification of patients with COPD. Step 3 (management and ongoing support): pharmacists can assist patients by providing advice and education on dosage, inhaler technique, treatment expectations and the importance of adherence, and by supporting self-management, including recognition and treatment of COPD exacerbations. Step 4 (review and follow-up): pharmacists can play an important role in monitoring adherence and ongoing inhaler technique in patients with COPD. In summary, pharmacists are ideally positioned to play a vital role in all key stages of an integrated COPD patient care pathway from early disease detection to the support of management plans, including advice and counselling regarding medications, inhaler technique and treatment adherence. Areas requiring additional consideration include pharmacist training, increasing awareness of the pharmacist role, administration and reimbursement, and increasing physician-pharmacist collaboration. © 2016 The Authors. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of British Pharmacological Society.
Coultrap, Steven J.; Browning, Michael D.; Proctor, William R.
2011-01-01
The hippocampal N-methyl-d-aspartate receptor (NMDAR) activity plays important roles in cognition and is a major substrate for ethanol-induced memory dysfunction. This receptor is a glutamate-gated ion channel, which is composed of NR1 and NR2 subunits in various brain areas. Although homomeric NR1 subunits form an active ion channel that conducts Na+ and Ca2+ currents, the incorporation of NR2 subunits allows this channel to be modulated by the Src family of kinases, phosphatases, and by simple molecules such as ethanol. We have found that short-term ethanol application inhibits the NMDAR activity via striatal enriched protein tyrosine phosphatase (STEP)-regulated mechanisms. The genetic deletion of the active form of STEP, STEP61, leads to marked attenuation of ethanol inhibition of NMDAR currents. In addition, STEP61 negatively regulates Fyn and p38 mitogen-activated protein kinase (MAPK), and these proteins are members of the NMDAR super molecular complex. Here we demonstrate, using whole-cell electrophysiological recording, Western blot analysis, and pharmacological manipulations, that neurons exposed to a 3-h, 45 mM ethanol treatment develop an adaptive attenuation of short-term ethanol inhibition of NMDAR currents in brain slices. Our results suggest that this adaptation of NMDAR responses is associated with a partial inactivation of STEP61, an activation of p38 MAPK, and a requirement for NR2B activity. Together, these data indicate that altered STEP61 and p38 MAPK signaling contribute to the modulation of ethanol inhibition of NMDARs in brain neurons. PMID:21680777
Norris, Michelle; Anderson, Ross; Motl, Robert W; Hayes, Sara; Coote, Susan
2017-03-01
The purpose of this study was to examine the minimum number of days needed to reliably estimate daily step count and energy expenditure (EE), in people with multiple sclerosis (MS) who walked unaided. Seven days of activity monitor data were collected for 26 participants with MS (age = 44.5 ± 11.9 years; time since diagnosis = 6.5 ± 6.2 years; Patient Determined Disease Steps = ≤3). Mean daily step count and mean daily EE (kcal) were calculated for all combinations of days (127 combinations), and compared to the respective 7-day mean daily step count or mean daily EE using intra-class correlations (ICC), the Generalizability Theory and Bland-Altman. For step count, ICC values of 0.94-0.98 and a G-coefficient of 0.81 indicate a minimum of any random 2-day combination is required to reliably calculate mean daily step count. For EE, ICC values of 0.96-0.99 and a G-coefficient of 0.83 indicate a minimum of any random 4-day combination is required to reliably calculate mean daily EE. For Bland-Altman analyses all combinations of days, bar single day combinations, resulted in a mean bias within ±10%, when expressed as a percentage of the 7-day mean daily step count or mean daily EE. A minimum of 2 days for step count and 4 days for EE, regardless of day type, is needed to reliably estimate daily step count and daily EE, in people with MS who walk unaided. Copyright © 2017 Elsevier B.V. All rights reserved.
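The combination analysis above can be sketched directly: enumerate every non-empty subset of the 7 days and express each subset mean's bias as a percentage of the 7-day mean (the Bland-Altman-style comparison). The step counts below are hypothetical illustrative values, not the study's data.

```python
from itertools import combinations

# Hypothetical 7 days of step counts for one participant.
daily_steps = [5200, 6100, 4800, 7000, 5600, 3900, 6400]
week_mean = sum(daily_steps) / 7

def percent_bias(days):
    """Bias of a subset-of-days mean, as a percentage of the 7-day mean."""
    subset_mean = sum(days) / len(days)
    return 100 * (subset_mean - week_mean) / week_mean

# All 127 non-empty combinations of the 7 days, grouped by subset size.
worst_by_size = {}
for k in range(1, 8):
    biases = [abs(percent_bias(c)) for c in combinations(daily_steps, k)]
    worst_by_size[k] = max(biases)

n_combos = sum(1 for k in range(1, 8) for _ in combinations(daily_steps, k))  # 127
```

The worst-case absolute bias can only shrink as more days are averaged, which mirrors the paper's finding that single days are unreliable while any 2-day (step count) or 4-day (EE) combination suffices.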
A Numerical Model for Wind-Wave Prediction in Deep Water.
1983-01-01
amounts of gage data are available. Additionally, if all steps are modeled correctly, factors such as direction and angular spreading, which are not...spherical orthogonal system if large oceanic areas are to be modeled. The wave model requires a rectangular grid and wind input at each of the...
SEM evaluation of metallization on semiconductors. [Scanning Electron Microscope
NASA Technical Reports Server (NTRS)
Fresh, D. L.; Adolphsen, J. W.
1974-01-01
A test method for the evaluation of metallization on semiconductors is presented and discussed. The method has been prepared in MIL-STD format for submittal as a proposed addition to MIL-STD-883. It is applicable to discrete devices and to integrated circuits and specifically addresses batch-process oriented defects. Quantitative accept/reject criteria are given for contact windows, other oxide steps, and general interconnecting metallization. Figures are provided that illustrate typical types of defects. Apparatus specifications, sampling plans, and specimen preparation and examination requirements are described. Procedures for glassivated devices and for multi-metal interconnection systems are included.
Engine out of the Chassis: Cell-Free Protein Synthesis and its Uses
Rosenblum, Gabriel; Cooperman, Barry S.
2013-01-01
The translation machinery is the engine of life. Extracting the cytoplasmic milieu from a cell affords a lysate capable of producing proteins in concentrations reaching tens of micromolar. Such lysates, derivable from a variety of cells, allow the facile addition and subtraction of components that are directly or indirectly related to the translation machinery and/or the over-expressed protein. The flexible nature of such cell-free expression systems, when coupled with high throughput monitoring, can be especially suitable for protein engineering studies, allowing one to bypass multiple steps typically required using conventional in vivo protein expression. PMID:24161673
Silicon-on-insulator polarization splitting and rotating device for polarization diversity circuits.
Liu, Liu; Ding, Yunhong; Yvind, Kresten; Hvam, Jørn M
2011-06-20
A compact and efficient polarization splitting and rotating device built on the silicon-on-insulator platform is introduced, which can be readily used for the interface section of a polarization diversity circuit. The device is compact, with a total length of a few tens of microns. It is also simple, consisting of only two parallel silicon-on-insulator wire waveguides with different widths, and thus requiring no additional and nonstandard fabrication steps. A total insertion loss of -0.6 dB and an extinction ratio of 12 dB have been obtained experimentally in the whole C-band.
[Scabies in childhood and adolescence].
Fölster-Holst, R; Sunderkötter, C
2016-12-01
Scabies is a common parasitosis, occurring worldwide and at any age that impairs the quality of life of patients and their families by the itching and the stigmatization. The main aims of therapy are destroying the mites Sarcoptes scabiei var. hominis that infect the stratum corneum and reduction of the itching. This requires detailed clarification for the patients and the parents of affected children as the instructions for the use of the drugs and also the necessary additional steps are often not correctly followed. Many special features regarding the clinical symptoms and therapy in early childhood should be taken into consideration.
Rep. Collins, Doug [R-GA-9
2018-05-24
House - 05/24/2018 Referred to the Committee on Energy and Commerce, and in addition to the Committee on Ways and Means, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee... (All Actions) Tracker: This bill has the status Introduced.
Lippmann, M.
1964-04-01
A cascade particle impactor capable of collecting particles and distributing them according to size is described. In addition, the device is capable of collecting on a pair of slides a series of different samples so that less time is required for the changing of slides. Other features of the device are its compactness and its ruggedness, making it useful under field conditions. Essentially the unit consists of a main body with a series of transverse jets discharging on a pair of parallel, spaced glass plates. The plates are capable of being moved incrementally in steps to obtain the multiple samples. (AEC)
Loganathan, Kavithaa; Chelme-Ayala, Pamela; El-Din, Mohamed Gamal
2015-03-15
Membrane filtration is an effective treatment method for oil sands tailings pond recycle water (RCW); however, membrane fouling and rapid decrease in permeate flux caused by colloids, organic matter, and bitumen residues present in the RCW hinder its successful application. This pilot-scale study investigated the impact of different pretreatment steps on the performance of a ceramic ultrafiltration (CUF) membrane used for the treatment of RCW. Two treatment trains were examined: treatment train 1 consisted of coagulant followed by a CUF system, while treatment train 2 included softening (Multiflo™ system) and coagulant addition, followed by a CUF system. The results indicated that minimum pretreatment (train 1) was required for almost complete solids removal. The addition of a softening step (train 2) provided an additional barrier to membrane fouling by reducing hardness-causing ions to negligible levels. More than 99% removal of turbidity and less than 20% removal of total organic carbon were achieved regardless of the treatment train used. Permeate fluxes normalized at 20 °C of 127-130 L/m(2) h and 111-118 L/m(2) h, with permeate recoveries of 90-93% and 90-94% were observed for the treatment trains 1 and 2, respectively. It was also found that materials deposited onto the membrane surface had an impact on trans-membrane pressure and influenced the required frequencies of chemically enhanced backwashes (CEBs) and clean-in-place (CIP) procedures. The CIP performed was successful in removing fouling and scaling materials such that the CUF performance was restored to baseline levels. The results also demonstrated that due to their low turbidity and silt density index values, permeates produced in this pilot study were suitable for further treatment by high pressure membrane processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Faria, Eliney F; Caputo, Peter A; Wood, Christopher G; Karam, Jose A; Nogueras-González, Graciela M; Matin, Surena F
2014-02-01
Outcomes of laparoscopic and robotic partial nephrectomy (LPN and RPN) are strongly influenced by tumor complexity and the learning curve. We analyzed a consecutive experience with RPN and LPN to discern whether warm ischemia time (WIT) is in fact improved while accounting for these two confounding variables and, if so, by which particular aspect of WIT. This is a retrospective analysis of consecutive procedures performed by a single surgeon between 2002-2008 (LPN) and 2008-2012 (RPN). Specifically, individual steps, including tumor excision and suturing of the intrarenal defect and parenchyma, were recorded at the time of surgery. Multivariate and univariate analyses were used to evaluate the influence of learning curve, tumor complexity, and the time kinetics of individual steps on WIT. Additionally, we considered the effect of RPN on the learning curve. A total of 146 LPNs and 137 RPNs were included. Considering renal function, WIT, suturing time, and renorrhaphy time, statistically significant differences were found in favor of RPN (p < 0.05). In the univariate analysis, surgical procedure, learning curve, clinical tumor size, and RENAL nephrometry score were statistically significant predictors for WIT (p < 0.05). RPN decreased the WIT on average by approximately 7 min compared to LPN even when adjusting for learning curve, tumor complexity, and both together (p < 0.001). We found RPN was associated with a shorter WIT when controlling for the influence of the learning curve and tumor complexity. The time required for tumor excision was not shortened, but the time required for suturing steps was significantly shortened.
NASA Astrophysics Data System (ADS)
Tian, Yaolan; Isotalo, Tero J.; Konttinen, Mikko P.; Li, Jiawei; Heiskanen, Samuli; Geng, Zhuoran; Maasilta, Ilari J.
2017-02-01
We demonstrate a method to fabricate narrow, down to a few micron wide metallic leads on top of a three-dimensional (3D) colloidal crystal self-assembled from polystyrene (PS) nanospheres of diameter 260 nm, using electron-beam lithography. This fabrication is not straightforward due to the fact that PS nanospheres cannot usually survive the harsh chemical treatments required in the development and lift-off steps of electron-beam lithography. We solve this problem by increasing the chemical resistance of the PS nanospheres using an additional electron-beam irradiation step, which allows the spheres to retain their shape and their self-assembled structure, even after baking to a temperature of 160 °C, the exposure to the resist developer and the exposure to acetone, all of which are required for the electron-beam lithography step. Moreover, we show that by depositing an aluminum oxide capping layer on top of the colloidal crystal after the e-beam irradiation, the surface is smooth enough so that continuous metal wiring can be deposited by the electron-beam lithography. Finally, we also demonstrate a way to self-assemble PS colloidal crystals into a microscale container, which was fabricated using direct-write 3D laser-lithography. Metallic wiring was also successfully integrated with the combination of a container structure and a PS colloidal crystal. Our goal is to make a device for studies of thermal transport in 3D phononic crystals, but other phononic or photonic crystal applications could also be envisioned.
Fujita, Yuichi; Tsujimoto, Ryoma; Aoki, Rina
2015-01-01
Chlorophyll a (Chl) is a light-absorbing tetrapyrrole pigment that is essential for photosynthesis. The molecule is produced from glutamate via a complex biosynthetic pathway comprising at least 15 enzymatic steps. The first half of the Chl pathway is shared with heme biosynthesis, and the latter half, called the Mg-branch, is specific to Mg-containing Chl a. Bilin pigments, such as phycocyanobilin, are additionally produced from heme, so these light-harvesting pigments also share many common biosynthetic steps with Chl biosynthesis. Some of these common steps in the biosynthetic pathways of heme, Chl and bilins require molecular oxygen for catalysis, such as oxygen-dependent coproporphyrinogen III oxidase. Cyanobacteria thrive in diverse environments in terms of oxygen levels. To cope with Chl deficiency caused by low-oxygen conditions, cyanobacteria have developed elaborate mechanisms to maintain Chl production, even under microoxic environments. The use of enzymes specialized for low-oxygen conditions, such as oxygen-independent coproporphyrinogen III oxidase, constitutes part of a mechanism adapted to low-oxygen conditions. Another mechanism adaptive to hypoxic conditions is mediated by the transcriptional regulator ChlR that senses low oxygen and subsequently activates the transcription of genes encoding enzymes that work under low-oxygen tension. In diazotrophic cyanobacteria, this multilayered regulation also contributes to Chl biosynthesis by supporting energy production for nitrogen fixation that also requires low-oxygen conditions. We will also discuss the evolutionary implications of cyanobacterial tetrapyrrole biosynthesis and regulation, because low oxygen-type enzymes also appear to be evolutionarily older than oxygen-dependent enzymes. PMID:25830590
A multi-scale convolutional neural network for phenotyping high-content cellular images.
Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian
2017-07-01
Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding validates further our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1. william_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Energy cost of stepping in place while watching television commercials.
Steeves, Jeremy A; Thompson, Dixie L; Bassett, David R
2012-02-01
Modifying sedentary television (TV) watching behaviors by stepping in place during commercials (TV commercial stepping) could increase physical activity and energy expenditure. The study's purpose was to determine the energy cost of TV commercial stepping and to quantify the amount of activity (number of steps and minutes) performed during 1 h of TV commercial stepping. In part 1, 23 adults (27.8 ± 7.0 yr) had their energy expenditure measured at rest, sitting, standing, stepping in place, and walking at 3.0 mph on the treadmill. The second part of this study involved 1 h of sedentary TV viewing and 1 h of TV commercial stepping. Actual steps were counted with a hand tally counter. There were no differences (P = 0.76) between the caloric requirements of reclining rest (79 ± 16 kcal·h(-1)) and sedentary TV viewing (81 ± 19 kcal·h(-1)). However, stepping in place (258 ± 76 kcal·h(-1)), walking at 3.0 mph on the treadmill (304 ± 71 kcal·h(-1)), and 1 h of TV commercial stepping (148 ± 40 kcal·h(-1)) had a higher caloric requirement than either reclining rest or sedentary TV viewing (P < 0.001). One hour of TV commercial stepping resulted in an average of 25.2 ± 2.6 min of physical activity and 2111 ± 253 steps. Stepping in place during commercials can increase the energy cost and amount of activity performed during TV viewing.
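The energy-cost differences reported above translate into simple arithmetic. The sketch below uses the abstract's mean values; the 2 h/day viewing assumption in the projection is hypothetical and not from the study.

```python
# Mean energy costs reported in the study (kcal per hour).
sedentary_tv = 81          # sedentary TV viewing
commercial_stepping = 148  # 1 h of TV viewing with stepping during commercials
stepping_in_place = 258    # continuous stepping in place

extra_per_hour = commercial_stepping - sedentary_tv   # 67 kcal/h of TV time
# Hypothetical projection: 2 h of TV per day, stepping through all commercials.
extra_per_week = extra_per_hour * 2 * 7               # 938 kcal/week
```

The gap between 148 and 258 kcal/h reflects that commercials occupy only part of each viewing hour (25.2 min of activity on average).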
Two-step chlorination: A new approach to disinfection of a primary sewage effluent.
Li, Yu; Yang, Mengting; Zhang, Xiangru; Jiang, Jingyi; Liu, Jiaqi; Yau, Cie Fu; Graham, Nigel J D; Li, Xiaoyan
2017-01-01
Sewage disinfection aims at inactivating pathogenic microorganisms and preventing the transmission of waterborne diseases. Chlorination is extensively applied for disinfecting sewage effluents. The objective of achieving a disinfection goal and reducing disinfectant consumption and operational costs remains a challenge in sewage treatment. In this study, we have demonstrated that, for the same chlorine dosage, a two-step addition of chlorine (two-step chlorination) was significantly more efficient in disinfecting a primary sewage effluent than a one-step addition of chlorine (one-step chlorination), and shown how the two-step chlorination was optimized with respect to time interval and dosage ratio. Two-step chlorination of the sewage effluent attained its highest disinfection efficiency at a time interval of 19 s and a dosage ratio of 5:1. Compared to one-step chlorination, two-step chlorination enhanced the disinfection efficiency by up to 0.81- or even 1.02-log for two different chlorine doses and contact times. An empirical relationship involving disinfection efficiency, time interval and dosage ratio was obtained by best fitting. Mechanisms (including a higher overall Ct value, an intensive synergistic effect, and a shorter recovery time) were proposed for the higher disinfection efficiency of two-step chlorination in the sewage effluent disinfection. Annual chlorine consumption costs in one-step and two-step chlorination of the primary sewage effluent were estimated. Compared to one-step chlorination, two-step chlorination reduced the cost by up to 16.7%. Copyright © 2016 Elsevier Ltd. All rights reserved.
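Splitting a total chlorine dose into the optimal two-step scheme reported above (dosage ratio 5:1, 19 s interval) is straightforward to express. The function name and the 6.0 mg/L example dose are illustrative assumptions, not values from the study.

```python
def two_step_doses(total_dose_mg_l, ratio=(5, 1)):
    """Split a total chlorine dose into first- and second-step additions
    according to a dosage ratio (first : second)."""
    first, second = ratio
    unit = total_dose_mg_l / (first + second)
    return first * unit, second * unit

dose1, dose2 = two_step_doses(6.0)  # 5.0 and 1.0 mg/L at the optimal 5:1 ratio
INTERVAL_S = 19                     # optimal interval between additions, seconds
```

Because both schemes use the same total dose, any efficiency gain comes from the dosing kinetics (higher overall Ct, synergy, shorter microbial recovery time), not from extra chlorine.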
Perez, Miguel A; Sudweeks, Jeremy D; Sears, Edie; Antin, Jonathan; Lee, Suzanne; Hankey, Jonathan M; Dingus, Thomas A
2017-06-01
Understanding causal factors for traffic safety-critical events (e.g., crashes and near-crashes) is an important step in reducing their frequency and severity. Naturalistic driving data offers unparalleled insight into these factors, but requires identification of situations where crashes are present within large volumes of data. Sensitivity and specificity of these identification approaches are key to minimizing the resources required to validate candidate crash events. This investigation used data from the Second Strategic Highway Research Program Naturalistic Driving Study (SHRP 2 NDS) and the Canada Naturalistic Driving Study (CNDS) to develop and validate different kinematic thresholds that can be used to detect crash events. Results indicate that the sensitivity of many of these approaches can be quite low, but can be improved by selecting particular threshold levels based on detection performance. Additional improvements in these approaches are possible, and may involve leveraging combinations of different detection approaches, including advanced statistical techniques and artificial intelligence approaches, additional parameter modifications, and automation of validation processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
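Kinematic-threshold triggering of the kind evaluated above can be sketched as a simple scan over an acceleration trace. The 0.65 g threshold and the trace values are illustrative assumptions; the studies tuned thresholds empirically to trade off sensitivity against specificity.

```python
def flag_candidates(accel_g, threshold_g=0.65):
    """Return the sample indices where |longitudinal acceleration| meets or
    exceeds the trigger threshold, marking candidate safety-critical events."""
    return [i for i, a in enumerate(accel_g) if abs(a) >= threshold_g]

# Hypothetical acceleration trace in g (e.g., one sample per 0.1 s).
trace = [0.02, -0.05, 0.10, -0.72, -0.90, -0.30, 0.01]
hits = flag_candidates(trace)  # [3, 4]
```

Lowering the threshold raises sensitivity (fewer missed crashes) but floods analysts with false candidates to validate; this cost is exactly what the sensitivity/specificity tuning above addresses.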
Evaluation of Braided Stiffener Concepts for Transport Aircraft Wing Structure Applications
NASA Technical Reports Server (NTRS)
Deaton, Jerry W.; Dexter, H. Benson (Editor); Markus, Alan; Rohwer, Kim
1995-01-01
Braided composite materials have potential for application in aircraft structures. Stiffeners, wing spars, floor beams, and fuselage frames are examples where braided composites could find application if cost effective processing and damage requirements are met. Braiding is an automated process for obtaining near-net shape preforms for fabrication of components for structural applications. Previous test results on braided composite materials obtained at NASA Langley indicate that damage tolerance requirements can be met for some applications. In addition, the braiding industry is taking steps to increase the material through-put to be more competitive with other preform fabrication processes. Data are presented on the compressive behavior of three braided stiffener preform fabric constructions as determined from individual stiffener crippling test and three stiffener wide panel tests. Stiffener and panel fabrication are described and compression data presented for specimens tested with and without impact damage. In addition, data are also presented on the compressive behavior of the stitched stiffener preform construction currently being used by McDonnell Douglas Aerospace in the NASA ACT wing development program.
NASA Astrophysics Data System (ADS)
Cekli, Hakki Ergun; Nije, Jelle; Ypma, Alexander; Bastani, Vahid; Sonntag, Dag; Niesing, Henk; Zhang, Linmiao; Ullah, Zakir; Subramony, Venky; Somasundaram, Ravin; Susanto, William; Matsunobu, Masazumi; Johnson, Jeff; Tabery, Cyrus; Lin, Chenxi; Zou, Yi
2018-03-01
In addition to lithography process and equipment induced variations, processes like etching, annealing, film deposition and planarization exhibit variations, each having their own intrinsic characteristics and leaving an effect, a `fingerprint', on the wafers. With ever tighter requirements for CD and overlay, controlling these process induced variations is both increasingly important and increasingly challenging in advanced integrated circuit (IC) manufacturing. For example, the on-product overlay (OPO) requirement for future nodes is approaching <3nm, requiring the allowable budget for process induced variance to become extremely small. Process variance control is seen as a bottleneck to further shrink, which drives the need for more sophisticated process control strategies. In this context we developed a novel `computational process control' strategy which provides proactive control of each individual wafer with the aim of maximizing yield, without introducing a significant impact on metrology requirements, cycle time or productivity. The complexity of the wafer process is approached by characterizing the full wafer stack, building a fingerprint library containing key patterning performance parameters like Overlay, Focus, etc. Historical wafer metrology is decomposed into dominant fingerprints using Principal Component Analysis. By associating observed fingerprints with their origin, e.g. process steps, tools and variables, we can give an inline assessment of the strength and origin of the fingerprints on every wafer. Once the fingerprint library is established, wafer-specific fingerprint correction recipes can be determined based on each wafer's processing history. Data science techniques are used in real-time to ensure that the library is adaptive. To realize this concept, ASML TWINSCAN scanners play a vital role with their on-board full wafer detection and exposure correction capabilities.
High density metrology data is created by the scanner for each wafer and on every layer during the lithography steps. This metrology data will be used to obtain the process fingerprints. Also, the per exposure and per wafer correction potential of the scanners will be utilized for improved patterning control. Additionally, the fingerprint library will provide early detection of excursions for inline root cause analysis and process optimization guidance.
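The core decomposition step — reducing historical per-wafer metrology maps to a few dominant fingerprints via Principal Component Analysis — can be sketched as follows. This is an illustrative sketch only: the synthetic wafer maps, the two-component library, and the SVD-based PCA are assumptions for demonstration, not ASML's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "fingerprints": spatial patterns on a 10x10 wafer grid
# (hypothetical stand-ins for, e.g., an anneal-like radial signature and a
# deposition-like tilt signature).
x, y = np.meshgrid(np.linspace(-1, 1, 10), np.linspace(-1, 1, 10))
fp_radial = (x**2 + y**2).ravel()
fp_tilt = x.ravel()

# Historical metrology: each wafer is a random mixture of the two
# fingerprints plus measurement noise.
n_wafers = 200
weights = rng.normal(size=(n_wafers, 2))
wafers = weights @ np.vstack([fp_radial, fp_tilt]) \
    + 0.01 * rng.normal(size=(n_wafers, 100))

# PCA via SVD on mean-centered data: the dominant components span the
# same subspace as the underlying fingerprints.
centered = wafers - wafers.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Projecting a new wafer onto the library gives its per-fingerprint
# strengths, which a correction recipe could then target.
new_wafer = 0.7 * fp_radial - 1.2 * fp_tilt
scores = (new_wafer - wafers.mean(axis=0)) @ Vt[:2].T
```

With only two true patterns in the data, the first two principal components should capture nearly all of the variance.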
Quarles, C Derrick; Randunu, K Manoj; Brumaghim, Julia L; Marcus, R Kenneth
2011-10-01
The analysis of metal-binding proteins requires careful sample manipulation to ensure that the metal-protein complex remains in its native state and the metal retention is preserved during sample preparation or analysis. Chemical analysis for the metal content in proteins typically involves some type of liquid chromatography/electrophoresis separation step coupled with an atomic (i.e., inductively coupled plasma-optical emission spectroscopy or -mass spectrometry) or molecular (i.e., electrospray ionization-mass spectrometry) analysis step that requires altered-solvent introduction techniques. UV-VIS absorbance is employed here to monitor the iron content in human holo-transferrin (Tf) under various solvent conditions, changing polarity, pH, ionic strength, and the ionic and hydrophobic environment of the protein. Iron loading percentages (i.e. 100% loading equates to 2 Fe(3+):1 Tf) were quantitatively determined to evaluate the effect of solvent composition on the retention of Fe(3+) in Tf. Maximum retention of Fe(3+) was found in buffered (20 mM Tris) solutions (96 ± 1%). Exposure to organic solvents and deionized H(2)O caused release of ~23-36% of the Fe(3+) from the binding pocket(s) at physiological pH (7.4). Salt concentrations similar to separation conditions used for ion exchange had little to no effect on Fe(3+) retention in holo-Tf. Unsurprisingly, changes in ionic strength caused by additions of guanidine HCl (0-10 M) to holo-Tf resulted in unfolding of the protein and loss of Fe(3+) from Tf; however, denaturing and metal loss was found not to be an instantaneous process for additions of 1-5 M guanidinium to Tf. In contrast, complete denaturing and loss of Fe(3+) was instantaneous with ≥6 M additions of guanidinium, and denaturing and loss of iron from Tf occurred in parallel proportions. 
Changes to the hydrophobicity of Tf (via addition of 0-14 M urea) had less effect on denaturing and release of Fe(3+) from the Tf binding pocket compared to changes in ionic strength. This journal is © The Royal Society of Chemistry 2011
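The loading metric defined in the abstract (100% loading equates to 2 Fe(3+) : 1 Tf) is simply a scaled molar ratio; a minimal sketch, with made-up mole values for illustration:

```python
def iron_loading_percent(fe_moles, tf_moles):
    """Percent iron loading of transferrin: 100% corresponds to the
    fully loaded 2 Fe(3+) : 1 Tf stoichiometry."""
    return 100.0 * (fe_moles / tf_moles) / 2.0

full = iron_loading_percent(2.0, 1.0)     # holo-Tf, both lobes occupied
partial = iron_loading_percent(1.4, 1.0)  # hypothetical partial release
```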
NASA Astrophysics Data System (ADS)
Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.
2010-12-01
Objective analysis schemes (OAS), also called ``successive correction methods'' or ``observation nudging'', have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, in contrast to the more complex geostatistical approaches, the OAS techniques used for this purpose are not optimal in the statistical sense. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through a successive correction of residuals resulting from a Gaussian kernel smoother applied on spatial samples. The CMA-OAS, first, poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level.
The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential optimization. The procedure is suited to relatively sparse rain gauge networks. To show the procedure, six storms are analyzed at hourly steps over 10,663 km2. Results generally indicated an improved quality with respect to other methods evaluated: a standard mean-field bias adjustment, an OAS spatially variable adjustment with multiplicative factors, ordinary cokriging, and kriging with external drift. In theory, it could be equally applicable to gauge-satellite estimates and other hydrometeorological variables.
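The successive-correction kernel smoother at the core of an OAS can be sketched with a minimal 1-D example. This is a sketch under stated assumptions: the gauge values, radar first guess, and fixed sequence of kernel radii are invented, and the CMA bias decomposition and regionalization steps of the actual procedure are not shown.

```python
import numpy as np

def oas_successive_correction(grid_x, background, gauge_x, gauge_obs,
                              radii=(20.0, 10.0, 5.0)):
    """Successive correction: repeatedly spread gauge residuals onto the
    grid with a Gaussian kernel of decreasing influence radius, starting
    from the radar background (first guess)."""
    analysis = background.copy()
    for r in radii:
        # Residuals at gauge locations against the current analysis.
        at_gauges = np.interp(gauge_x, grid_x, analysis)
        resid = gauge_obs - at_gauges
        # Normalized Gaussian kernel weights from gauges to grid points.
        w = np.exp(-((grid_x[:, None] - gauge_x[None, :]) ** 2) / (2 * r**2))
        correction = (w @ resid) / w.sum(axis=1)
        analysis = analysis + correction
    return analysis

# Radar field (background) underestimates rain; three gauges see the truth.
grid_x = np.linspace(0.0, 100.0, 101)
background = np.full_like(grid_x, 2.0)   # radar first guess, mm/h
gauge_x = np.array([20.0, 50.0, 80.0])
gauge_obs = np.array([5.0, 6.0, 4.0])    # gauge observations, mm/h

analysis = oas_successive_correction(grid_x, background, gauge_x, gauge_obs)
```

As the radius shrinks over the passes, the analysis is nudged toward the observations near the gauges while relaxing smoothly toward the first guess in between.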
77 FR 48423 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-14
... Company Model 737-500 series airplanes. This AD was prompted by reports of chem-mill step cracking on the aft lower lobe fuselage skins. This AD requires inspections of the fuselage skin at the chem- mill... 22686). That NPRM proposed to require inspections of the fuselage skin at the chem-mill steps, and...
DOT National Transportation Integrated Search
1993-10-01
This document describes the Concept of Operations and Generic System Requirements for : the next generation of Traffic Management Centers (TMC). Four major steps comprise the : development of this Concept of Operations. The first step was to survey t...
Elucidating nitric oxide synthase domain interactions by molecular dynamics.
Hollingsworth, Scott A; Holden, Jeffrey K; Li, Huiying; Poulos, Thomas L
2016-02-01
Nitric oxide synthase (NOS) is a multidomain enzyme that catalyzes the production of nitric oxide (NO) by oxidizing L-Arg to NO and L-citrulline. NO production requires multiple interdomain electron transfer steps between the flavin mononucleotide (FMN) and heme domain. Specifically, NADPH-derived electrons are transferred to the heme-containing oxygenase domain via the flavin adenine dinucleotide (FAD) and FMN containing reductase domains. While crystal structures are available for both the reductase and oxygenase domains of NOS, to date there is no atomic level structural information on domain interactions required for the final FMN-to-heme electron transfer step. Here, we evaluate a model of this final electron transfer step for the heme-FMN-calmodulin NOS complex based on the recent biophysical studies using a 105-ns molecular dynamics trajectory. The resulting equilibrated complex structure is very stable and provides a detailed prediction of interdomain contacts required for stabilizing the NOS output state. The resulting equilibrated complex model agrees well with previous experimental work and provides a detailed working model of the final NOS electron transfer step required for NO biosynthesis. © 2015 The Protein Society.
40 CFR 35.935-4 - Step 2+3 projects.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Step 2+3 projects. 35.935-4 Section 35... STATE AND LOCAL ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.935-4 Step 2+3 projects. A grantee which has received step 2+3 grant assistance must make submittals required by...
40 CFR 60.1060 - What steps must I complete for my materials separation plan?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What steps must I complete for my... Requirements: Materials Separation Plan § 60.1060 What steps must I complete for my materials separation plan? (a) For your materials separation plan, you must complete nine steps: (1) Prepare a draft materials...
Sequential addition reactions of two molecules of Grignard reagents to thioformamides.
Murai, Toshiaki; Ui, Kazuki; Narengerile
2009-08-07
Sequential addition reactions of two molecules of Grignard reagents to thioformamides were found to yield tertiary amines in an efficient manner. The addition of two different Grignard reagents can be accomplished by using one equivalent of arylmagnesium reagent in the first step. In the second step, a variety of reagents such as alkyl, alkenyl, aryl, and alkynyl reagents were used to afford the corresponding amines in good to high yields.
Keat, R M; Thomas, M; McKechnie, A
2017-05-01
Sedentary behaviour is widely associated with deleterious health outcomes that in modern medicine have similar connotations to smoking tobacco and alcohol misuse. The integration of e-portfolio, e-logbook, British National Formulary (BNF) and encrypted emails has made smartphones a necessity for trainees. Smartphones also have the ability to record the amount of exercise taken, which allows activity at work to be monitored. The aim of this study was to compare the activity of the same group of dental core trainees when they worked within a large multisite teaching hospital and a smaller district general hospital, to find out if supplementary activity was needed outside work. Data were collected from smartphones. To ensure continuity, data were collected only from those who had calibrated iPhones (n=10). At the teaching hospital six of the trainees walked over 10 000 steps a day while working (mean (SD) 10 004 (639)). At the district hospital none of the trainees walked 10 000 steps. The mean (SD) number of steps completed by all trainees was 6265 (119). Walking at work provides the full quota of recommended daily exercise most of the time for those working in the teaching hospital, but additional exercise is occasionally required. While working at the district hospital they walk less, meaning that they should try to increase their activity outside work. Trainees working in the teaching hospital walk significantly more steps than in the district hospital. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Fostering research aptitude among high school students through space weather competition
NASA Astrophysics Data System (ADS)
Abdullah, M.; Majid, R. A.; Bais, B.; Bahri, N. S.; Asillam, M. F.
2018-01-01
Cultivating research culture at an early stage is important for capacity building in a community. The high school level is the appropriate stage for research to be introduced because of students' competitive nature. Participation in the space weather competition is one of the ways in which research aptitude can be fostered in high school students in Malaysia. Accordingly, this paper presents how research elements were introduced to the students at the high school level through their participation in the space weather competition. The competition required the students to build a system to detect the presence of solar flares by utilizing VLF signals reflected from the ionosphere. The space weather competition started off with proposal writing for the space weather related project, for which the students were required to carry out an extensive literature review on the given topic. Additionally, the students were also required to conduct the experiments and analyse the data. Results obtained from data analysis were then validated by the students through various other observations that they had to carry out. At the end of the competition, students were expected to write a comprehensive technical report. Through this competition, the students learnt how to conduct research in accordance with the guidelines provided, through the step-by-step approach presented to them. Ultimately, this project revealed that the students were able to conduct research on their own with minimal guidance and that participation in the competition not only generated enjoyment in learning but also stimulated their interest in science and research.
Laforteza, Brian N.; Pickworth, Mark
2014-01-01
More cycling–fewer steps The first enantioselective total synthesis of (−)-minovincine has been accomplished in nine chemical steps and 13% overall yield. A novel, one-step Diels–Alder/β-elimination/conjugate addition organocascade sequence allowed rapid access to the central tetracyclic core in an asymmetric manner. PMID:24000234
46 CFR 171.073 - Treatment of stepped and recessed bulkheads in Type II subdivision.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 7 2010-10-01 2010-10-01 false Treatment of stepped and recessed bulkheads in Type II... Treatment of stepped and recessed bulkheads in Type II subdivision. (a) A main transverse watertight bulkhead may not be stepped unless additional watertight bulkheads are located as shown in Figure 171.067(a...
Wagner, David W; Reed, Matthew P; Chaffin, Don B
2010-11-01
Accurate prediction of foot placements in relation to hand locations during manual materials handling tasks is critical for prospective biomechanical analysis. To address this need, the effects of lifting task conditions and anthropometric variables on foot placements were studied in a laboratory experiment. In total, 20 men and women performed two-handed object transfers that required them to walk to a shelf, lift an object from the shelf at waist height and carry the object to a variety of locations. Five different changes in the direction of progression following the object pickup were used, ranging from 45° to 180° relative to the approach direction. Object weights of 1.0 kg, 4.5 kg, 13.6 kg were used. Whole-body motions were recorded using a 3-D optical retro-reflective marker-based camera system. A new parametric system for describing foot placements, the Quantitative Transition Classification System, was developed to facilitate the parameterisation of foot placement data. Foot placements chosen by the subjects during the transfer tasks appeared to facilitate a change in the whole-body direction of progression, in addition to aiding in performing the lift. Further analysis revealed that five different stepping behaviours accounted for 71% of the stepping patterns observed. More specifically, the most frequently observed behaviour revealed that the orientation of the lead foot during the actual lifting task was primarily affected by the amount of turn angle required after the lift (R(2) = 0.53). One surprising result was that the object mass (scaled by participant body mass) was not found to significantly affect any of the individual step placement parameters. Regression models were developed to predict the most prevalent step placements and are included in this paper to facilitate more accurate human motion simulations and ergonomics analyses of manual material lifting tasks. 
STATEMENT OF RELEVANCE: This study proposes a method for parameterising the steps (foot placements) associated with manual material handling tasks. The influence of task conditions and subject anthropometry on the foot placements of the most frequently observed stepping pattern during a laboratory study is discussed. For prospective postural analyses conducted using digital human models, accurate prediction of the foot placements is critical to realistic postural analyses and improved biomechanical job evaluations.
Almutairy, B K; Alshetaili, A S; Ashour, E A; Patil, H; Tiwari, R V; Alshehri, S M; Repka, M A
2016-03-01
The present study aimed to develop a continuous single-step manufacturing platform to prepare a porous, low-density, and floating multi-particulate system (mini-tablet, 4 mm size). This process involves injecting inert, non-toxic pressurized CO₂ gas (P-CO₂) in zone 4 of a 16-mm hot-melt extruder (HME) to continuously generate pores throughout the carrier matrix. Unlike conventional methods for preparing floating drug delivery systems, additional chemical excipients and additives are not needed in this approach to create minute openings on the surface of the matrices. The buoyancy efficiency of the prepared floating system (injection of P-CO₂) in terms of lag time (0 s) significantly improved (P < 0.05), compared to the formulation prepared by adding the excipient sodium bicarbonate (lag time 120 s). The main advantages of this novel manufacturing technique include: (i) no additional chemical excipients need to be incorporated in the formulation, (ii) few manufacturing steps are required, (iii) high buoyancy efficiency is attained, and (iv) the extrudate is free of toxic solvent residues. Floating mini-tablets containing acetaminophen (APAP) as a model drug within the matrix-forming carrier (Eudragit® RL PO) have been successfully processed via this combined technique (P-CO₂/HME). The desired controlled release profile of APAP from the polymer Eudragit® RL PO is attained in the optimized formulation, which remains buoyant on the surface of gastric fluids prior to gastric emptying (4 h on average).
Owens, Douglas K; Qaseem, Amir; Chou, Roger; Shekelle, Paul
2011-02-01
Health care costs in the United States are increasing unsustainably, and further efforts to control costs are inevitable and essential. Efforts to control expenditures should focus on the value, in addition to the costs, of health care interventions. Whether an intervention provides high value depends on assessing whether its health benefits justify its costs. High-cost interventions may provide good value because they are highly beneficial; conversely, low-cost interventions may have little or no value if they provide little benefit. Thus, the challenge becomes determining how to slow the rate of increase in costs while preserving high-value, high-quality care. A first step is to decrease or eliminate care that provides no benefit and may even be harmful. A second step is to provide medical interventions that provide good value: medical benefits that are commensurate with their costs. This article discusses 3 key concepts for understanding how to assess the value of health care interventions. First, assessing the benefits, harms, and costs of an intervention is essential to understand whether it provides good value. Second, assessing the cost of an intervention should include not only the cost of the intervention itself but also any downstream costs that occur because the intervention was performed. Third, the incremental cost-effectiveness ratio estimates the additional cost required to obtain additional health benefits and provides a key measure of the value of a health care intervention.
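The incremental cost-effectiveness ratio described above is a simple quotient of cost and benefit differences; the sketch below uses invented costs and QALY gains purely for illustration:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: additional cost required per
    additional unit of health benefit (e.g. per quality-adjusted life-year,
    QALY) when replacing the old intervention with the new one."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical numbers: a new therapy costs $60,000 and yields 4.5 QALYs;
# standard care costs $20,000 and yields 4.0 QALYs.
ratio = icer(60_000, 4.5, 20_000, 4.0)  # dollars per additional QALY
```

Note that the cost inputs should include downstream costs triggered by the intervention, not just the intervention's own price, as the article emphasizes.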
Specific arithmetic calculation deficits in children with Turner syndrome.
Rovet, J; Szekely, C; Hockenberry, M N
1994-12-01
Study 1 compared arithmetic processing skills on the WRAT-R in 45 girls with Turner syndrome (TS) and 92 age-matched female controls. Results revealed significant underachievement by subjects with TS, which reflected their poorer performance on problems requiring the retrieval of addition and multiplication facts and procedural knowledge for addition and division operations. TS subjects did not differ qualitatively from controls in type of procedural error committed. Study 2, which compared the performance of 10 subjects with TS and 31 controls on the Keymath Diagnostic Arithmetic Test, showed that the TS group had less adequate knowledge of addition, subtraction, and multiplication procedures but did not differ from controls on Fact items. Error analyses revealed that TS subjects were more likely to confuse component steps or fail to separate intermediate steps or to complete problems. TS subjects relied to a greater degree on verbal than visual-spatial abilities in arithmetic processing while their visual-spatial abilities were associated with retrieval of simple multidigit addition facts and knowledge of subtraction, multiplication, and division procedures. Differences between the TS and control groups increased with age for Keymath, but not WRAT-R, procedures. Discrepant findings are related to the different task constraints (timed vs. untimed, single vs. alternate versions, size of item pool) and the use of different strategies (counting vs. fact retrieval). It is concluded that arithmetic difficulties in females with TS are due to less adequate procedural skills, combined with poorer fact retrieval in timed testing situations, rather than to inadequate visual-spatial abilities.
Impact of modellers' decisions on hydrological a priori predictions
NASA Astrophysics Data System (ADS)
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2014-06-01
In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their predictions over three steps, with additional information provided before each step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed widely. Hence, they encountered two problems: (i) to simulate discharge for an ungauged catchment and (ii) to use models that were developed for catchments that are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information.
For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of added information. In this qualitative analysis of a statistically small number of predictions we learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as adding data for improving parameters needed to satisfy model requirements.
Takayanagi, Naoto; Sudo, Motoki; Fujii, Masahiko; Sakai, Hirokazu; Morimoto, Keiko; Tomisaki, Masumi; Niki, Yoshifumi; Tokimitsu, Ichiro
2018-03-01
[Purpose] This study evaluated gait parameters and foot pressure in two regions of the feet among older females with different personal care support needs to analyze factors that contribute to higher support requirements. [Subjects and Methods] Thirty-two older females were divided into support-need and care-need level groups. Gait parameters (speed, cadence, step length, step width, gait angle, toe angle, double support phase, swing phase, and stance phase) and foot pressure during a 5-m walk were measured and analyzed in the two groups. [Results] The percentage of the double support phase on both feet and the right stance phase were significantly higher in the care-need level group, while that of the right swing phase was significantly lower than that of the support-need level group. Additionally, the phase showing peak pressure on the left rear foot was significantly delayed and the left forefoot pressure in the terminal stance was significantly lower in the care-need level group than in the support-need level group. [Conclusion] These findings show that the temporal duration parameters and foot pressure on a particular side were significantly different between the two groups and suggest that these differences were associated with a higher care level.
Forging a morphological system out of two dimensions: Agentivity and number
Horton, L.; Goldin-Meadow, S.; Coppola, M.; Senghas, A.; Brentari, D.
2015-01-01
Languages have diverse strategies for marking agentivity and number. These strategies are negotiated to create combinatorial systems. We consider the emergence of these strategies by studying features of movement in a young sign language in Nicaragua (NSL). We compare two age cohorts of Nicaraguan signers (NSL1 and NSL2), adult homesigners in Nicaragua (deaf individuals creating a gestural system without linguistic input), signers of American and Italian Sign Languages (ASL and LIS), and hearing individuals asked to gesture silently. We find that all groups use movement axis and repetition to encode agentivity and number, suggesting that these properties are grounded in action experiences common to all participants. We find another feature – unpunctuated repetition – in the sign systems (ASL, LIS, NSL, Homesign) but not in silent gesture. Homesigners and NSL1 signers use the unpunctuated form, but limit its use to No-Agent contexts; NSL2 signers use the form across No-Agent and Agent contexts. A single individual can thus construct a marker for number without benefit of a linguistic community (homesign), but generalizing this form across agentive conditions requires an additional step. This step does not appear to be achieved when a linguistic community is first formed (NSL1), but requires transmission across generations of learners (NSL2). PMID:26740937
Moriggi, Giulia; Nieto, Blanca; Dosil, Mercedes
2014-12-01
During the biogenesis of small ribosomal subunits in eukaryotes, the pre-40S particles formed in the nucleolus are rapidly transported to the cytoplasm. The mechanisms underlying the nuclear export of these particles and its coordination with other biogenesis steps are mostly unknown. Here we show that yeast Rrp12 is required for the exit of pre-40S particles to the cytoplasm and for proper maturation dynamics of upstream 90S pre-ribosomes. Due to this, in vivo elimination of Rrp12 leads to an accumulation of nucleoplasmic 90S to pre-40S transitional particles, abnormal 35S pre-rRNA processing, delayed elimination of processing byproducts, and no export of intermediate pre-40S complexes. The exportin Crm1 is also required for the same pre-ribosome maturation events that involve Rrp12. Thus, in addition to their implication in nuclear export, Rrp12 and Crm1 participate in earlier biosynthetic steps that take place in the nucleolus. Our results indicate that, in the 40S subunit synthesis pathway, the completion of early pre-40S particle assembly, the initiation of byproduct degradation and the priming for nuclear export occur in an integrated manner in late 90S pre-ribosomes.
Tributyltin-induced apoptosis requires glycolytic adenosine trisphosphate production.
Stridh, H; Fava, E; Single, B; Nicotera, P; Orrenius, S; Leist, M
1999-10-01
The toxicity of tributyltin chloride (TBT) involves Ca(2+) overload, cytoskeletal damage, and mitochondrial failure leading to cell death by apoptosis or necrosis. Here, we examined whether the intracellular ATP level modulates the mode of cell death after exposure to TBT. When Jurkat cells were energized by the mitochondrial substrate, pyruvate, low concentrations of TBT (1-2 microM) triggered an immediate depletion of intracellular ATP followed by necrotic death. When ATP levels were maintained by the addition of glucose, the mode of cell death was typically apoptotic. Glycolytic ATP production was required for apoptosis at two distinct steps. First, maintenance of adequate ATP levels accelerated the decrease of mitochondrial membrane potential, and the release of the intermembrane proteins adenylate kinase and cytochrome c from mitochondria. A possible role of the adenine nucleotide exchanger in this first ATP-dependent step is suggested by experiments performed with the specific inhibitor, bongkrekic acid. This substance delayed cytochrome c release in a manner similar to that caused by ATP depletion. Second, caspase activation following cytochrome c release was only observed in ATP-containing cells. Bcl-2 had only a minor effect on TBT-triggered caspase activation or cell death. We conclude that intracellular ATP concentrations control the mode of cell death in TBT-treated Jurkat cells at both the mitochondrial and caspase activation levels.
Modifications to the Conduit Flow Process Mode 2 for MODFLOW-2005
Reimann, T.; Birk, S.; Rehrl, C.; Shoemaker, W.B.
2012-01-01
As a result of rock dissolution processes, karst aquifers exhibit highly conductive features such as caves and conduits. Within these structures, groundwater flow can become turbulent and therefore be described by nonlinear gradient functions. Some numerical groundwater flow models explicitly account for pipe hydraulics by coupling the continuum model with a pipe network that represents the conduit system. In contrast, the Conduit Flow Process Mode 2 (CFPM2) for MODFLOW-2005 approximates turbulent flow by reducing the hydraulic conductivity within the existing linear head gradient of the MODFLOW continuum model. This approach reduces the practical as well as numerical efforts for simulating turbulence. The original formulation was for large pore aquifers where the onset of turbulence is at low Reynolds numbers (1 to 100) and not for conduits or pipes. In addition, the existing code requires multiple time steps for convergence due to iterative adjustment of the hydraulic conductivity. Modifications to the existing CFPM2 were made by implementing a generalized power function with a user-defined exponent. This allows for matching turbulence in porous media or pipes and eliminates the time steps required for iterative adjustment of hydraulic conductivity. The modified CFPM2 successfully replicated simple benchmark test problems. © 2011 The Author(s). Ground Water © 2011, National Ground Water Association.
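The conductivity-reduction idea can be sketched as below. This is a schematic reading of the approach, not the CFPM2 source code: the critical Reynolds number threshold and the specific power-law form K_eff = K · (Re_c/Re)^m with a user-defined exponent m are assumptions chosen for illustration.

```python
def effective_conductivity(K, reynolds, re_critical=100.0, exponent=0.5):
    """Reduce laminar hydraulic conductivity K once flow turns turbulent.

    Below the critical Reynolds number, Darcian (linear) flow applies and
    K is returned unchanged; above it, K is scaled down by a power law so
    that the head-gradient relation mimics nonlinear (turbulent) behaviour
    within a linear-gradient continuum model. The exponent is user-defined,
    allowing the onset and strength of the reduction to be matched to
    porous media or to pipes.
    """
    if reynolds <= re_critical:
        return K
    return K * (re_critical / reynolds) ** exponent

K = 1.0e-2  # m/s, laminar conductivity of a conduit cell (made-up value)
laminar = effective_conductivity(K, reynolds=50.0)
turbulent = effective_conductivity(K, reynolds=400.0)
```

Because the reduction is a closed-form function of the Reynolds number, no iterative adjustment of the conductivity (and hence no extra convergence time steps) is needed.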
Rackauckas, Christopher; Nie, Qing
2017-01-01
Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
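The stack-based idea behind Rejection Sampling with Memory can be sketched as follows. This is a simplified illustration of the principle (splitting a rejected Brownian increment with a bridge and storing the unused future part for reuse); the actual RSwM algorithm handles nested rejections and bookkeeping more carefully, and all names here are ours.

```python
import random

def brownian_bridge(dW, dt, t_split):
    """Sample W(t_split) given W(dt) = dW over [0, dt]
    (standard Brownian bridge conditional distribution)."""
    frac = t_split / dt
    mean = frac * dW
    var = t_split * (dt - t_split) / dt
    return random.gauss(mean, var ** 0.5)

def reject_step(stack, dt, dW, dt_new):
    """On step rejection, split the increment at the smaller step dt_new
    and push the unused future part onto the stack for later reuse,
    so no information about the Brownian path is lost."""
    dW_left = brownian_bridge(dW, dt, dt_new)
    stack.append((dt - dt_new, dW - dW_left))
    return dt_new, dW_left

def sample_increment(stack, dt):
    """Reuse a stored future increment if one matches the requested
    interval; otherwise draw fresh noise (simplified lookup)."""
    if stack and abs(stack[-1][0] - dt) < 1e-12:
        return stack.pop()[1]
    return random.gauss(0.0, dt ** 0.5)
```

As the abstract notes, the per-rejection cost is only a few floating point operations plus a stack push, which is what makes the adaptivity cheap.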
Rackauckas, Christopher
2017-01-01
Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs. PMID:29527134
NASA Astrophysics Data System (ADS)
Spencer, Harvey
2002-09-01
Helicopter mounted optical systems require compact packaging, good image performance (approaching the diffraction-limit), and must survive and operate in a rugged shock and thermal environment. The always-present requirement for low weight in an airborne sensor is paramount when considering the optical configuration. In addition, the usual list of optical requirements which must be satisfied within narrow tolerances, including field-of-view, vignetting, boresight, stray light rejection, and transmittance, drives the optical design. It must be determined early in the engineering process which internal optical alignment adjustment provisions must be included, which may be included, and which will have to be omitted, since adding alignment features often conflicts with the requirement for optical component stability during operation and of course adds weight. When the system is to be modular and mates with another optical system, in this case a telescope designed by a different contractor, additional alignment requirements between the two systems must be specified and agreed upon. Final delivered cost is certainly critical and "touch labor" assembly time must be determined and controlled. A clear plan for the alignment and assembly steps must be devised before the optical design can even begin to ensure that an arrangement of optical components amenable to adjustment is reached. The optical specification document should be written contemporaneously with the alignment plan to ensure compatibility. The optics decisions that led to the success of this project are described and the final optical design is presented. Some unique pupil alignment adjustments, never before performed by us in the infrared, are also described.
Numerical investigation of internal high-speed viscous flows using a parabolic technique
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Power, G. D.
1985-01-01
A feasibility study has been conducted to assess the applicability of an existing parabolic analysis (ADD-Axisymmetric Diffuser Duct), developed previously for subsonic viscous internal flows, to mixed supersonic/subsonic flows with heat addition simulating a SCRAMJET combustor. A study was conducted with the ADD code modified to include additional convection effects in the normal momentum equation when supersonic expansion and compression waves are present. A set of test problems with weak shock and expansion waves have been analyzed with this modified ADD method and stable and accurate solutions were demonstrated provided the streamwise step size was maintained at levels larger than the boundary layer displacement thickness. Calculations made with further reductions in step size encountered departure solutions consistent with strong interaction theory. Calculations were also performed for a flow field with a flame front in which a specific heat release was imposed to simulate a SCRAMJET combustor. In this case the flame front generated relatively thick shear layers which aggravated the departure solution problem. Qualitatively correct results were obtained for these cases using a marching technique with the convective terms in the normal momentum equation suppressed. It is concluded from the present study that for the class of problems where strong viscous/inviscid interactions are present a global iteration procedure is required.
NASA Astrophysics Data System (ADS)
Xiong, Wenhao; Tian, Xin; Chen, Genshe; Pham, Khanh; Blasch, Erik
2017-05-01
Software defined radio (SDR) has become a popular tool for the implementation and testing of communications performance. The advantages of the SDR approach include: a re-configurable design, adaptive response to changing conditions, efficient development, and highly versatile implementation. In order to realize the benefits of SDR, the space telecommunication radio system (STRS) was proposed by NASA Glenn Research Center (GRC) along with the standard application program interface (API) structure. Each component of the system uses a well-defined API to communicate with other components. The benefit of a standard API is to relax the platform limitation of each component, allowing additional options. For example, the waveform generating process can run on a field programmable gate array (FPGA), personal computer (PC), or an embedded system. As long as the API requirements are met, the generated waveform will work with the complete system. In this paper, we demonstrate the design and development of an adaptive SDR following the STRS and standard API protocol. We introduce, step by step, the SDR testbed system including the controlling graphical user interface (GUI), database, GNU Radio hardware control, and universal software radio peripheral (USRP) transceiving front end. In addition, a performance evaluation is shown on the effectiveness of the SDR approach for space telecommunication.
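The role of a well-defined component API can be sketched as an abstract interface that any back end (FPGA, PC, or embedded) could implement interchangeably. This is purely illustrative of the decoupling idea; it is not the actual STRS API, and all class and parameter names are invented.

```python
import math
from abc import ABC, abstractmethod

class WaveformComponent(ABC):
    """Hypothetical component contract: the rest of the radio chain
    talks only to these methods, never to the platform underneath."""

    @abstractmethod
    def configure(self, params: dict) -> None:
        """Accept platform-independent configuration parameters."""

    @abstractmethod
    def generate(self, n_samples: int) -> list:
        """Return n_samples of baseband waveform samples."""

class PcSineWaveform(WaveformComponent):
    """Trivial PC-hosted implementation, used only for illustration."""

    def configure(self, params):
        self.freq = params.get("freq_hz", 1000.0)
        self.rate = params.get("sample_rate_hz", 48000.0)

    def generate(self, n_samples):
        return [math.sin(2 * math.pi * self.freq * n / self.rate)
                for n in range(n_samples)]
```

An FPGA- or USRP-backed class implementing the same two methods could be substituted without changing any calling code, which is the benefit the standard API structure is meant to deliver.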
A Cell Programmable Assay (CPA) chip.
Ju, Jongil; Warrick, Jay; Beebe, David J
2010-08-21
This article describes two kinds of "Cell Programmable Assay" (CPA) chips that utilize passive pumping for the culture and autonomous staining of cells to simplify common protocols. One is a single timer channel CPA (sCPA) chip that has one timer channel and one main channel containing a cell culture chamber. The sCPA is used to culture and stain cells using Hoechst nuclear staining dye (a 2-step staining process). The other is a dual timer channel CPA (dCPA) chip that has two timer channels and one main channel with a chamber for cell culture. The dCPA is used here to culture, fix, permeabilize, and stain cells using DAPI. The additional timer channel of the dCPA chip allows for automation of 3 steps. The CPA chips were successfully evaluated using HEK 293 cells. In addition, we provide a simplified equation for tuning or redesigning CPA chips to meet the needs of a variety of protocols that may require different timings. The equation is easy to use as it depends only upon the dimensions of the microchannel and the volume of the reagent drops. The sCPA and dCPA chips can be readily modified to apply to a wide variety of common cell culture methods and procedures.
Influence of prepreg characteristics on stamp consolidation
NASA Astrophysics Data System (ADS)
Slange, T. K.; Warnet, L. L.; Grouve, W. J. B.; Akkerman, R.
2017-10-01
Stamp forming is a rapid manufacturing technology used to shape flat blanks of thermoplastic composite material into three-dimensional components. The development of automated lay-up technologies further extends the applicability of stamp forming by allowing rapid lay-up of tailored blanks and partial preconsolidation. This partial preconsolidation makes the influence of the prepreg more critical compared to conventional methods which provide full preconsolidation. This paper aims to highlight consolidation challenges that can appear when stamp forming blanks manufactured by automated lay-up. Important prepreg characteristics were identified based on an experimental study in which various prepregs were compared in their as-received, deconsolidated and stamp consolidated states. It was found that small thickness variations across the width of a prepreg can add up when plies are stacked into a blank by automated lay-up, causing non-uniform consolidation. Additionally, deconsolidation of the prepreg does not seem to obstruct interlaminar bonding, while intralaminar voids initially present in a prepreg cannot be removed during stamp forming. An additional preconsolidation step after automated lay-up seems necessary to remove blank thickness variations and intralaminar voids for the current prepregs. Eliminating this process step, and thereby successfully combining rapid automated lay-up and stamp forming, requires prepregs which are void-free and have less thickness variation.
Alternative solutions for the bio-denitrification of landfill leachates using pine bark and compost.
Trois, Cristina; Pisano, Giulia; Oxarango, Laurent
2010-06-15
Nitrified leachate may still require an additional bio-denitrification step, which typically relies on the addition of often-expensive chemicals as a carbon source. This study explores the applicability of low-cost carbon sources such as garden refuse compost and pine bark for the denitrification of high strength landfill leachates. The overall objective is to assess efficiency, kinetics and performance of the substrates in the removal of high nitrate concentrations. Garden refuse and pine bark are currently disposed of in general waste landfills in South Africa, separated from the main waste stream. A secondary objective is to assess the feasibility of re-using green waste as a by-product of an integrated waste management system. Denitrification processes in fixed bed reactors were simulated at laboratory scale using anaerobic batch tests and leaching columns packed with immature compost and pine bark. Biologically treated leachate from a Sequencing Batch Reactor (SBR) with nitrate concentrations of 350, 700 and 1100 mgN/l was used for the trials. Preliminary results suggest that, once the acclimatization step (40 days for both substrates) has passed, full denitrification is achieved in 10-20 days for the pine bark and 30-40 days for the compost. Copyright 2010 Elsevier B.V. All rights reserved.
Does my step look big in this? A visual illusion leads to safer stepping behaviour.
Elliott, David B; Vale, Anna; Whitaker, David; Buckley, John G
2009-01-01
Tripping is a common factor in falls and a typical safety strategy to avoid tripping on steps or stairs is to increase foot clearance over the step edge. In the present study we asked whether the perceived height of a step could be increased using a visual illusion and whether this would lead to the adoption of a safer stepping strategy, in terms of greater foot clearance over the step edge. The study also addressed the controversial question of whether motor actions are dissociated from visual perception. 21 young, healthy subjects perceived the step to be higher in a configuration of the horizontal-vertical illusion compared to a reverse configuration (p = 0.01). During a simple stepping task, maximum toe elevation changed by an amount corresponding to the size of the visual illusion (p<0.001). Linear regression analyses showed highly significant associations between perceived step height and maximum toe elevation for all conditions. The perceived height of a step can be manipulated using a simple visual illusion, leading to the adoption of a safer stepping strategy in terms of greater foot clearance over a step edge. In addition, the strong link found between perception of a visual illusion and visuomotor action provides additional support to the view that the original, controversial proposal by Goodale and Milner (1992) of two separate and distinct visual streams for perception and visuomotor action should be re-evaluated.
Qualitative identification of permitted and non-permitted colour additives in food products.
Harp, Bhakti Petigara; Miranda-Bermudez, Enio; Baron, Carolina I; Richard, Gerald I
2012-01-01
Colour additives are dyes, pigments or other substances that can impart colour when added or applied to food, drugs, cosmetics, medical devices, or the human body. The substances must be pre-approved by the US Food and Drug Administration (USFDA) and listed in Title 21 of the US Code of Federal Regulations before they may be used in products marketed in the United States. Some also are required to be batch certified by the USFDA prior to their use. Both domestic and imported products sold in interstate commerce fall under USFDA jurisdiction, and the USFDA's district laboratories use a combination of analytical methods for identifying or confirming the presence of potentially violative colour additives. We have developed a qualitative method for identifying 17 certifiable, certification exempt, and non-permitted colour additives in various food products. The method involves extracting the colour additives from a product and isolating them from non-coloured components with a C(18) Sep-Pak cartridge. The colour additives are then separated and identified by liquid chromatography (LC) with photodiode array detection, using an Xterra RP18 column and gradient elution with aqueous ammonium acetate and methanol. Limits of detection (LODs) ranged from 0.02 to 1.49 mg/l. This qualitative LC method supplements the visible spectrophotometric and thin-layer chromatography methods currently used by the USFDA's district laboratories and is less time-consuming and requires less solvent than the other methods. The extraction step in the new LC method is a simple and efficient process that can be used for most food types.
NASA Astrophysics Data System (ADS)
Kluge, S.; Goodwillie, A. M.
2012-12-01
As STEM learning requirements enter the mainstream, there is benefit to providing the tools necessary for students to engage with research-quality geoscience data in a cutting-edge, easy-to-use map-based interface. Funded with an NSF GeoEd award, GeoMapApp Learning Activities ( http://serc.carleton.edu/geomapapp/collection.html ) are being created to help in that endeavour. GeoMapApp Learning Activities offer step-by-step instructions within a guided inquiry approach that enables students to dictate the pace of learning. Based upon GeoMapApp (http://www.geomapapp.org), a free, easy-to-use map-based data exploration and visualisation tool, each activity furnishes the educator with an efficient package of downloadable documents. This includes step-by-step student instructions and an answer sheet; an educator's annotated worksheet containing teaching tips, additional content and suggestions for further work; and quizzes for use before and after the activity to assess learning. Examples of activities created so far involve calculation and analysis of the rate of seafloor spreading; compilation of present-day evidence for huge ancient landslides on the seafloor around the Hawaiian islands; a study of radiometrically-dated volcanic rocks to help understand the concept of hotspots; and the optimisation of contours as a means to aid visualisation of 3-D data sets on a computer screen. The activities are designed for students at the introductory undergraduate, community college and high school levels, and present a virtual lab-like environment to expose students to content and concepts typically found in those educational settings. The activities can be used in the classroom or out of class, and their guided nature means that the requirement for teacher intervention is reduced, thus allowing students to spend more time analysing and understanding geoscience data, content and concepts. Each activity is freely available through the SERC-Carleton web site.
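The seafloor-spreading activity mentioned above rests on simple arithmetic: the distance of a dated magnetic anomaly (or drill site) from the ridge axis divided by its age gives the half spreading rate. A minimal sketch of that calculation (the function name and unit choices are ours):

```python
def spreading_half_rate_cm_per_yr(distance_km, age_myr):
    """Half spreading rate from the distance of dated seafloor to the
    ridge axis and its age. 1 km = 1e5 cm; 1 Myr = 1e6 yr.
    The full (ridge-to-ridge) rate is twice this value for
    symmetric spreading."""
    return (distance_km * 1.0e5) / (age_myr * 1.0e6)
```

For example, seafloor dated at 10 Myr found 100 km from the axis implies a half rate of 1 cm/yr, a typical slow-spreading value.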
Courtade-Saïdi, Monique; Fleury Feith, Jocelyne
2015-10-01
The pre-analytical step includes sample collection, preparation, transportation and storage in the pathology unit where the diagnosis is performed. The pathologist ensures that pre-analytical conditions are in line with expectations. The lack of standardization for handling cytological samples makes this pre-analytical step difficult to harmonize. Moreover, this step depends on the nature of the sample: fresh liquid or fixed material, air-dried smears, liquid-based cytology. The aim of the study was to review practices in French pathology structures during the pre-analytical phase for cytological fluids such as broncho-alveolar lavage fluid (BALF), serous fluids and urine. A survey based on the pre-analytical chapter of ISO 15189 was sent to 191 French pathology structures (105 public and 86 private). Fifty-six laboratories replied to the survey. Ninety-five per cent have a computerized management system and 70% a manual on sample handling. The general instructions on patient and sample identification were largely followed, with short routing times and prescription of additional tests. By contrast, practices vary concerning the clinical information requested, the type of tubes for collecting fluids, the volumes required, and the actions taken in case of non-conformity. For the specific items concerning BALF, serous fluids and urine, the survey showed great heterogeneity in sample collection, fixation and clinical information. This survey demonstrates that pre-analytical quality for BALF, serous fluids and urine is not optimal and that some practices should be corrected, with standardization of numerous steps, in order to increase the reproducibility of additional tests such as immunocytochemistry, cytogenetics and molecular biology. Some recommendations have been written. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
NASA Technical Reports Server (NTRS)
Spiering, Bruce; Underwood, Lauren; Ellis, Chris; Lehrter, John; Hagy, Jim; Schaeffer, Blake
2010-01-01
The goals of the project are to provide information from satellite remote sensing to support numeric nutrient criteria development and to determine data processing methods and data quality requirements to support nutrient criteria development and implementation. The approach is to identify water quality indicators that are used by decision makers to assess water quality and that are related to optical properties of the water; to develop remotely sensed data products based on algorithms relating remote sensing imagery to field-based observations of indicator values; to develop methods to assess estuarine water quality, including trends, spatial and temporal variability, and seasonality; and to develop tools to assist in the development and implementation of estuarine and coastal nutrient criteria. Additional slides present process, criteria development, typical data sources and analyses for criteria process, the power of remote sensing data for the process, examples from Pensacola Bay, spatial and temporal variability, pixel matchups, remote sensing validation, remote sensing in coastal waters, requirements for remotely sensed data products, and needs assessment. An additional presentation examines group engagement and information collection. Topics include needs assessment purpose and objectives, understanding water quality decision making, determining information requirements, and next steps.
Additional support for the TDK/MABL computer program
NASA Technical Reports Server (NTRS)
Nickerson, G. R.; Dunn, Stuart S.
1993-01-01
An advanced version of the Two-Dimensional Kinetics (TDK) computer program was developed under contract and released to the propulsion community in early 1989. Exposure of the code to this community indicated a need for improvements in certain areas. In particular, the TDK code needed to be adapted to the special requirements imposed by the Space Transportation Main Engine (STME) development program. This engine utilizes injection of the gas generator exhaust into the primary nozzle by means of a set of slots. The subsequent mixing of this secondary stream with the primary stream with finite rate chemical reaction can have a major impact on the engine performance and the thermal protection of the nozzle wall. In attempting to calculate this reacting boundary layer problem, the Mass Addition Boundary Layer (MABL) module of TDK was found to be deficient in several respects. For example, when finite rate chemistry was used to determine gas properties (the MABL-K option), program run times became excessive because extremely small step sizes were required to maintain numerical stability. A robust solution algorithm was required so that the MABL-K option could be viable as a rocket propulsion industry design tool. Solving this problem was a primary goal of the Phase 1 work effort.
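The stability problem described above (stiff finite-rate chemistry forcing extremely small explicit steps) can be illustrated with a scalar model problem. This is a generic sketch of why a robust, implicit-type algorithm helps, not the actual MABL solution method:

```python
def explicit_euler_decay(lam, dt, steps, y0=1.0):
    """Explicit Euler on y' = -lam*y: update factor (1 - lam*dt)
    diverges whenever lam*dt > 2, forcing tiny steps for stiff lam."""
    y = y0
    for _ in range(steps):
        y += dt * (-lam * y)
    return y

def implicit_euler_decay(lam, dt, steps, y0=1.0):
    """Implicit Euler: solving y_new = y + dt*(-lam*y_new) gives the
    update factor 1/(1 + lam*dt), stable for any positive step size."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + lam * dt)
    return y
```

With lam = 100 and dt = 0.1 the explicit scheme blows up while the implicit scheme decays smoothly toward the true solution, mirroring why stiff reacting-flow terms demand more robust integration than a simple explicit march.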
Proposed Model for Translational Research at a Teaching-Intensive College of Pharmacy.
Ulrich, Erin; Grady, Sarah; Vonderhaar, Jacqueline; Ruplin, Andrew
2017-08-08
Many American colleges of pharmacy are small, private, teaching institutions. Faculty are required to maintain a research agenda, although the publication quota is lower compared with their publicly funded college of pharmacy peers. Faculty at these smaller schools conduct research with very little internal or external funding. This tends to lead to smaller, less impactful research findings. Translational research is becoming popular for research faculty as it bridges theory to practice. The Knowledge-to-Action (KTA) framework presents the steps to conduct translational research. The objective was to apply the KTA framework and determine whether it could produce practice-impactful research at an institution that does not depend on grant funding as part of faculty research agendas. An interdisciplinary team was formed with providers at the clinical faculty's practice site. As the team moved through the KTA steps, the authors documented the roles of each team member. It was clear that many different types of teams were formed throughout the KTA process. These teams were then categorized according to the Interdisciplinary Teamwork System. The final result is a proposed model of the types of teams and required member roles that are necessary within each KTA step for faculty to conduct practice-impactful research at a small, private, teaching institution without substantial grant funding awards. Applying the KTA framework, two impactful original research manuscripts were developed over two academic years. Furthermore, the practitioners at the clinical faculty member's site were very pleased with the ease of conducting research, as they were never required to take a lead role. In addition, both faculty members alternated lead and support roles, allowing for a decreased workload while producing theory-driven research. The KTA framework can create a model for translational research and may be particularly beneficial to small teaching institutions seeking to conduct impactful research.
Copyright © 2017. Published by Elsevier Inc.
Standardisation of costs: the Dutch Manual for Costing in economic evaluations.
Oostenbrink, Jan B; Koopmanschap, Marc A; Rutten, Frans F H
2002-01-01
The lack of a uniform costing methodology is often considered a weakness of economic evaluations that hinders the interpretation and comparison of studies. Standardisation is therefore an important topic within the methodology of economic evaluations and in national guidelines that formulate the formal requirements for studies to be considered when deciding on the reimbursement of new medical therapies. Recently, the Dutch Manual for Costing: Methods and Standard Costs for Economic Evaluations in Health Care (further referred to as "the manual") has been published, in addition to the Dutch guidelines for pharmacoeconomic research. The objectives of this article are to describe the main content of the manual and to discuss some key issues of the manual in relation to the standardisation of costs. The manual introduces a six-step procedure for costing. These steps concern: (1) the scope of the study; (2) the choice of cost categories; (3) the identification of units; (4) the measurement of resource use; (5) the monetary valuation of units; and (6) the calculation of unit costs. Each step consists of a number of choices and these together define the approach taken. In addition to a description of the costing process, five key issues regarding the standardisation of costs are distinguished. These are the use of basic principles, methods for measurement and valuation, standard costs (average prices of healthcare services), standard values (values that can be used within unit cost calculations), and the reporting of outcomes. The use of the basic principles, standard values and minimal requirements for reporting outcomes, as defined in the manual, are obligatory in studies that support submissions to acquire reimbursement for new pharmaceuticals. Whether to use standard costs, and the choice of a particular method to measure or value costs, is left mainly to the investigator, depending on the specific study setting.
In conclusion, several instruments are available to increase standardisation in costing methodology among studies. These instruments have to be used in such a way that a balance is found between standardisation and the specific setting in which a study is performed. The way in which the Dutch manual tries to reach this balance can serve as an illustration for other countries.
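The final step of the six-step procedure, the calculation of unit costs, can be illustrated with a toy computation: valued cost categories are summed and divided by the annual production volume of the service. The category names and figures below are invented for illustration and are not taken from the manual.

```python
def unit_cost(cost_categories: dict, annual_units: int) -> float:
    """Sketch of the final costing step: total annual costs attributed
    to a service (after identification, measurement and monetary
    valuation of units) divided by annual production volume."""
    return sum(cost_categories.values()) / annual_units

# Hypothetical example: cost per inpatient day for a ward producing
# 5,000 patient days per year.
categories = {
    "staff": 600_000.0,      # measured resource use x valued wages
    "materials": 150_000.0,
    "overhead": 250_000.0,   # allocated per the chosen scope
}
cost_per_day = unit_cost(categories, 5_000)
```

Standard costs, in the manual's sense, would replace such locally calculated unit costs with published average prices of healthcare services.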
Cooper, A; Converse, C A
1976-07-13
A sensitive technique for the direct calorimetric determination of the energetics of photochemical reactions under low levels of illumination, and its application to the study of primary processes in visual excitation, are described. Enthalpies are reported for various steps in the bleaching of rhodopsin in intact rod outer segment membranes, together with the heats of appropriate model reactions. Protonation changes are also determined calorimetrically by use of buffers with differing heats of proton ionization. Bleaching of rhodopsin is accompanied by significant uptake of heat energy, vastly in excess of the energy required for simple isomerization of the retinal chromophore. Metarhodopsin I formation involves the uptake of about 17 kcal/mol and no net change in proton ionization of the system. Formation of metarhodopsin II requires an additional energy of about 10 kcal/mol and involves the uptake of one hydrogen ion from solution. The energetics of the overall photolysis reaction, rhodopsin → opsin + all-trans-retinal, are pH dependent and involve the exposure of an additional titrating group on opsin. This group has a heat of proton ionization of about 12 kcal/mol, characteristic of a primary amine, but a pKa in the region of neutrality. We suggest that this group is the Schiff base lysine of the chromophore binding site of rhodopsin which becomes exposed on photolysis. The low pKa for this active lysine would result in a more stable retinal-opsin linkage, and might be induced by a nearby positively charged group on the protein (either arginine or a second lysine residue). This leads to a model involving intramolecular protonation of the Schiff base nitrogen in the retinal-opsin linkage of rhodopsin, which is consistent with the thermodynamic and spectroscopic properties of the system.
We further propose that the metarhodopsin I → metarhodopsin II step in the bleaching sequence involves reversible hydrolysis of the Schiff base linkage in the chromophore binding site, and that subsequent steps are the result of migration of the chromophore from this site.
Li, Yunyi; Cundy, Andrew B; Feng, Jingxuan; Fu, Hang; Wang, Xiaojing; Liu, Yangsheng
2017-05-01
Large amounts of chromite ore processing residue (COPR) wastes have been deposited in many countries worldwide, generating significant contamination issues from the highly mobile and toxic hexavalent chromium species (Cr(VI)). In this study, sodium dithionite (Na2S2O4) was used to reduce Cr(VI) to Cr(III) in COPR containing high available Fe, and then sodium phosphate (Na3PO4) was utilized to further immobilize Cr(III), via a two-step procedure (TSP). Remediation and immobilization processes and mechanisms were systematically investigated using batch experiments, sequential extraction studies, X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS). Results showed that Na2S2O4 effectively reduced Cr(VI) to Cr(III), catalyzed by Fe(III). The subsequent addition of Na3PO4 further immobilized Cr(III) by the formation of crystalline CrPO4·6H2O. However, addition of Na3PO4 simultaneously with Na2S2O4 (via a one-step procedure, OSP) impeded Cr(VI) reduction due to the competitive reaction of Na3PO4 and Na2S2O4 with Fe(III). Thus, the remediation efficiency of the TSP was much higher than that of the corresponding OSP. Using an optimal dosage in the two-step procedure (Na2S2O4 at a dosage of 12× the stoichiometric requirement for 15 days, and then Na3PO4 in a molar ratio (i.e. Na3PO4 : initial Cr(VI)) of 4:1 for another 15 days), the total dissolved Cr in the leachate determined via Toxicity Characteristic Leaching Procedure (TCLP Cr) testing of our samples was reduced to 3.8 mg/L (from an initial TCLP Cr of 112.2 mg/L, i.e. at >96% efficiency). Copyright © 2017 Elsevier Ltd. All rights reserved.
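The reported optimal dosing (dithionite at 12× the stoichiometric requirement, then phosphate at a 4:1 molar ratio to initial Cr(VI)) reduces to straightforward molar arithmetic. A sketch of that calculation follows; the stoichiometric ratio of 1.5 mol dithionite per mol Cr(VI) is our assumption from a common redox balance and should be verified against the actual chemistry of the system.

```python
# Molar masses, g/mol
M_NA2S2O4 = 174.1
M_NA3PO4 = 163.9
M_CR = 52.0

def reagent_doses(cr6_mg_per_kg, stoich_mol_ratio=1.5,
                  dithionite_excess=12.0, phosphate_ratio=4.0):
    """Reagent doses in grams per kg of COPR for the two-step procedure.

    stoich_mol_ratio: assumed mol Na2S2O4 per mol Cr(VI) from the
    redox balance (an assumption, verify for your system).
    dithionite_excess (12x) and phosphate_ratio (4:1) are the optimal
    values reported in the abstract.
    """
    mol_cr = cr6_mg_per_kg / 1000.0 / M_CR  # mol Cr(VI) per kg COPR
    g_dithionite = mol_cr * stoich_mol_ratio * dithionite_excess * M_NA2S2O4
    g_phosphate = mol_cr * phosphate_ratio * M_NA3PO4
    return g_dithionite, g_phosphate
```

For COPR containing, say, 5,200 mg/kg Cr(VI) (0.1 mol/kg), this gives roughly 313 g dithionite and 66 g phosphate per kg of residue under the stated assumptions.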
NASA Astrophysics Data System (ADS)
Viger, R. J.; Van Beusekom, A. E.
2016-12-01
The treatment of glaciers in modeling requires information about their shape and extent. This presentation discusses new methods and their application in a new glacier-capable variant of the USGS PRMS model, a physically-based, spatially distributed daily time-step model designed to simulate the runoff and evolution of glaciers through time. In addition to developing parameters describing PRMS land surfaces (hydrologic response units, HRUs), several of the analyses and products are likely of interest to the cryospheric science community in general. The first method is a (fully automated) variation of logic previously presented in the literature for definition of the glacier centerline. Given that the surface of a glacier might be convex, using traditional topographic analyses based on a DEM to trace a path down the glacier is not reliable. Instead a path is derived based on a cost function. Although only a single path is presented in our results, the method can be easily modified to delineate a branched network of centerlines for each glacier. The second method extends the glacier terminus downslope by an arbitrary distance, according to local surface topography. This product can be used to explore possible, if unlikely, scenarios under which glacier area grows. More usefully, this method can be used to approximate glacier extents from previous years without needing historical imagery. The final method presents an approach for segmenting the glacier into altitude-based HRUs. Successful integration of this information with traditional approaches for discretizing the non-glacierized portions of a basin requires several additional steps. These include synthesizing the glacier centerline network with one developed with a traditional DEM analysis, ensuring that flow can be routed under and beyond glaciers to a basin outlet. Results are presented based on analysis of the Copper River Basin, Alaska.
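The cost-function approach to centerline delineation can be sketched with a generic least-cost path (Dijkstra's algorithm) on a gridded cost surface. How the authors actually build the cost surface from the DEM and glacier outline is not reproduced here; the grid below stands in for it.

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path on a 2-D cost grid (4-connected).
    For a glacier centerline, `cost` would be derived from the DEM
    and outline (e.g. low values along the medial axis), so the path
    follows the glacier even where the surface is convex."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Running this toward multiple termini from a common head, rather than a single goal, is one simple way the method could be extended to a branched centerline network.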
A Novel Surface Treatment for Titanium Alloys
NASA Technical Reports Server (NTRS)
Lowther, S. E.; Park, C.; SaintClair, T. L.
2004-01-01
High-speed commercial aircraft require a surface treatment for titanium (Ti) alloy that is both environmentally safe and durable under the conditions of supersonic flight. A number of multi-stage pretreatment procedures for Ti alloy have been developed to produce a stable surface. Among the stages are degreasing, mechanical abrasion, chemical etching, and electrochemical anodizing. These treatments exhibit significant variations in their long-term stability, and the benefits of each step in these processes still remain unclear. In addition, chromium compounds are often used in many chemical treatments, and these materials are detrimental to the environment. Recently, a chromium-free surface treatment for Ti alloy has been reported, though not designed for high-temperature applications. In the present study, a simple surface treatment process developed at NASA/LaRC is reported, offering a high-performance surface for a variety of applications. This novel surface treatment for Ti alloy is achieved by forming oxides on the surface with a two-step chemical process without mechanical abrasion. This acid-followed-by-base treatment was designed to be cost effective and relatively safe to use in a commercial application. In addition, it is chromium-free and has been successfully used with a sol-gel coating to afford a strong adhesive bond after exposure to hot-wet environments. Phenylethynyl-containing adhesives were used to evaluate this surface treatment with sol-gel solutions made of novel imide silanes developed at NASA/LaRC. Oxide layers developed by this process were controlled by immersion time, temperature, and solution concentration. The morphology and chemical composition of the oxide layers were investigated using scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and Auger electron spectroscopy (AES). Bond strengths achieved with this new treatment were evaluated using single lap shear tests.
Implementation of Energy Code Controls Requirements in New Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Hatten, Mike
Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement, and verification is beyond the expertise of most building code officials, yet studies that measure the savings from energy codes assume they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high-impact controls requirements included in commercial energy codes are properly designed, commissioned, and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy-code-required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code-required control measures are being designed, commissioned, and correctly implemented and functioning in new buildings. The third step involved compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.
Staack, Roland F; Jordan, Gregor; Heinrich, Julia
2012-02-01
For every drug development program, it must be decided whether discrimination between free and total drug concentrations is required to accurately describe the drug's pharmacokinetic behavior. This perspective describes the application of mathematical simulation approaches to guide this initial decision based on available knowledge about target biology, binding kinetics, and expected drug concentrations. We provide generic calculations that can be used to estimate the necessity of free drug quantification for different drug molecules. In addition, mathematical approaches are used to simulate various assay conditions in bioanalytical ligand-binding assays: it is demonstrated that, due to the noncovalent interaction between the binding partners and typical assay-related interferences in the equilibrium, correct quantification of the free drug concentration is highly challenging and requires careful design of the different assay procedure steps.
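Generic calculations of this kind presumably build on the 1:1 binding equilibrium. As an illustrative sketch (not the authors' actual model), the free drug concentration follows from the exact quadratic solution of the mass-balance equations; the example concentrations are hypothetical:

```python
import math

def free_drug(total_drug, total_target, kd):
    """Free drug concentration at 1:1 binding equilibrium.
    Solves C^2 - (D + T + Kd)*C + D*T = 0 for the complex C,
    then returns free = D - C. All inputs share one unit (e.g. nM)."""
    b = total_drug + total_target + kd
    complex_conc = (b - math.sqrt(b * b - 4.0 * total_drug * total_target)) / 2.0
    return total_drug - complex_conc

# hypothetical case: 100 nM drug, 10 nM target, Kd = 1 nM;
# nearly all target is occupied, so free drug is close to 90 nM
print(round(free_drug(100.0, 10.0, 1.0), 1))
```

Sweeping `total_drug` over the expected exposure range shows when free and total concentrations diverge enough to matter.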
Chhibber, Aditya; Upadhyay, Madhur
2016-11-01
Protraction of mandibular posterior teeth requiring absolute anchorage has always been a challenge, especially when the space is located in the anterior region, since more teeth must be protracted. Traditionally, skeletal anchorage devices have been used for anchorage reinforcement during protraction. However, drawbacks such as requirement of a surgical step, inability to tolerate heavy forces, and patient willingness to undergo such surgical procedures can be limiting factors. Additionally, the mechanics involved can sometimes create undesirable side effects, thereby limiting their application in such situations. This report describes the use of a fixed functional appliance as an anchorage-reinforcement device for en-masse protraction of mandibular posterior teeth into a missing lateral incisor space. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, William G.; Rios, Orlando; U
ORNL worked with Grid Logic Inc. to demonstrate micro induction sintering (MIS) and binder decomposition of steel powders. It was shown that MIS emits spatially confined electromagnetic energy that couples directly to metallic powders, resulting in resistive heating of individual particles. The non-uniformity of particle morphology and distribution of the water-atomized steel powders resulted in inefficient transfer of energy. It was shown that adhering the particles together using polymer binders resulted in more efficient coupling. Using the MIS process, debinding and sintering could be done in a single step. When combined with another system, such as a binder-jet system, this could reduce the amount of required post-processing. An invention disclosure was filed on hybrid systems that use MIS to reduce the amount of required post-processing.
Reinventing Image Detective: An Evidence-Based Approach to Citizen Science Online
NASA Astrophysics Data System (ADS)
Romano, C.; Graff, P. V.; Runco, S.
2017-12-01
Usability studies demonstrate that web users are notoriously impatient, spending as little as 15 seconds on a home page. How do you get users to stay long enough to understand a citizen science project? How do you get users to complete complex citizen science tasks online? Image Detective, a citizen science project originally developed by scientists and science engagement specialists at the NASA Johnson Space Center to engage the public in the analysis of images taken from space by astronauts and to help enhance NASA's online database of astronaut imagery, partnered with the CosmoQuest citizen science platform to modernize, offering new and improved options for participation in Image Detective. The challenge: to create a web interface that builds users' skills and knowledge, creating engagement while users learn complex concepts essential to the accurate completion of tasks. The project team turned to usability testing for an objective understanding of how users perceived Image Detective and the steps needed to complete its required tasks. A group of six users was recruited online for unmoderated, initial testing. The users followed a think-aloud protocol while attempting tasks, and were recorded on video and audio.
The usability test examined users' perception of four broad areas: the purpose of and context for Image Detective; the steps required to successfully complete the analysis (differentiating images of Earth's surface from those showing outer space and identifying common surface features); locating the image center point on a map of Earth; and finally, naming geographic locations or natural events seen in the image. Usability test findings demonstrated that the following best practices can increase participation in Image Detective and can be applied to the successful implementation of any citizen science project: (1) a concise explanation of the project, its context, and its purpose; (2) a mention of the funding agency (in this case, NASA); (3) a preview of the specific tasks required of participants; and (4) a dedicated user interface for the actual citizen science interaction. In addition, testing revealed that users may require additional context when a task is complex, difficult, or unusual (locating a specific image and its center point on a map of Earth). Video evidence will be made available with this presentation.
Proton pump inhibitor step-down therapy for GERD: A multi-center study in Japan
Tsuzuki, Takao; Okada, Hiroyuki; Kawahara, Yoshiro; Takenaka, Ryuta; Nasu, Junichiro; Ishioka, Hidehiko; Fujiwara, Akiko; Yoshinaga, Fumiya; Yamamoto, Kazuhide
2011-01-01
AIM: To investigate the predictors of success in step-down of proton pump inhibitor therapy and to assess quality of life (QOL). METHODS: Patients who had heartburn twice a week or more were treated with 20 mg omeprazole (OPZ) once daily for 8 wk as initial therapy (study 1). Patients whose heartburn decreased to once a week or less by the end of the initial therapy were enrolled in study 2 and treated with 10 mg OPZ as maintenance therapy for an additional 6 mo (study 2). QOL was investigated using the gastrointestinal symptom rating scale (GSRS) before initial therapy, after both 4 and 8 wk of initial therapy, and at 1, 2, 3, and 6 mo after starting maintenance therapy. RESULTS: In study 1, 108 patients were analyzed. Their characteristics were as follows: median age, 63 (range: 20-88) years; sex, 46 women and 62 men. The success rate of the initial therapy was 76%. In the patients with successful initial therapy, the abdominal pain, indigestion, and reflux GSRS scores improved. In study 2, 83 patients were analyzed. Seventy of the 83 patients completed the study 2 protocol. In the per-protocol analysis, 80% of the 70 patients achieved successful step-down. On multivariate analysis of baseline demographic data and clinical information, no previous treatment for gastroesophageal reflux disease (GERD) [odds ratio (OR) 0.255, 95% CI: 0.06-0.98] and a lower indigestion score in the GSRS at the beginning of step-down therapy (OR 0.214, 95% CI: 0.06-0.73) were found to be predictors of successful step-down therapy. The GSRS scores improved by initial therapy were maintained through the step-down therapy. CONCLUSION: OPZ was effective for most GERD patients. However, those who have had previous treatment for GERD and experience dyspepsia before step-down require particular monitoring for relapse. PMID:21472108
Physical modeling of vortical cross-step flow in the American paddlefish, Polyodon spathula
Brooks, Hannah; Haines, Grant E.; Lin, M. Carly
2018-01-01
Vortical cross-step filtration in suspension-feeding fish has been reported recently as a novel mechanism, distinct from other biological and industrial filtration processes. Although crossflow passing over backward-facing steps generates vortices that can suspend, concentrate, and transport particles, the morphological factors affecting this vortical flow have not been identified previously. In our 3D-printed models of the oral cavity of ram suspension-feeding fish, the angle of the backward-facing step with respect to the model's dorsal midline significantly affected vortex parameters, including rotational, tangential, and axial speeds. These vortices were comparable to those quantified downstream of the backward-facing steps formed by the branchial arches of preserved American paddlefish in a recirculating flow tank. Our data indicate that vortices in cross-step filtration have the characteristics of forced vortices, as the flow of water inside the oral cavity provides the external torque required to sustain them. Additionally, we quantified a new variable for ram suspension feeding, termed the fluid exit ratio. This is defined as the ratio of the total open pore area for water leaving the oral cavity via spaces between branchial arches that are not blocked by gill rakers, to the total area for water entering through the gape during ram suspension feeding. Our experiments demonstrated that the fluid exit ratio in preserved paddlefish was a significant predictor of the flow speeds quantified anterior of the rostrum, at the gape, directly dorsal of the first ceratobranchial, and in the forced vortex generated by the first ceratobranchial.
Physical modeling of vortical cross-step filtration offers future opportunities to explore the complex interactions between structural features of the oral cavity, vortex parameters, motile particle behavior, and particle morphology that determine the suspension, concentration, and transport of particles within the oral cavity of ram suspension-feeding fish. PMID:29561890
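As a minimal illustration of the fluid exit ratio defined above (the function name and the example areas are hypothetical):

```python
def fluid_exit_ratio(open_pore_areas, gape_area):
    """Fluid exit ratio as defined in the study: total open pore area for
    water leaving between the branchial arches (the spaces not blocked by
    gill rakers) divided by the gape area through which water enters."""
    return sum(open_pore_areas) / gape_area

# hypothetical per-arch open pore areas and gape area, in mm^2
print(fluid_exit_ratio([120.0, 90.0, 60.0], 540.0))  # 0.5
```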
USDA-ARS?s Scientific Manuscript database
Researchers from Hohai University in Nanjing, China compared stepped chute research conducted in physical models of narrow stepped chutes to research conducted by scientists at the USDA-ARS Hydraulic Engineering Research Unit (HERU) in a physical model of a wide stepped chute. Researchers from Hoha...
40 CFR 60.1120 - What steps must I complete for my siting analysis?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What steps must I complete for my... Requirements: Siting Analysis § 60.1120 What steps must I complete for my siting analysis? (a) For your siting analysis, you must complete five steps: (1) Prepare an analysis. (2) Make your analysis available to the...
Stepped chute training wall height requirements
USDA-ARS?s Scientific Manuscript database
Stepped chutes are commonly used for overtopping protection for embankment dams. Aerated flow is commonly associated with stepped chutes if the chute has sufficient length. The aeration and turbulence of the flow can create a significant amount of splash over the training wall if not appropriately...
29 CFR 1952.384 - Completed developmental steps.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Completed developmental steps. 1952.384 Section 1952.384 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION....384 Completed developmental steps. (a) In accordance with the requirements of § 1952.10, Puerto Rico's...
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Darian, Armen; Sindir, Munir
1992-01-01
We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can take a large temporal integration step at the expense of higher memory requirements and larger operation counts per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them for the computation of a two-dimensional driven cavity flow with Reynolds numbers of 100 and 1000, respectively. Three grid sizes (41 x 41, 81 x 81, and 161 x 161) were used. The computations were considered converged after the L2-norm of the change in the dependent variables between two consecutive time steps had fallen below 10^(-5).
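The convergence criterion described above can be sketched generically. The following toy example (a 1D explicit diffusion solve, not the paper's Navier-Stokes solver; grid size and parameters are illustrative) stops time marching when the L2-norm of the change between consecutive steps falls below 10^(-5):

```python
import math

def solve_to_steady_state(n=21, tol=1e-5, max_steps=100000):
    """March a 1D diffusion problem to steady state, stopping when the
    L2-norm of the change between two consecutive time steps falls
    below tol (the convergence criterion used in the paper)."""
    u = [0.0] * n
    u[0], u[-1] = 1.0, 0.0           # fixed boundary values
    r = 0.4                          # dt/dx^2, within the explicit stability limit
    for step in range(max_steps):
        un = u[:]
        for i in range(1, n - 1):
            un[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        change = math.sqrt(sum((a - b) ** 2 for a, b in zip(un, u)))
        u = un
        if change < tol:
            return u, step + 1
    return u, max_steps

u, steps = solve_to_steady_state()
# the steady state of the heat equation is linear between the boundaries,
# so the midpoint should sit near 0.5
print(abs(u[10] - 0.5) < 1e-2)
```

Note the usual caveat: a small per-step change does not bound the remaining error directly; the tolerance must be chosen with the scheme's decay rate in mind.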
Ugarte, Ana; Corbacho, David; Aymerich, María S; García-Osta, Ana; Cuadrado-Tejedor, Mar; Oyarzabal, Julen
2018-04-19
Drug efficacy in the central nervous system (CNS) requires an additional step after crossing the blood-brain barrier. Therapeutic agents must reach their targets in the brain to modulate them; thus, the free drug concentration hypothesis is a key parameter for in vivo pharmacology. Here, we report the impact of neurodegeneration (Alzheimer's disease (AD) and Parkinson's disease (PD) compared with healthy controls) on the binding of 10 known drugs to postmortem brain tissues from animal models and humans. Unbound drug fractions, for some drugs, are significantly different between healthy and injured brain tissues (AD or PD). In addition, drugs binding to brain tissues from AD and PD animal models do not always recapitulate their binding to the corresponding human injured brain tissues. These results reveal potentially relevant implications for CNS drug discovery.
Direct, enantioselective α-alkylation of aldehydes using simple olefins.
Capacci, Andrew G; Malinowski, Justin T; McAlpine, Neil J; Kuhne, Jerome; MacMillan, David W C
2017-11-01
Although the α-alkylation of ketones has already been established, the analogous reaction using aldehyde substrates has proven surprisingly elusive. Despite the structural similarities between the two classes of compounds, the sensitivity and unique reactivity of the aldehyde functionality has typically required activated substrates or specialized additives. Here, we show that the synergistic merger of three catalytic processes (photoredox, enamine, and hydrogen-atom transfer (HAT) catalysis) enables an enantioselective α-aldehyde alkylation reaction that employs simple olefins as coupling partners. Chiral imidazolidinones or prolinols, in combination with a thiophenol, an iridium photoredox catalyst, and visible light, have been successfully used in a triple catalytic process that is temporally sequenced to deliver a new hydrogen- and electron-borrowing mechanism. This multicatalytic process enables both intra- and intermolecular aldehyde α-methylene coupling with olefins to construct cyclic and acyclic products, respectively. With respect to atom- and step-economy ideals, this stereoselective process allows the production of high-value molecules from feedstock chemicals in one step while consuming only photons.
Photoassociation of ultracold LiRb molecules with short pulses near a Feshbach resonance
NASA Astrophysics Data System (ADS)
Gacesa, Marko; Ghosal, Subhas; Byrd, Jason; Côté, Robin
2014-05-01
Ultracold diatomic molecules prepared in the lowest ro-vibrational state are a required first step in many experimental studies aimed at investigating the properties of cold quantum matter. We propose a novel approach to produce such molecules in a two-color photoassociation experiment with short pulses performed near a Feshbach resonance. Specifically, we report the results of a theoretical investigation of the formation of 6Li87Rb molecules in a magnetic field. We show that the molecular formation rate can be significantly increased if the pump step is performed near a magnetic Feshbach resonance, due to the strong coupling between the energetically open and closed hyperfine states. In addition, the dependence of the nodal structure of the total wave function on the magnetic field allows for enhanced control over the shape and position of the wave packet. The proposed approach is applicable to different systems that have accessible Feshbach resonances. Partially supported by ARO (MG), DOE (SG), AFOSR (JB), and NSF (RC).
Unveiling the Biometric Potential of Finger-Based ECG Signals
Lourenço, André; Silva, Hugo; Fred, Ana
2011-01-01
The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability by requiring acquisition of the ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers through a minimally intrusive 1-lead ECG setup, using gel-free Ag/AgCl electrodes as the interface with the skin. The collected signal is significantly noisier than ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time-domain ECG signal processing is performed, comprising the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum-distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications. PMID:21837235
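The final matching step can be sketched as follows. The subject names, templates, and 5-sample beats are hypothetical stand-ins for real segmented, time- and amplitude-normalized heartbeat waveforms:

```python
def amplitude_normalize(beat):
    """Scale a heartbeat waveform to a zero baseline and unit peak amplitude."""
    lo, hi = min(beat), max(beat)
    return [(v - lo) / (hi - lo) for v in beat]

def min_distance_identify(test_beat, enrollment):
    """Minimum-distance classifier: return the enrolled subject whose
    template is closest (Euclidean distance) to the test heartbeat."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    beat = amplitude_normalize(test_beat)
    return min(enrollment, key=lambda name: dist(enrollment[name], beat))

# hypothetical 5-sample templates; real beats would first be resampled
# to a common length in the time-normalization step
db = {"alice": [0.0, 0.2, 1.0, 0.2, 0.0],
      "bob":   [0.0, 0.6, 1.0, 0.6, 0.0]}
print(min_distance_identify([0.1, 0.3, 1.1, 0.3, 0.1], db))  # alice
```

In practice each subject would be represented by an averaged template built from many enrollment beats, which suppresses the higher noise of finger-acquired signals.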
NASA Astrophysics Data System (ADS)
Théry, V.; Boulle, A.; Crunteanu, A.; Orlianges, J. C.; Beaumont, A.; Mayet, R.; Mennai, A.; Cosset, F.; Bessaudou, A.; Fabert, M.
2017-02-01
Large-area (up to 4 square inches) epitaxial VO2 films, with a uniform thickness and exhibiting an abrupt metal-insulator transition with a resistivity ratio as high as 2.85 × 10^4, have been grown on (001)-oriented sapphire substrates by electron beam evaporation. The lattice distortions (mosaicity) and the level of strain in the films have been assessed by X-ray diffraction. It is demonstrated that the films grow in a domain-matching mode where the distortions are confined close to the interface, which allows growth of high-quality materials despite the high film-substrate lattice mismatch. It is further shown that a post-deposition high-temperature oxygen annealing step is crucial to ensure the correct film stoichiometry and provide the best structural and electrical properties. Alternatively, it is possible to obtain high-quality films with an RF discharge during deposition, which hence do not require the additional annealing step. Such films exhibit similar electrical properties and only slightly degraded structural properties.
Adaptive θ-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
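A minimal sketch of a linearly implicit θ-method time step, here for the pure diffusion part of such a PDE with a direct tridiagonal solve standing in for the paper's full Black-Scholes operator; all parameters are illustrative:

```python
def thomas(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system
    (a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side)."""
    n = len(d)
    b, d = b[:], d[:]
    for i in range(1, n):                      # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def theta_step(u, r, theta):
    """One theta-method step for u_t = u_xx with zero Dirichlet boundaries:
    (I - theta*r*L) u_new = (I + (1-theta)*r*L) u_old, L the discrete Laplacian."""
    n = len(u)
    rhs = [0.0] * n
    for i in range(n):                         # explicit part of the update
        lap = (u[i - 1] if i > 0 else 0.0) - 2.0 * u[i] \
              + (u[i + 1] if i < n - 1 else 0.0)
        rhs[i] = u[i] + (1.0 - theta) * r * lap
    a = [-theta * r] * n
    b = [1.0 + 2.0 * theta * r] * n
    c = [-theta * r] * n
    return thomas(a, b, c, rhs)

# Crank-Nicolson (theta = 0.5) applied to a hat-shaped initial profile:
# diffusion smooths the hat and decays it toward zero
u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(50):
    u = theta_step(u, r=0.5, theta=0.5)
print(max(abs(v) for v in u) < 1e-3)
```

Because the system is linear in `u_new`, no Newton iteration is needed, which is the point of the linearly implicit approach; an adaptive strategy would vary the time step (and hence `r`) between calls.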
Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.
2009-01-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step by step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578
Single step production of Cas9 mRNA for zygote injection.
Redel, Bethany K; Beaton, Benjamin P; Spate, Lee D; Benne, Joshua A; Murphy, Stephanie L; O'Gorman, Chad W; Spate, Anna M; Prather, Randall S; Wells, Kevin D
2018-03-01
Production of Cas9 mRNA in vitro typically requires the addition of a 5´ cap and 3´ polyadenylation. A plasmid was constructed that harbored the T7 promoter followed by the EMCV IRES and a Cas9 coding region. We hypothesized that use of the metastasis associated lung adenocarcinoma transcript 1 (Malat1) triplex structure downstream of an IRES/Cas9 expression cassette would make polyadenylation of in vitro produced mRNA unnecessary. A sequence from the mMalat1 gene was cloned downstream of the IRES/Cas9 cassette described above. An mRNA concentration curve was constructed with either commercially available Cas9 mRNA or the IRES/Cas9/triplex mRNA, by injection into porcine zygotes. Blastocysts were genotyped to determine if differences existed in the percent of embryos modified. The concentration curve identified differences due to concentration and the RNA type injected. Single-step production of Cas9 mRNA provides an alternative source of Cas9 for use in zygote injections.
Materials for DEMO and reactor applications—boundary conditions and new concepts
NASA Astrophysics Data System (ADS)
Coenen, J. W.; Antusch, S.; Aumann, M.; Biel, W.; Du, J.; Engels, J.; Heuer, S.; Houben, A.; Hoeschen, T.; Jasper, B.; Koch, F.; Linke, J.; Litnovsky, A.; Mao, Y.; Neu, R.; Pintsuk, G.; Riesch, J.; Rasinski, M.; Reiser, J.; Rieth, M.; Terra, A.; Unterberg, B.; Weber, Th; Wegener, T.; You, J.-H.; Linsmeier, Ch
2016-02-01
DEMO is the name for the first-stage prototype fusion reactor considered to be the next step after ITER towards realizing fusion. For the realization of fusion energy especially, materials questions pose a significant challenge already today. Heat, particle and neutron loads are a significant problem for material lifetime when extrapolating to DEMO. For many of the issues faced, advanced materials solutions are under discussion or already under development. In particular, components such as the first wall and the divertor of the reactor can benefit from introducing new approaches such as composites or new alloys into the discussion. Cracking, oxidation as well as fuel management are driving issues when deciding for new materials. Here W_f/W composites as well as strengthened CuCrZr components together with oxidation-resilient tungsten alloys allow the step towards a fusion reactor. In addition, neutron-induced effects such as transmutation, embrittlement, after-heat and activation are essential. Therefore, when designing a component, an approach taking into account all aspects is required.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
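The prediction-correction idea can be sketched on a toy unconstrained problem (the paper's contribution extends this to constrained settings; the target trajectory, step sizes, and tolerances here are hypothetical):

```python
import math

def track(steps=200, dt=0.05, alpha=0.5, n_corr=3):
    """Prediction-correction tracking of x*(t) = argmin 0.5*(x - sin(t))^2.
    Prediction: advance x along the predicted drift of the optimum, using
    the time derivative of the gradient and the (here trivial) Hessian.
    Correction: a few gradient-descent steps on the cost at the new time."""
    x, t = 0.0, 0.0
    errs = []
    for _ in range(steps):
        # grad f(x; t) = x - sin(t); its time derivative is -cos(t);
        # with Hessian = 1 the prediction is x += cos(t) * dt
        x = x - (-math.cos(t)) * dt
        t += dt
        for _ in range(n_corr):            # correction at the new time
            x -= alpha * (x - math.sin(t))
        errs.append(abs(x - math.sin(t)))
    return max(errs[50:])                   # tracking error after transient

print(track() < 1e-3)
```

Without the prediction step (pure correction), the iterate lags the moving optimum by roughly one sampling interval; the prediction removes most of that lag, which is the asymptotic-error improvement the paper quantifies.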
Single step vacuum-free and hydrogen-free synthesis of graphene
NASA Astrophysics Data System (ADS)
Orellana, Christian; Cunha, Thiago; Fantini, Cristiano; Jaques, Alonso; Häberle, Patricio
2017-08-01
We report a modified method to grow graphene in a single-step process. It is based on chemical vapor deposition and uses methane under extremely adverse synthesis conditions, namely in an open chamber and without the addition of gaseous hydrogen in any of the synthesis stages. The synthesis occurs between two parallel Cu plates, heated via electromagnetic induction. The inductive heating yields a strong thermal gradient between the catalytic substrates and the surrounding environment, promoting the enrichment of hydrogen generated as fragments of the methane molecules within the volume confined by the Cu foils. This induced density gradient is due to thermo-diffusion, also known as the Soret effect. Hydrogen and other low-mass molecular fractions produced during the process inhibit oxidative effects and simultaneously reduce the native oxide on the Cu surface. As a result, high-quality graphene is obtained on the inner surfaces of the Cu sheets, as confirmed by Raman spectroscopy.
NASA Astrophysics Data System (ADS)
Krotkus, Simonas; Nehm, Frederik; Janneck, Robby; Kalkura, Shrujan; Zakhidov, Alex A.; Schober, Matthias; Hild, Olaf R.; Kasemann, Daniel; Hofmann, Simone; Leo, Karl; Reineke, Sebastian
2015-03-01
Recently, bilayer resist processing combined with development in hydrofluoroether (HFE) solvents has been shown to enable single-color structuring of vacuum-deposited state-of-the-art organic light-emitting diodes (OLEDs). In this work, we focus on the further steps required to achieve multicolor structuring of p-i-n OLEDs using a bilayer resist approach. We show that the green phosphorescent OLED stack is undamaged after lift-off in HFEs, a necessary step toward an RGB pixel array structured by means of photolithography. Furthermore, we investigate the influence of both double resist processing and exposure to ambient conditions on red OLEDs, on the basis of the electrical, optical, and lifetime parameters of the devices. Additionally, water vapor transmission rates of the single and bilayer systems are evaluated with a thin Ca film conductance test. We conclude that diffusion of propylene glycol methyl ether acetate (PGMEA) through the fluoropolymer film is the main mechanism behind the OLED degradation observed after bilayer processing.
Optimization Issues with Complex Rotorcraft Comprehensive Analysis
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.
1998-01-01
This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
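The AD-versus-finite-difference trade-off discussed above can be demonstrated with a minimal forward-mode sketch. This toy dual-number class only illustrates the principle behind source-transformation tools like ADIFOR (derivatives carried alongside values, no step size); the test function is invented.

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def f(x):
    # Toy function sin(3x), usable with floats or Dual numbers.
    return (3.0 * x).sin() if isinstance(x, Dual) else math.sin(3.0 * x)

def fd_derivative(x, h):
    # One-sided finite difference: accuracy depends on the step size h.
    return (f(x + h) - f(x)) / h

x0 = 0.7
exact = 3.0 * math.cos(3.0 * x0)   # analytic reference derivative
ad = f(Dual(x0, 1.0)).dot          # AD result: exact, no step size involved
```

The AD derivative matches the analytic value to machine precision, while the finite-difference estimate carries a truncation error set by h, mirroring the "exact derivatives, no step-size problems" advantage noted in the abstract.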
Yousuf, Abu; Khan, Maksudur Rahman; Islam, M Amirul; Wahid, Zularisam Ab; Pirozzi, Domenico
2017-01-01
Microbial oils are considered alternatives to vegetable oils or animal fats as biodiesel feedstock. Microalgae and oleaginous yeasts are the main candidates among microbial oil producers. However, biodiesel synthesis from these sources is associated with high cost and process complexity. The traditional transesterification method includes several steps, such as biomass drying, cell disruption, oil extraction, and solvent recovery. Therefore, direct transesterification, or in situ transesterification, which combines all the steps in a single reactor, has been suggested to make the process cost-effective. Nevertheless, the process is not yet applicable to large-scale biodiesel production because of difficulties such as the high water content of the biomass, which slows the reaction rate, and the hurdles of cell disruption, which lower the efficiency of oil extraction. Additionally, it requires high heating energy in the solvent extraction and recovery stage. To resolve these difficulties, this review suggests the application of antimicrobial peptides and high electric fields to foster microbial cell wall disruption.
Investigation of test methods, material properties and processes for solar cell encapsulants
NASA Technical Reports Server (NTRS)
Willis, P. B.; Baum, B.
1983-01-01
The goal of the program is to identify, test, evaluate, and recommend encapsulation materials and processes for the fabrication of cost-effective and long-life solar modules. Of the $18 (1948 $) per square meter allocated for the encapsulation components, approximately 50% of the cost ($9/sq m) may be taken by the load-bearing component. Because of the proportionally high cost of this element, lower-cost materials were investigated. Wood-based products were found to be the lowest-cost structural materials for module construction; however, they require protection from rainwater and humidity in order to acquire dimensional stability. The cost of a wood-product-based substrate must, therefore, include raw material costs plus the cost of additional processing to impart hygroscopic inertness. This protection is provided by a two-step, or split, process in which a flexible laminate containing the cell string is prepared first in a vacuum process and then adhesively attached with a back cover film to the hardboard in a subsequent step.
Polymerization model for hydrogen peroxide initiated synthesis of polypyrrole nanoparticles.
Leonavicius, Karolis; Ramanaviciene, Almira; Ramanavicius, Arunas
2011-09-06
A very simple, environmentally friendly, one-step oxidative polymerization route to fabricate polypyrrole (Ppy) nanoparticles of fixed size and morphology was developed and investigated. The method proposed herein is based on the application of sodium dodecyl sulfate and hydrogen peroxide, both easily degradable and cheap materials. The polymerization reaction is performed on a 24 h time scale under standard conditions. We monitored a polaronic peak at 465 nm and estimated the nanoparticle concentration during various stages of the reaction. Using these data, we proposed a mechanism for Ppy nanoparticle formation in accordance with earlier emulsion polymerization mechanisms. Rates of the various steps in the polymerization mechanism were accounted for, and the resulting particles were identified using atomic force microscopy. Ppy nanoparticles prepared by the route presented here seem very promising for biomedical applications where biocompatibility is paramount. In addition, this kind of synthesis could be suitable for the development of solar cells, where very pure and low-cost conducting polymers are required. © 2011 American Chemical Society
Unveiling the biometric potential of finger-based ECG signals.
Lourenço, André; Silva, Hugo; Fred, Ana
2011-01-01
The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability by requiring acquisition of the ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers through a minimally intrusive 1-lead ECG setup, using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly noisier than ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time-domain ECG signal processing is performed, comprising the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Using a simple minimum-distance criterion between the test patterns and the enrollment database, results reveal this to be a promising technique for biometric applications.
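The pipeline described (peak detection, segmentation, amplitude and time normalization, minimum-distance matching) can be sketched in a few functions. This is a toy illustration: the threshold, resampling scheme, and beat templates are invented, not the authors' parameters.

```python
import math

def detect_peaks(sig, thresh):
    """Crude R-peak detector: local maxima above a threshold."""
    return [i for i in range(1, len(sig) - 1)
            if sig[i] > thresh and sig[i] > sig[i - 1] and sig[i] >= sig[i + 1]]

def normalize(beat, target_len=8):
    """Amplitude normalization plus the extra time-normalization step:
    resample every segmented heartbeat to a fixed length before matching."""
    peak = max(abs(v) for v in beat) or 1.0
    beat = [v / peak for v in beat]
    idx = [round(i * (len(beat) - 1) / (target_len - 1)) for i in range(target_len)]
    return [beat[i] for i in idx]

def identify(test_beat, enrolled):
    """Minimum-distance classifier over the enrollment database."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(enrolled, key=lambda name: dist(enrolled[name], test_beat))
```

Enrolling one normalized template per subject and matching a noisy test beat by nearest distance mirrors the simple criterion the paper reports.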
Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N
2009-06-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site (http://proteomics.mcw.edu/vipdac).
Sequence-Based Prediction of RNA-Binding Residues in Proteins.
Walia, Rasna R; El-Manzalawy, Yasser; Honavar, Vasant G; Dobbs, Drena
2017-01-01
Identifying individual residues in the interfaces of protein-RNA complexes is important for understanding the molecular determinants of protein-RNA recognition and has many potential applications. Recent technical advances have led to several high-throughput experimental methods for identifying partners in protein-RNA complexes, but determining RNA-binding residues in proteins is still expensive and time-consuming. This chapter focuses on available computational methods for identifying which amino acids in an RNA-binding protein participate directly in contacting RNA. Step-by-step protocols for using three different web-based servers to predict RNA-binding residues are described. In addition, currently available web servers and software tools for predicting RNA-binding sites, as well as databases that contain valuable information about known protein-RNA complexes, RNA-binding motifs in proteins, and protein-binding recognition sites in RNA are provided. We emphasize sequence-based methods that can reliably identify interfacial residues without the requirement for structural information regarding either the RNA-binding protein or its RNA partner.
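A sequence-based predictor of the kind surveyed here typically scores each residue from a sliding window of its sequence neighbors. The sketch below uses a deliberately naive positive-charge (K/R) heuristic as a stand-in for the learned models behind real servers; the window width, threshold, and scoring rule are all invented for illustration.

```python
def window_features(seq, i, w=2):
    """Identity window of half-width w around residue i, padded with '-'."""
    padded = "-" * w + seq + "-" * w
    return padded[i : i + 2 * w + 1]

def score_residue(seq, i, w=2):
    """Toy propensity score: fraction of positively charged residues (K/R)
    in the window -- a placeholder for the learned features real predictors use."""
    win = window_features(seq, i, w)
    return sum(1 for aa in win if aa in "KR") / len(win)

def predict_binding(seq, thresh=0.4):
    """Label residues whose window score meets the threshold as putative
    RNA-binding residues."""
    return [i for i in range(len(seq)) if score_residue(seq, i) >= thresh]
```

Actual servers replace the charge heuristic with classifiers trained on known protein-RNA interfaces, but the windowed, per-residue formulation is the same.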
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonetto, Andrea; Dall'Anese, Emiliano
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
Direct, enantioselective α-alkylation of aldehydes using simple olefins
Capacci, Andrew G.; Malinowski, Justin T.; McAlpine, Neil J.; Kuhne, Jerome; MacMillan, David W. C.
2017-01-01
Although the α-alkylation of ketones has already been established, the analogous reaction using aldehyde substrates has proven surprisingly elusive. Despite the structural similarities between the two classes of compounds, the sensitivity and unique reactivity of the aldehyde functionality has typically required activated substrates or specialized additives. Here, we show that the synergistic merger of three catalytic processes—photoredox, enamine and hydrogen-atom transfer (HAT) catalysis—enables an enantioselective α-aldehyde alkylation reaction that employs simple olefins as coupling partners. Chiral imidazolidinones or prolinols, in combination with a thiophenol, iridium photoredox catalyst and visible light, have been successfully used in a triple catalytic process that is temporally sequenced to deliver a new hydrogen and electron-borrowing mechanism. This multicatalytic process enables both intra- and intermolecular aldehyde α-methylene coupling with olefins to construct both cyclic and acyclic products, respectively. With respect to atom and step-economy ideals, this stereoselective process allows the production of high-value molecules from feedstock chemicals in one step while consuming only photons. PMID:29064486
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagnella, Sharon M.; Gong, Xiaojuan; Moghaddam, Minoo J.
2014-09-24
We demonstrate that oral delivery of self-assembled nanostructured nanoparticles consisting of 5-fluorouracil (5-FU) lipid prodrugs results in a highly effective, target-activated, chemotherapeutic agent, and offers significantly enhanced efficacy over a commercially available alternative that does not self-assemble. The lipid prodrug nanoparticles have been found to significantly slow the growth of a highly aggressive mouse 4T1 breast tumour, and essentially halt the growth of a human MDA-MB-231 breast tumour in mouse xenografts. Systemic toxicity is avoided as prodrug activation requires a three-step, enzymatic conversion to 5-FU, with the third step occurring preferentially at the tumour site. Additionally, differences in the lipid prodrug chemical structure and internal nanostructure of the nanoparticle dictate the enzymatic conversion rate and can be used to control sustained release profiles. Thus, we have developed novel oral nanomedicines that combine sustained release properties with target-selective activation.
Direct, enantioselective α-alkylation of aldehydes using simple olefins
NASA Astrophysics Data System (ADS)
Capacci, Andrew G.; Malinowski, Justin T.; McAlpine, Neil J.; Kuhne, Jerome; MacMillan, David W. C.
2017-11-01
Although the α-alkylation of ketones has already been established, the analogous reaction using aldehyde substrates has proven surprisingly elusive. Despite the structural similarities between the two classes of compounds, the sensitivity and unique reactivity of the aldehyde functionality has typically required activated substrates or specialized additives. Here, we show that the synergistic merger of three catalytic processes—photoredox, enamine and hydrogen-atom transfer (HAT) catalysis—enables an enantioselective α-aldehyde alkylation reaction that employs simple olefins as coupling partners. Chiral imidazolidinones or prolinols, in combination with a thiophenol, iridium photoredox catalyst and visible light, have been successfully used in a triple catalytic process that is temporally sequenced to deliver a new hydrogen and electron-borrowing mechanism. This multicatalytic process enables both intra- and intermolecular aldehyde α-methylene coupling with olefins to construct both cyclic and acyclic products, respectively. With respect to atom and step-economy ideals, this stereoselective process allows the production of high-value molecules from feedstock chemicals in one step while consuming only photons.
Step-reduced synthesis of starch-silver nanoparticles.
Raghavendra, Gownolla Malegowd; Jung, Jeyoung; Kim, Dowan; Seo, Jongchul
2016-05-01
In the present process, silver nanoparticles were directly synthesized in a single step by microwave irradiation of a mixture of starch, silver nitrate, and deionized water. This differs from the commonly adopted procedure for starch-silver nanoparticle synthesis, in which silver nanoparticles are synthesized by first preparing a starch solution as a reaction medium. Thus, the additional step associated with the preparation of the starch solution was eliminated. In addition, no additional reducing agent was utilized. The adopted method was facile and straightforward, affording spherical silver nanoparticles with diameters below 10 nm that exhibited good antibacterial activity. Further, an influence of starch on the size of the silver nanoparticles was observed. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Vivek, Tiwary; Arunkumar, P.; Deshpande, A. S.; Vinayak, Malik; Kulkarni, R. M.; Asif, Angadi
2018-04-01
Conventional investment casting is one of the oldest and most economical manufacturing techniques for producing intricate and complex part geometries. However, investment casting is considered economical only when the production volume is large. Design iterations and design optimisations in this technique prove very costly because of the time and tooling cost of making dies for producing wax patterns. With the advent of additive manufacturing technology, however, plastic patterns show strong potential to replace wax patterns. This approach can be very useful for low-volume production and laboratory requirements, since the cost and time required to incorporate design changes are very low. This paper discusses the steps involved in developing polymer nanocomposite filaments and checking their suitability for investment casting. The process parameters of the 3D printer are also optimized using the DOE technique to obtain mechanically stronger plastic patterns. The study develops a framework for rapid investment casting for laboratory as well as industrial requirements.
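The DOE step mentioned above starts from a designed experiment over printer settings. The sketch below builds a two-level full-factorial plan; the factor names and levels are invented for illustration, since the abstract does not list the actual parameters studied.

```python
from itertools import product

# Hypothetical FDM process factors at low/high levels (illustrative only).
FACTORS = {
    "layer_height_mm": (0.1, 0.3),
    "infill_pct": (40, 80),
    "print_temp_C": (200, 220),
}

def full_factorial(factors):
    """Enumerate every treatment combination of the given factor levels."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in product(*(factors[n] for n in names))]

runs = full_factorial(FACTORS)   # 2**3 = 8 treatment combinations
```

Each run dictionary would then be printed, the pattern strength measured, and factor effects estimated from the responses.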
Guide for developing an information technology investment road map for population health management.
Hunt, Jacquelyn S; Gibson, Richard F; Whittington, John; Powell, Kitty; Wozney, Brad; Knudson, Susan
2015-06-01
Many health systems recovering from a massive investment in electronic health records are now faced with the prospect of maturing into accountable care organizations. This maturation includes the need to cooperate with new partners, involve substantially new data sources, require investment in additional information technology (IT) solutions, and become proficient in managing care from a new perspective. Adding to the confusion, there are hundreds of population health management (PHM) vendors with overlapping product functions. This article proposes an organized approach to investing in PHM IT. The steps include assessing the organization's business and clinical goals, establishing governance, agreeing on business requirements, evaluating the ability of current IT systems to meet those requirements, setting time lines and budgets, rationalizing current and future needs and capabilities, and installing the new systems in the context of a continuously learning organization. This article will help organizations chart their position on the population health readiness spectrum and enhance their chances for a successful transition from volume-based to value-based care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, E.C.; Killough, S.M.; Rowe, J.C.
The purpose of the Smart Crane Ammunition Transfer System (SCATS) project is to demonstrate robotic/telerobotic controls technology for a mobile articulated crane for missile/munitions handling, delivery, and reload. Missile resupply and reload have been manually intensive operations up to this time. Currently, reload missiles are delivered by truck to the site of the launcher. A crew of four to five personnel reloads the missiles from the truck to the launcher using a hydraulic-powered crane. The missiles are handled carefully for the safety of the missiles and personnel. Numerous steps are required in the reload process and the entire reload operation can take over an hour for some missile systems. Recent US Army directives require the entire operation to be accomplished in a fraction of that time. Current development of SCATS is being based primarily on reloading Patriot missiles. This paper summarizes the current status of the SCATS project at the Oak Ridge National Laboratory (ORNL). Additional information on project background and requirements has been described previously (Bradley et al., 1995).
Laguesse, Sophie; Close, Pierre; Van Hees, Laura; Chariot, Alain; Malgrange, Brigitte; Nguyen, Laurent
2017-01-01
The Elongator complex is required for proper development of the cerebral cortex. Interfering with its activity in vivo delays the migration of postmitotic projection neurons, at least in part through defective α-tubulin acetylation. However, this complex is already expressed by cortical progenitors, where it may regulate the early steps of migration by targeting additional proteins. Here we report that connexin-43 (Cx43), which is strongly expressed by cortical progenitors and whose depletion impairs projection neuron migration, requires Elongator expression for its proper acetylation. Indeed, we show that Cx43 acetylation is reduced in the cortex of Elp3cKO embryos, as well as in a neuroblastoma cell line depleted of Elp1 expression, suggesting that Cx43 acetylation requires Elongator in different cellular contexts. Moreover, we show that histone deacetylase 6 (HDAC6) is a deacetylase of Cx43. Finally, we report that acetylation of Cx43 regulates its membrane distribution in apical progenitors of the cerebral cortex. PMID:28507509
Design and Verification of a Distributed Communication Protocol
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Goodloe, Alwyn E.
2009-01-01
The safety of remotely operated vehicles depends on the correctness of the distributed protocol that facilitates the communication between the vehicle and the operator. A failure in this communication can result in catastrophic loss of the vehicle. To complicate matters, the communication system may be required to satisfy several, possibly conflicting, requirements. The design of protocols is typically an informal process based on successive iterations of a prototype implementation. Yet distributed protocols are notoriously difficult to get correct using such informal techniques. We present a formal specification of the design of a distributed protocol intended for use in a remotely operated vehicle, which is built from the composition of several simpler protocols. We demonstrate proof strategies that allow us to prove properties of each component protocol individually while ensuring that the property is preserved in the composition forming the entire system. Given that designs are likely to evolve as additional requirements emerge, we show how we have automated most of the repetitive proof steps to enable verification of rapidly changing designs.
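The core activity described, proving a safety property of each component protocol, can be illustrated by exhaustively enumerating the states of a toy model. The stop-and-wait component and the property below are invented stand-ins, not the paper's protocol or its PVS-style proofs.

```python
def reachable(init, step):
    """Exhaustively enumerate the states a protocol model can reach."""
    seen, frontier = {init}, [init]
    while frontier:
        state = frontier.pop()
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def step(state):
    """Toy stop-and-wait component: (sender, receiver, channel).
    The channel may drop a frame, forcing a retransmission."""
    sender, receiver, chan = state
    succ = []
    if sender == "ready" and chan == "empty":
        succ.append(("waiting", receiver, "frame"))   # send a frame
    if chan == "frame":
        succ.append((sender, "got", "ack"))           # delivered and acked
        succ.append((sender, receiver, "empty"))      # dropped by channel
    if sender == "waiting" and chan == "ack":
        succ.append(("ready", "idle", "empty"))       # ack consumed
    if sender == "waiting" and receiver == "idle" and chan == "empty":
        succ.append((sender, receiver, "frame"))      # retransmit after loss
    return succ

states = reachable(("ready", "idle", "empty"), step)
# Safety property to check: whenever the channel carries a frame,
# the sender is blocked waiting for its acknowledgment.
```

Checking the property over every reachable state of each component, and showing it is preserved under composition, is the compositional strategy the paper mechanizes.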
Estimated splash and training wall height requirements for stepped chutes applied to embankment dams
USDA-ARS?s Scientific Manuscript database
Aging embankment dams are commonly plagued with insufficient spillway capacity. To provide increased spillway capacity, stepped chutes are frequently applied as an overtopping protection system for embankment dams. Stepped chutes of sufficient length develop aerated flow. The aeration and flow...
ERIC Educational Resources Information Center
Peters, Erin
2005-01-01
Deconstructing cookbook labs to require the students to be more thoughtful could break down perceived teacher barriers to inquiry learning. Simple steps that remove or disrupt the direct transfer of step-by-step procedures in cookbook labs make students think more critically about their process. Through trials in the author's middle school…
Understanding Tribofilm Formation Mechanisms in Ionic Liquid Lubrication
Zhou, Yan; Leonard, Donovan N.; Guo, Wei; ...
2017-08-16
Ionic liquids (ILs) have recently been developed as a novel class of lubricant anti-wear (AW) additives, but the formation mechanism of their wear protective tribofilms is not yet well understood. Unlike the conventional metal-containing AW additives that self-react to grow a tribofilm, the metal-free ILs require a supplier of metal cations in the tribofilm growth. The two apparent sources of metal cations are the contact surface and the wear debris, and the latter contains important 'historical' interface information but often is overlooked. We correlated the morphological and compositional characteristics of tribofilms and wear debris from an IL-lubricated steel–steel contact. In conclusion, a complete multi-step formation mechanism is proposed for the tribofilm of metal-free AW additives, including direct tribochemical reactions between the metallic contact surface with oxygen to form an oxide interlayer, wear debris generation and breakdown, tribofilm growth via mechanical deposition, chemical deposition, and oxygen diffusion.
2010-01-01
Background: Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Methods: Study objectives were to conduct 1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and 2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: 1) Learning (initial use instructions), 2) Preparation (arrange device for use), 3) Administration (actual simulation manikin injection), and 4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Results: Norditropin® NordiFlex and Norditropin® NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsværd, Denmark) took less weekly Total Time (p < 0.05) to use than either of the comparator products, Genotropin® Pen (GTP, Pfizer, Inc, New York, New York) or HumatroPen® (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new-package Preparation times (NNF: 1.35 minutes, NNP: 2.48 minutes, GTP: 4.11 minutes, HTP: 8.64 minutes; p < 0.05). Administration and Storage times were not statistically different.
NNF (15.8 minutes) and NNP (16.2 minutes) also took less time to Learn than HTP (24.0 minutes) and GTP (26.0 minutes) (p < 0.05). The number of weekly required administration steps was also lowest with NNF and NNP. Opportunity cost savings were greater for devices that were easier to prepare for use; GTP represented an 11.8% drug product savings over NNF, NNP and HTP at the time of the study. Overall supply costs represented <1% of drug costs for all devices. Conclusions: Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand has the highest net costs. PMID:20377905
Nickman, Nancy A; Haak, Sandra W; Kim, Jaewhan
2010-04-08
Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Study objectives were to conduct 1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and 2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: 1) Learning (initial use instructions), 2) Preparation (arrange device for use), 3) Administration (actual simulation manikin injection), and 4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Norditropin(R) NordiFlex and Norditropin(R) NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsvaerd, Denmark) took less weekly Total Time (p < 0.05) to use than either of the comparator products, Genotropin(R) Pen (GTP, Pfizer, Inc, New York, New York) or HumatroPen(R) (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new-package Preparation times (NNF: 1.35 minutes, NNP: 2.48 minutes, GTP: 4.11 minutes, HTP: 8.64 minutes; p < 0.05). Administration and Storage times were not statistically different.
NNF (15.8 minutes) and NNP (16.2 minutes) also took less time to Learn than HTP (24.0 minutes) and GTP (26.0 minutes) (p < 0.05). The number of weekly required administration steps was also lowest with NNF and NNP. Opportunity cost savings were greater for devices that were easier to prepare for use; GTP represented an 11.8% drug product savings over NNF, NNP and HTP at the time of the study. Overall supply costs represented <1% of drug costs for all devices. Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand has the highest net costs.
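The micro-costing logic can be sketched as a small calculation. Only the new-package Preparation times come from the abstract; the hourly wage, daily administration time, and dosing schedule below are hypothetical placeholders, not study values.

```python
# Preparation times per new package, in minutes (from the abstract).
PREP_MINUTES = {"NNF": 1.35, "NNP": 2.48, "GTP": 4.11, "HTP": 8.64}

def weekly_opportunity_cost(prep_min, daily_admin_min=1.0,
                            wage_per_hour=30.0, doses_per_week=7):
    """Wage-based opportunity cost of one weekly package preparation
    plus daily administrations (all rates here are illustrative)."""
    weekly_minutes = prep_min + daily_admin_min * doses_per_week
    return wage_per_hour * weekly_minutes / 60.0

costs = {dev: weekly_opportunity_cost(m) for dev, m in PREP_MINUTES.items()}
```

With any common wage and schedule, the device ordering of opportunity costs simply follows the preparation-time ordering, which is the study's qualitative conclusion.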
Thaler, Florian; Valsasina, Barbara; Baldi, Rosario; Xie, Jin; Stewart, Albert; Isacchi, Antonella; Kalisz, Henryk M; Rusconi, Luisa
2003-06-01
beta-Elimination of the phosphate group on phosphoserine and phosphothreonine residues and addition of an alkyldithiol is a useful tool for analysis of the phosphorylation states of proteins and peptides. We have explored the influence of several conditions on the efficiency of this PO(4)(3-) elimination reaction upon addition of propanedithiol. In addition to the described influence of different bases, the solvent composition was also found to have a major effect on the yield of the reaction. In particular, an increase in the percentage of DMSO enhances the conversion rate, whereas a higher amount of protic polar solvents, such as water or isopropanol, induces the opposite effect. We have also developed a protocol for enrichment of the modified peptides, which is based on solid-phase covalent capture/release with a dithiopyridino-resin. The procedure for beta-elimination and isolation of phosphorylated peptides by solid-phase capture/release was developed with commercially available alpha-casein. Enriched peptide fragments were characterized by MALDI-TOF mass spectrometric analysis before and after alkylation with iodoacetamide, which allowed rapid confirmation of the purposely introduced thiol moiety. Sensitivity studies, carried out in order to determine the detection limit, demonstrated that samples could be detected even in the low picomolar range by mass spectrometry. The developed solid-phase enrichment procedure based on reversible covalent binding of the modified peptides is more effective and significantly simpler than methods based on the interaction between biotin and avidin, which require additional steps such as tagging the modified peptides and work-up of the samples prior to the affinity capture step.
Process for removing thorium and recovering vanadium from titanium chlorinator waste
Olsen, Richard S.; Banks, John T.
1996-01-01
A process for removal of thorium from titanium chlorinator waste comprising: (a) leaching an anhydrous titanium chlorinator waste in water or dilute hydrochloric acid solution and filtering to separate insoluble minerals and coke fractions from soluble metal chlorides; (b) beneficiating the insoluble fractions from step (a) on shaking tables to recover recyclable or otherwise useful TiO.sub.2 minerals and coke; (c) treating filtrate from step (a) with reagents to precipitate and remove thorium by filtration, along with acid metals of Ti, Zr, Nb, and Ta, by the addition of the filtrate from step (a), a base, and a precipitant to a boiling slurry of reaction products; (d) treating filtrate from step (c) with reagents to precipitate and recover an iron vanadate product by the addition of the filtrate from step (c), a base, and an oxidizing agent to a boiling slurry of reaction products; and (e) treating filtrate from step (d) to remove any remaining cations except Na by addition of Na.sub.2 CO.sub.3 and boiling.
Automating the evaluation of flood damages: methodology and potential gains
NASA Astrophysics Data System (ADS)
Eleutério, Julian; Martinez, Edgar Daniel
2010-05-01
The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data, and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability. In general, this step of the evaluation demands more time and investment than the others. The second step consists of combining spatial data on hazard with spatial data on vulnerability. A Geographic Information System (GIS) is a fundamental tool in the realization of this step. GIS software allows the simultaneous analysis of spatial and matrix data. The third step consists of calculating potential damages by means of damage-functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be repeated several times when comparing different management scenarios. In addition, uncertainty analyses and sensitivity tests are made during the second and third steps of the evaluation. The feasibility of these steps can therefore influence the chosen extent of the evaluation: low feasibility could lead to choosing not to evaluate uncertainty, or to limiting the number of scenario comparisons. Several computer models have been developed over time in order to evaluate flood risk. GIS software is widely used to carry out flood risk analyses. The software is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" repeating the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which can be used at any time in the future to support territorial decision making; and the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time.
The use of GIS software to evaluate flood risk requires personnel with a double professional specialisation: proficiency in GIS software and in flood damage analysis (itself already a multidisciplinary field). Great effort is necessary to correctly evaluate flood damages, and updating and improving the evaluation over time becomes a difficult task. Automation of this process should bring great advances in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) show the entire process of automating the second and third steps of flood damage evaluations; and (2) analyse the resulting potential gains in terms of the time and expertise needed in the analysis. A programming language is used within GIS software to automate the combination of hazard and vulnerability data and the calculation of potential damages. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains in flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the need for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily realized.
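The second and third evaluation steps (combining hazard with vulnerability, then applying damage-functions) can be sketched in a few lines. The depth-damage curve, depths, and asset values below are illustrative assumptions, not data from the study.

```python
# Sketch of evaluation steps 2 and 3: combine hazard (water depth at
# each asset) with vulnerability (asset value) via a depth-damage
# function. Curve shape, depths, and values are illustrative only.

def damage_fraction(depth_m):
    """Assumed piecewise-linear depth-damage function (0 to 80% loss)."""
    if depth_m <= 0.0:
        return 0.0
    if depth_m >= 3.0:
        return 0.8
    return 0.8 * depth_m / 3.0

# One record per building: modelled flood depth and exposed value.
buildings = [
    {"id": 1, "depth_m": 0.5, "value": 200_000},
    {"id": 2, "depth_m": 1.5, "value": 150_000},
    {"id": 3, "depth_m": 0.0, "value": 300_000},
]

total = sum(damage_fraction(b["depth_m"]) * b["value"] for b in buildings)
print(f"Potential damage for this scenario: {total:,.0f}")
```

Automating exactly this loop inside the GIS is what makes rerunning the evaluation for many scenarios, or for sensitivity tests, cheap.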
Zidan, Ragaiy; Teprovich, Joseph A.; Motyka, Theodore
2015-12-01
A system for the generation of hydrogen for use in portable power systems is set forth utilizing a two-step process that involves the thermal decomposition of AlH.sub.3 (10 wt % H.sub.2) followed by the hydrolysis of the activated aluminum (Al*) byproduct to release additional H.sub.2. Additionally, a process in which water is added directly without prior history to the AlH.sub.3:PA composite is also disclosed.
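The two-step release scheme is consistent with the standard overall stoichiometry below (textbook reactions, not quoted from the disclosure; Al* denotes the activated aluminum byproduct):

```latex
% Step 1: thermal decomposition of alane
2\,\mathrm{AlH_3} \;\xrightarrow{\Delta}\; 2\,\mathrm{Al^*} + 3\,\mathrm{H_2}
% Step 2: hydrolysis of the activated aluminum byproduct
2\,\mathrm{Al^*} + 6\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{Al(OH)_3} + 3\,\mathrm{H_2}
```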
Lakhtakia, Sundeep; Basha, Jahangeer; Talukdar, Rupjyoti; Gupta, Rajesh; Nabi, Zaheer; Ramchandani, Mohan; Kumar, B V N; Pal, Partha; Kalpala, Rakesh; Reddy, P Manohar; Pradeep, R; Singh, Jagadish R; Rao, G V; Reddy, D Nageshwar
2017-06-01
EUS-guided drainage using plastic stents may be inadequate for treatment of walled-off necrosis (WON). Recent studies report variable outcomes even when using covered metal stents. The aim of this study was to evaluate the efficacy of a dedicated covered biflanged metal stent (BFMS) when adopting an endoscopic "step-up approach" for drainage of symptomatic WON. We retrospectively evaluated consecutive patients with symptomatic WON who underwent EUS-guided drainage using BFMSs over a 3-year period. Reassessment was done between 48 and 72 hours for resolution. Endoscopic reinterventions were tailored in nonresponders in a stepwise manner. Step 1 encompassed declogging the blocked lumen of the BFMS. In step 2, a nasocystic tube was placed via BFMSs with intermittent irrigation. Step 3 involved direct endoscopic necrosectomy (DEN). BFMSs were removed between 4 and 8 weeks of follow-up. The main outcome measures were technical success, clinical success, adverse events, and need for DEN. Two hundred five WON patients underwent EUS-guided drainage using BFMSs. Technical success was achieved in 203 patients (99%). Periprocedure adverse events occurred in 8 patients (bleeding in 6, perforation in 2). Clinical success with BFMSs alone was seen in 153 patients (74.6%). Reintervention adopting the step-up approach was required in 49 patients (23.9%). Incremental success was achieved in 10 patients with step 1, 16 patients with step 2, and 19 patients with step 3. Overall clinical success was achieved in 198 patients (96.5%), with DEN required in 9.2%. Four patients failed treatment and required surgery (2) or percutaneous drainage (2). The endoscopic step-up approach using BFMSs was safe, effective, and yielded successful outcomes in most patients, reducing the need for DEN. Copyright © 2017 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
Steps in the open space planning process
Stephanie B. Kelly; Melissa M. Ryan
1995-01-01
This paper presents the steps involved in developing an open space plan. The steps are generic in that the methods may be applied to communities of various sizes. The intent is to provide a framework for developing an open space plan that meets Massachusetts requirements for funding of open space acquisition.
40 CFR 35.925-1 - Facilities planning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Facilities planning. 35.925-1 Section... Facilities planning. That, if the award is for step 2, step 3, or step 2+3 grant assistance, the facilities planning requirements in § 35.917 et seq. have been met. ...
GeoMapApp Learning Activities: Enabling the democratisation of geoscience learning
NASA Astrophysics Data System (ADS)
Goodwillie, A. M.; Kluge, S.
2011-12-01
GeoMapApp Learning Activities (http://serc.carleton.edu/geomapapp) are step-by-step guided inquiry geoscience education activities that enable students to dictate the pace of learning. They can be used in the classroom or out of class, and their guided nature means that the requirement for teacher intervention is minimised which allows students to spend increased time analysing and understanding a broad range of geoscience data, content and concepts. Based upon GeoMapApp (http://www.geomapapp.org), a free, easy-to-use map-based data exploration and visualisation tool, each activity furnishes the educator with an efficient package of downloadable documents. This includes step-by-step student instructions and answer sheet; a teacher's edition annotated worksheet containing teaching tips, additional content and suggestions for further work; quizzes for use before and after the activity to assess learning; and a multimedia tutorial. The activities can be used by anyone at any time in any place with an internet connection. In essence, GeoMapApp Learning Activities provide students with cutting-edge technology, research-quality geoscience data sets, and inquiry-based learning in a virtual lab-like environment. Examples of activities so far created are student calculation and analysis of the rate of seafloor spreading, and present-day evidence on the seafloor for huge ancient landslides around the Hawaiian islands. The activities are designed primarily for students at the community college, high school and introductory undergraduate levels, exposing students to content and concepts typically found in those settings.
Hollow Microtube Resonators via Silicon Self-Assembly toward Subattogram Mass Sensing Applications.
Kim, Joohyun; Song, Jungki; Kim, Kwangseok; Kim, Seokbeom; Song, Jihwan; Kim, Namsu; Khan, M Faheem; Zhang, Linan; Sader, John E; Park, Keunhan; Kim, Dongchoul; Thundat, Thomas; Lee, Jungchul
2016-03-09
Fluidic resonators with integrated microchannels (hollow resonators) are attractive for mass, density, and volume measurements of single micro/nanoparticles and cells, yet their widespread use is limited by the complexity of their fabrication. Here we report a simple and cost-effective approach for fabricating hollow microtube resonators. A prestructured silicon wafer is annealed at high temperature under a controlled atmosphere to form self-assembled buried cavities. The interiors of these cavities are oxidized to produce thin oxide tubes, following which the surrounding silicon material is selectively etched away to suspend the oxide tubes. This simple three-step process easily produces hollow microtube resonators. We report another innovation in the capping glass wafer, where we integrate fluidic access channels and getter materials along with residual gas suction channels. Combined, only five photolithographic steps and one bonding step are required to fabricate vacuum-packaged hollow microtube resonators that exhibit quality factors as high as ∼13,000. We go one step further to explore additional attractive features, including the ability to tune the device responsivity, change the resonator material, and scale down the resonator size. A resonator wall thickness of ∼120 nm and a channel hydraulic diameter of ∼60 nm are demonstrated solely by conventional microfabrication approaches. The unique characteristics of this new fabrication process facilitate the widespread use of hollow microtube resonators, their translation between diverse research fields, and the production of commercially viable devices.
Step-wise refolding of recombinant proteins.
Tsumoto, Kouhei; Arakawa, Tsutomu; Chen, Linda
2010-04-01
Protein refolding is still done on a trial-and-error basis. Here we describe step-wise dialysis refolding, in which the denaturant concentration is altered in a step-wise fashion. This technology controls the folding pathway by adjusting the concentrations of the denaturant and other solvent additives to induce sequential folding or disulfide formation.
Variable-mesh method of solving differential equations
NASA Technical Reports Server (NTRS)
Van Wyk, R.
1969-01-01
Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
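A minimal sketch of a variable-mesh predictor-corrector in this spirit: a variable-step two-step Adams-Bashforth predictor paired with a trapezoidal corrector, where the predictor-corrector difference drives selection of the next step size. The specific formulas, tolerances, and step-size rule are illustrative, not taken from the NASA report.

```python
import math

# Variable-mesh two-step predictor (Adams-Bashforth, unequal steps)
# with a trapezoidal (Adams-Moulton) corrector. The gap between the
# predicted and corrected values serves as the local error estimate
# used to choose the next step size.

def solve_ab2_trap(f, t0, y0, t_end, h0=0.05, tol=1e-6):
    t, y = t0, y0
    h = h0
    # bootstrap the two-point history with one Euler step
    f_prev, h_prev = f(t, y), h
    y += h * f_prev
    t += h
    while t < t_end - 1e-12:
        h = min(h, t_end - t)
        fn = f(t, y)
        r = h / (2.0 * h_prev)
        y_pred = y + h * ((1.0 + r) * fn - r * f_prev)   # predictor
        y_corr = y + 0.5 * h * (fn + f(t + h, y_pred))   # corrector
        err = abs(y_corr - y_pred)                       # local error proxy
        if err <= tol:                                   # accept the step
            f_prev, h_prev = fn, h
            t, y = t + h, y_corr
        # grow or shrink the next step from the error estimate
        h *= min(2.0, max(0.5, 0.9 * (tol / max(err, 1e-15)) ** (1.0 / 3.0)))
    return y

# y' = y with y(0) = 1, so y(1) should be close to e
y1 = solve_ab2_trap(lambda t, y: y, 0.0, 1.0, 1.0)
print(y1, math.e)
```

The key property the abstract points at is visible here: the method retains its local accuracy while the step-size criterion is computed from quantities the method already produces.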
Communication as a Strategic Activity (Invited)
NASA Astrophysics Data System (ADS)
Fischhoff, B.
2010-12-01
Effective communication requires preparation. The first step is explicit analysis of the decisions faced by audience members, in order to identify the facts essential to their choices. The second step is assessing their current beliefs, in order to identify the gaps in their understanding, as well as their natural ways of thinking. The third step is drafting communications potentially capable of closing those gaps, taking advantage of the relevant behavioral science. The fourth step is empirically evaluating those communications, refining them as necessary. The final step is communicating through trusted channels, capable of getting the message out and receiving needed feedback. Executing these steps requires a team involving subject matter experts (for ensuring that the science is right), decision analysts (for identifying the decision-critical facts), behavioral scientists (for designing and evaluating messages), and communication specialists (for creating credible channels). Larger organizations should be able to assemble those teams and anticipate their communication needs. However, even small organizations, individuals, or large organizations that have been caught flat-footed can benefit from quickly assembling informal teams, before communicating in ways that might undermine their credibility. The work is not expensive, but does require viewing communication as a strategic activity, rather than an afterthought. The talk will illustrate the science base, with a few core research results; note the risks of miscommunication, with a few bad examples; and suggest the opportunities for communication leadership, focusing on the US Food and Drug Administration.
Design and validation of instruments to measure knowledge.
Elliott, T E; Regal, R R; Elliott, B A; Renier, C M
2001-01-01
Measuring health care providers' learning after they have participated in educational interventions that use experimental designs requires valid, reliable, and practical instruments. A literature review was conducted. In addition, experience gained from designing and validating instruments for measuring the effect of an educational intervention informed this process. The eight main steps for designing, validating, and testing the reliability of instruments for measuring learning outcomes are presented. The key considerations and rationale for this process are discussed. Methods for critiquing and adapting existing instruments and creating new ones are offered. This study may help other investigators in developing valid, reliable, and practical instruments for measuring the outcomes of educational activities.
Ligand Binding: Molecular Mechanics Calculation of the Streptavidin-Biotin Rupture Force
NASA Astrophysics Data System (ADS)
Grubmuller, Helmut; Heymann, Berthold; Tavan, Paul
1996-02-01
The force required to rupture the streptavidin-biotin complex was calculated here by computer simulations. The computed force agrees well with that obtained by recent single molecule atomic force microscope experiments. These simulations suggest a detailed multiple-pathway rupture mechanism involving five major unbinding steps. Binding forces and specificity are attributed to a hydrogen bond network between the biotin ligand and residues within the binding pocket of streptavidin. During rupture, additional water bridges substantially enhance the stability of the complex and even dominate the binding interactions. In contrast, steric restraints do not appear to contribute to the binding forces, although conformational motions were observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-01
meraculous2 is a whole genome shotgun assembler for short reads that is capable of assembling large, polymorphic genomes with modest computational requirements. Meraculous relies on an efficient and conservative traversal of the subgraph of the k-mer (de Bruijn) graph of oligonucleotides with unique high quality extensions in the dataset, avoiding an explicit error correction step as used in other short-read assemblers. Additional features include (1) handling of allelic variation using "bubble" structures within the de Bruijn graph, (2) gap closing of repetitive and low quality regions using localized assemblies, and (3) an improved scaffolding algorithm that produces more complete assemblies without compromising on scaffolding accuracy.
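The conservative-traversal idea can be shown in a toy form: walk the de Bruijn graph and extend a contig only while the next base is unique, so any branch (a possible error or repeat) stops the walk instead of being error-corrected. The reads and k below are illustrative, and this sketch is not Meraculous itself.

```python
from collections import defaultdict

def build_extensions(reads, k):
    """Map each (k-1)-mer to the set of observed next bases."""
    ext = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            ext[kmer[:-1]].add(kmer[-1])
    return ext

def extend(seed, ext, k, max_len=100):
    """Extend a seed contig while the next base is unambiguous."""
    contig = seed
    while len(contig) < max_len:
        nxt = ext.get(contig[-(k - 1):], set())
        if len(nxt) != 1:   # branch or dead end: stop conservatively
            break
        contig += next(iter(nxt))
    return contig

reads = ["ACGTACGGA", "GTACGGATT"]
ext = build_extensions(reads, k=4)
print(extend("ACGT", ext, k=4))   # stops where "ACG" has two extensions
```

In these toy reads the 3-mer "ACG" is followed by both T and G, so the walk halts there rather than guessing, which is the behavior the abstract describes as avoiding explicit error correction.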
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarpelli, Andrea
Nonlinear integrable optics applied to beam dynamics may mitigate multi-particle instabilities, but proof-of-principle experiments have never been carried out. The Integrable Optics Test Accelerator (IOTA) is an electron and proton storage ring currently being built at Fermilab, which addresses tests of nonlinear lattice elements in a real machine in addition to experiments on optical stochastic cooling and on the single-electron wave function. These experiments require outstanding control over the lattice parameters, achievable with fast and precise beam monitoring systems. This work describes the steps for designing and building a beam monitor for IOTA based on synchrotron radiation, able to measure the intensity, position, and transverse cross-section of the beam.
Rep. Honda, Michael M. [D-CA-17
2013-10-29
House - 10/29/2013 Referred to the Committee on Ways and Means, and in addition to the Committee on Rules, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. (All Actions) Notes: On 2/4/2014, a motion was filed to discharge the Committee on Rules from the consideration of H.Res.459 entitled, a resolution providing for the consideration of the bill (H.R. 3372). A discharge petition requires 218 signatures for further action. (Discharge Petition No. 113-6: text...
MacLea, Kyle S.; Paul, Kacy R.; Ben-Musa, Zobaida; Waechter, Aubrey; Shattuck, Jenifer E.; Gruca, Margaret
2014-01-01
Multiple yeast prions have been identified that result from the structural conversion of proteins into a self-propagating amyloid form. Amyloid-based prion activity in yeast requires a series of discrete steps. First, the prion protein must form an amyloid nucleus that can recruit and structurally convert additional soluble proteins. Subsequently, maintenance of the prion during cell division requires fragmentation of these aggregates to create new heritable propagons. For the Saccharomyces cerevisiae prion protein Sup35, these different activities are encoded by different regions of the Sup35 prion domain. An N-terminal glutamine/asparagine-rich nucleation domain is required for nucleation and fiber growth, while an adjacent oligopeptide repeat domain is largely dispensable for prion nucleation and fiber growth but is required for chaperone-dependent prion maintenance. Although prion activity of glutamine/asparagine-rich proteins is predominantly determined by amino acid composition, the nucleation and oligopeptide repeat domains of Sup35 have distinct compositional requirements. Here, we quantitatively define these compositional requirements in vivo. We show that aromatic residues strongly promote both prion formation and chaperone-dependent prion maintenance. In contrast, nonaromatic hydrophobic residues strongly promote prion formation but inhibit prion propagation. These results provide insight into why some aggregation-prone proteins are unable to propagate as prions. PMID:25547291
Chen, Yu; Li, Faqiang; Wurtzel, Eleanore T.
2010-01-01
Metabolic engineering of plant carotenoids in food crops has been a recent focus for improving human health. Pathway manipulation is predicated on comprehensive knowledge of this biosynthetic pathway, which has been extensively studied. However, there existed the possibility of an additional biosynthetic step thought to be dispensable because it could be compensated for by light. This step, mediated by a putative Z-ISO, was predicted to occur in the sequence of redox reactions that are coupled to an electron transport chain and convert the colorless 15-cis-phytoene to the red-colored all-trans-lycopene. The enigma of carotenogenesis in the absence of light (e.g. in endosperm, a target for improving nutritional content) argued for Z-ISO as a pathway requirement. Therefore, understanding of plant carotenoid biosynthesis was obviously incomplete. To prove the existence of Z-ISO, maize (Zea mays) and Arabidopsis (Arabidopsis thaliana) mutants were isolated and the gene identified. Functional testing of the gene product in Escherichia coli showed isomerization of the 15-cis double bond in 9,15,9′-tri-cis-ζ-carotene, proving that Z-ISO encoded the missing step. Z-ISO was found to be important for both light-exposed and “dark” tissues. Comparative genomics illuminated the origin of Z-ISO found throughout higher and lower plants, algae, diatoms, and cyanobacteria. Z-ISO evolved from an ancestor related to the NnrU (for nitrite and nitric oxide reductase U) gene required for bacterial denitrification, a pathway that produces nitrogen oxides as alternate electron acceptors for anaerobic growth. Therefore, plant carotenogenesis evolved by recruitment of genes from noncarotenogenic bacteria. PMID:20335404
Robust double gain unscented Kalman filter for small satellite attitude estimation
NASA Astrophysics Data System (ADS)
Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun
2017-08-01
Limited by the low precision of small-satellite sensors, high-performance estimation theory remains a popular research topic in attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation with considerable success. However, most existing methods use only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which imposes higher performance requirements on the classical KF in the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small-satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and an unscented transform (UT) strategy are introduced to enhance the robustness and performance of the filter and reduce the influence of the model errors. Numerical simulations show that the proposed RDG-UKF is more effective and more robust than the classical unscented Kalman filter (UKF) in dealing with model errors and low-precision sensors in small-satellite attitude estimation.
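For background, the unscented transform mentioned above propagates a mean and covariance through a nonlinearity via weighted sigma points rather than by linearization. Below is a minimal one-dimensional sketch of the standard UT, not the paper's RDG-UKF; the scaling parameter κ and the test nonlinearity are conventional choices.

```python
import math

# Standard 1-D unscented transform: deterministic sigma points carry
# the mean and variance through a nonlinear function f.

def unscented_transform(mean, var, f, kappa=2.0):
    n = 1  # state dimension in this sketch
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa)] + [1.0 / (2.0 * (n + kappa))] * 2
    ys = [f(s) for s in sigmas]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# Propagate N(0, 0.5) through f(x) = x^2. With kappa = 3 - n the UT
# matches Gaussian fourth moments, so mean and variance come out exact.
m, v = unscented_transform(0.0, 0.5, lambda x: x * x)
print(m, v)
```

A UKF applies this transform twice per cycle, once through the dynamics and once through the measurement model, before forming the Kalman gain from the resulting covariances.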
Chen, Michelle B.; Whisler, Jordan A.; Fröse, Julia; Yu, Cathy; Shin, Yoojin
2017-01-01
Distant metastasis, which results in >90% of cancer related deaths, is enabled by hematogenous dissemination of tumor cells via the circulation. This requires the completion of a sequence of complex steps including transit, initial arrest, extravasation, survival and proliferation. Increased understanding of the cellular and molecular players enabling each of these steps is key in uncovering new opportunities for therapeutic intervention during early metastatic dissemination. Here, we describe an in vitro model of the human microcirculation with the potential to recapitulate discrete steps of early metastatic seeding, including arrest, transendothelial migration and early micrometastases formation. The microdevice features self-organized human microvascular networks formed over 4–5 days, after which tumor can be perfused and extravasation events easily tracked over 72 hours, via standard confocal microscopy. Contrary to most in vivo and in vitro extravasation assays, robust and rapid scoring of extravascular cells combined with high-resolution imaging can be easily achieved due to the confinement of the vascular network to one plane close to the surface of the device. This renders extravascular cells clearly distinct and allows tumor cells of interest to be identified quickly compared to those in thick tissues. The ability to generate large numbers of devices (~36) per experiment coupled with fast quantitation further allows for highly parametric studies, which is required when testing multiple genetic or pharmacological perturbations. This is coupled with the capability for live tracking of single cell extravasation events allowing both tumor and endothelial morphological dynamics to be observed in high detail with a moderate number of data points. This Protocol Extension describes an adaptation of an existing Protocol describing a microfluidic platform that offers additional applications. PMID:28358393
Lim, Natalie Y. N.; Roco, Constance A.; Frostegård, Åsa
2016-01-01
Adequate comparisons of DNA and cDNA libraries from complex environments require methods for co-extraction of DNA and RNA due to the inherent heterogeneity of such samples, or risk bias caused by variations in lysis and extraction efficiencies. Still, there are few methods and kits allowing simultaneous extraction of DNA and RNA from the same sample, and the existing ones generally require optimization. The proprietary nature of kit components, however, makes modifications of individual steps in the manufacturer’s recommended procedure difficult. Surprisingly, enzymatic treatments are often performed before purification procedures are complete, which we have identified here as a major problem when seeking efficient genomic DNA removal from RNA extracts. Here, we tested several DNA/RNA co-extraction commercial kits on inhibitor-rich soils, and compared them to a commonly used phenol-chloroform co-extraction method. Since none of the kits/methods co-extracted high-quality nucleic acid material, we optimized the extraction workflow by introducing small but important improvements. In particular, we illustrate the need for extensive purification prior to all enzymatic procedures, with special focus on the DNase digestion step in RNA extraction. These adjustments led to the removal of enzymatic inhibition in RNA extracts and made it possible to reduce genomic DNA to below detectable levels as determined by quantitative PCR. Notably, we confirmed that DNase digestion may not be uniform in replicate extraction reactions, thus the analysis of “representative samples” is insufficient. The modular nature of our workflow protocol allows optimization of individual steps. It also increases focus on additional purification procedures prior to enzymatic processes, in particular DNases, yielding genomic DNA-free RNA extracts suitable for metatranscriptomic analysis. PMID:27803690
A comparison study: image-based vs signal-based retrospective gating on microCT
NASA Astrophysics Data System (ADS)
Liu, Xuan; Salmon, Phil L.; Laperre, Kjell; Sasov, Alexander
2017-09-01
Retrospective gating in animal studies with microCT has gained popularity in recent years. Previously, we used ECG signals for cardiac gating and breathing airflow or video signals of abdominal motion for respiratory gating. This method is adequate and works well for most applications. However, over the years, researchers have noticed some pitfalls in the method. For example, the additional signal acquisition step may increase the failure rate in practice. X-ray image-based gating, on the other hand, does not require any extra step during scanning. Therefore we investigated image-based gating techniques. This paper presents a comparison study of the image-based versus the signal-based approach to retrospective gating. The two application areas we have studied are respiratory and cardiac imaging for both rats and mice. Image-based respiratory gating on microCT is relatively straightforward and has been done by several other researchers and groups. This method retrieves an intensity curve of a region of interest (ROI) placed in the lung area on all projections. From scans on our systems based on a step-and-shoot scanning mode, we confirm that this method is very effective. A detailed comparison between image-based and signal-based gating methods is given. For cardiac gating, breathing motion is not negligible and has to be dealt with. Another difficulty in cardiac gating is the relatively smaller amplitude of cardiac movements compared to respiratory movements, and the higher heart rate. A higher heart rate requires high-speed image acquisition. We have been working on our systems to improve the acquisition speed. A dual gating technique has been developed to achieve adequate cardiac imaging.
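The image-based respiratory gating idea (a lung-ROI intensity per projection, then binning projections by respiratory state) can be sketched as follows. The synthetic intensity curve stands in for real projection frames, and the 4-bin amplitude rule is an assumed gating criterion, not necessarily the one used in the study.

```python
import math

# Sketch of image-based respiratory gating: compute a lung-ROI mean
# intensity for every projection, then assign each projection a bin by
# where its intensity falls between the global min and max.

def roi_mean(frame):
    return sum(frame) / len(frame)

# Synthetic "projections": ROI intensity modulated by breathing
# (5 breath cycles over 120 projections).
n_proj = 120
signal = [roi_mean([100.0 + 10.0 * math.sin(2 * math.pi * 5 * i / n_proj) + j
                    for j in range(4)])
          for i in range(n_proj)]

def gate(signal, n_bins=4):
    """Amplitude-based binning of the ROI intensity curve."""
    lo, hi = min(signal), max(signal)
    return [min(int((s - lo) / (hi - lo + 1e-12) * n_bins), n_bins - 1)
            for s in signal]

bins = gate(signal)
print(bins[:12])   # bin index per projection; reconstruct each bin separately
```

Projections sharing a bin are then reconstructed together, yielding one volume per respiratory state without any external gating signal.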
Effects of wide step walking on swing phase hip muscle forces and spatio-temporal gait parameters.
Bajelan, Soheil; Nagano, Hanatsu; Sparrow, Tony; Begg, Rezaul K
2017-07-01
Human walking can be viewed essentially as a continuum of anterior balance loss followed by a step that re-stabilizes balance. To secure balance, an extended base of support can be assistive, but healthy young adults tend to walk with relatively narrow steps compared with vulnerable populations (e.g., older adults and patients). It was, therefore, hypothesized that wide step walking may enhance dynamic balance at the cost of disturbing the optimum coupling of muscle functions, leading to additional muscle work and an associated reduction in gait economy. Young healthy adults may select relatively narrow steps for a more efficient gait. The current study focused on the effects of wide step walking on hip abductor and adductor muscles and spatio-temporal gait parameters. To this end, lower body kinematic data and ground reaction forces were obtained using an Optotrak motion capture system and AMTI force plates, respectively, while AnyBody software was employed for muscle force simulation. A single step of four healthy young male adults was captured during preferred walking and wide step walking. Based on preferred walking data, two parallel lines were drawn on the walkway to indicate a 50% larger step width, and participants targeted the lines with their heels as they walked. In addition to step width, which defined the walking conditions, other spatio-temporal gait parameters including step length, double support time, and single support time were obtained. Average hip muscle forces during swing were modeled. Results showed that in wide step walking step length increased, Gluteus Minimus muscles were more active, while Gracilis and Adductor Longus revealed considerably reduced forces. In conclusion, greater use of abductors and loss of adductor forces were found in wide step walking. Further validation is needed in future studies involving older adults and pathological populations.
Razavi, Morteza; Leigh Anderson, N; Pope, Matthew E; Yip, Richard; Pearson, Terry W
2016-09-25
Efficient robotic workflows for trypsin digestion of human plasma and subsequent antibody-mediated peptide enrichment (the SISCAPA method) were developed with the goal of improving assay precision and throughput for multiplexed protein biomarker quantification. First, an 'addition only' tryptic digestion protocol was simplified from classical methods, eliminating the need for sample cleanup, while improving reproducibility, scalability and cost. Second, methods were developed to allow multiplexed enrichment and quantification of peptide surrogates of protein biomarkers representing a very broad range of concentrations and widely different molecular masses in human plasma. The total workflow coefficients of variation (including the 3 sequential steps of digestion, SISCAPA peptide enrichment and mass spectrometric analysis) for 5 proteotypic peptides measured in 6 replicates of each of 6 different samples repeated over 6 days averaged 3.4% within-run and 4.3% across all runs. An experiment to identify sources of variation in the workflow demonstrated that MRM measurement and tryptic digestion steps each had average CVs of ∼2.7%. Because of the high purity of the peptide analytes enriched by antibody capture, the liquid chromatography step is minimized and in some cases eliminated altogether, enabling throughput levels consistent with requirements of large biomarker and clinical studies. Copyright © 2016 Elsevier B.V. All rights reserved.
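The within-run and across-run precision figures quoted above are coefficients of variation over a replicate design (replicates within each day, repeated over days). A minimal sketch of how such CVs are computed is below; the (days, replicates) layout and the averaging of per-day CVs are illustrative assumptions, not the authors' exact statistical protocol.

```python
import numpy as np

def workflow_cvs(measurements):
    """Within-run and across-run CV% for one analyte in one sample.

    measurements : (days, replicates) array of e.g. peak areas.
    Layout and averaging scheme are illustrative assumptions.
    """
    m = np.asarray(measurements, dtype=float)
    # Within-run CV: CV computed inside each day, then averaged over days.
    within = (m.std(axis=1, ddof=1) / m.mean(axis=1)).mean() * 100
    # Across-run CV: CV over all measurements pooled across all days.
    across = m.std(ddof=1) / m.mean() * 100
    return within, across
```

A workflow with small within-run CV but larger across-run CV points to day-to-day variation (e.g., digestion batches) rather than measurement noise.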
Effect of ozonation on the removal of cyanobacterial toxins during drinking water treatment.
Hoeger, Stefan J; Dietrich, Daniel R; Hitzfeld, Bettina C
2002-01-01
Water treatment plants faced with toxic cyanobacteria have to be able to remove cyanotoxins from raw water. In this study we investigated the efficacy of ozonation coupled with various filtration steps under different cyanobacterial bloom conditions. Cyanobacteria were ozonated in a laboratory-scale batch reactor modeled on a system used by a modern waterworks, with subsequent activated carbon and sand filtration steps. The presence of cyanobacterial toxins (microcystins) was determined using the protein phosphatase inhibition assay. We found that ozone concentrations of at least 1.5 mg/L were required to provide enough oxidation potential to destroy the toxin present in 5 × 10^5 Microcystis aeruginosa cells/mL [total organic carbon (TOC), 1.56 mg/L]. High raw water TOC was shown to reduce the efficiency of free toxin oxidation and destruction. In addition, ozonation of raw waters containing high cyanobacteria cell densities will result in cell lysis and liberation of intracellular toxins. Thus, we emphasize that only regular and simultaneous monitoring of TOC/dissolved organic carbon and cyanobacterial cell densities, in conjunction with online residual O3 concentration determination and efficient filtration steps, can ensure the provision of safe drinking water from surface waters contaminated with toxic cyanobacterial blooms. PMID:12417484
Comparison of DNA extraction methods for human gut microbial community profiling.
Lim, Mi Young; Song, Eun-Ji; Kim, Sang Ho; Lee, Jangwon; Nam, Young-Do
2018-03-01
The human gut harbors a vast range of microbes that have a significant impact on health and disease. Therefore, gut microbiome profiling holds promise for use in early diagnosis and precision medicine development. Accurate profiling of the highly complex gut microbiome requires DNA extraction methods that provide sufficient coverage of the original community as well as adequate quality and quantity. We tested nine different DNA extraction methods using three commercial kits (TianLong Stool DNA/RNA Extraction Kit (TS), QIAamp DNA Stool Mini Kit (QS), and QIAamp PowerFecal DNA Kit (QP)), with or without an additional bead-beating step, using manual or automated methods, and compared their ability to extract DNA from a human fecal sample. All methods produced DNA of sufficient concentration and quality for use in sequencing, and the samples clustered according to the DNA extraction method. Inclusion of a bead-beating step in particular resulted in higher degrees of microbial diversity and had the greatest effect on gut microbiome composition. Among the samples subjected to bead-beating, TS kit samples were more similar to QP kit samples than to QS kit samples. Our results emphasize the importance of a mechanical disruption step for a more comprehensive profiling of the human gut microbiome. Copyright © 2017 The Authors. Published by Elsevier GmbH. All rights reserved.
Slauson, Stephen R; Pemberton, Ryan; Ghosh, Partha; Tantillo, Dean J; Aubé, Jeffrey
2015-05-15
The development of the domino reaction between an aminoethyl-substituted diene and maleic anhydride to afford an N-substituted octahydroisoquinolin-1-one is described. A typical procedure involves the treatment of a 1-aminoethyl-substituted butadiene with maleic anhydride at 0 °C to room temperature for 20 min under low-solvent conditions, which affords a series of isoquinolinone carboxylic acids in moderate to excellent yields. NMR monitoring suggested that the reaction proceeded via an initial acylation step followed by an intramolecular Diels-Alder reaction. For the latter step, a significant rate difference was observed depending on whether the amino group was substituted by a phenyl or an alkyl (usually benzyl) substituent, with the former noted by NMR to be substantially slower. The Diels-Alder step was studied by density functional theory (DFT) methods, leading to the conclusion that the degree of preorganization in the starting acylated intermediate had the largest effect on the reaction barriers. In addition, the effect of electronics on the aromatic ring in N-phenyl substrates was studied computationally and experimentally. Overall, this protocol proved considerably more amenable to scale up compared to earlier methods by eliminating the requirement of microwave batch chemistry for this reaction as well as significantly reducing the quantity of solvent.
Najafi, Bijan; Miller, Daniel; Jarrett, Beth D; Wrobel, James S
2010-05-01
Many studies have attempted to better elucidate the effect of foot orthoses on gait dynamics. To our knowledge, most previous studies exclude the first few steps of gait and begin analysis at steady-state walking. These unanalyzed steps may contain important information about the dynamic and complex processes required to achieve equilibrium for a given gait velocity. The purpose of this study was to quantify gait initiation and determine how many steps were required to reach steady-state walking under three footwear conditions: barefoot, habitual shoes, and habitual shoes with prefabricated foot orthoses. Fifteen healthy subjects walked 50 m at habitual speed in each condition. Wearing habitual shoes with the prefabricated orthoses enabled subjects to reach steady-state walking in fewer steps (3.5±2.0 steps) compared with the barefoot condition (5.2±3.0 steps; p=0.02) as well as the habitual shoes condition (4.7±1.6 steps; p=0.05). Interestingly, the subjects' dynamic medial-lateral balance was significantly improved (22%, p<0.05) by using foot orthoses compared with the other footwear conditions. These findings suggest that foot orthoses may help individuals reach steady state more quickly and with better dynamic balance in the medial-lateral direction, independent of foot type. The findings of this pilot study may open new avenues for objectively assessing the impact of prescription footwear on dynamic balance and spatio-temporal parameters of gait. Further work to better assess the impact of foot orthoses on gait initiation in patients suffering from gait and instability pathologies may be warranted. Copyright 2010 Elsevier B.V. All rights reserved.
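Counting the steps needed to reach steady state requires a concrete criterion for "steady state". A simple sketch of one such criterion is below: take the mean velocity of the final few steps as the steady-state reference and report the index of the first step that falls within a tolerance of it. The tolerance-to-reference rule is an assumption for illustration, not the criterion used in the study.

```python
import numpy as np

def steps_to_steady_state(step_velocities, tol=0.05, window=3):
    """Number of initial steps preceding steady-state walking.

    step_velocities : per-step velocities for one walking trial.
    The 'within tol (fractional) of the mean of the final `window` steps'
    criterion is an illustrative assumption.
    """
    v = np.asarray(step_velocities, dtype=float)
    target = v[-window:].mean()            # steady-state reference velocity
    for i, vi in enumerate(v):
        if abs(vi - target) <= tol * target:
            return i                       # steps taken before steady state
    return len(v)
```

Applied per trial and averaged over subjects, this yields figures comparable to the 3.5 vs 5.2 steps reported above.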
Structural Similitude and Scaling Laws
NASA Technical Reports Server (NTRS)
Simitses, George J.
1998-01-01
Aircraft and spacecraft comprise the class of aerospace structures that require efficiency and wisdom in design, sophistication and accuracy in analysis, and numerous careful experimental evaluations of components and prototypes, in order to achieve the necessary system reliability, performance, and safety. Preliminary and/or concept design entails the assemblage of system mission requirements, system expected performance, and identification of components and their connections, as well as of manufacturing and system assembly techniques. This is accomplished through experience based on previous similar designs, and through the possible use of models to simulate the entire system characteristics. Detail design is heavily dependent on information and concepts derived from the previous steps. This information identifies critical design areas which need sophisticated analyses, and design and redesign procedures to achieve the expected component performance. This step may require several independent analysis models, which, in many instances, require component testing. The last step in the design process, before going to production, is the verification of the design. This step necessitates the production of large components and prototypes in order to test component and system analytical predictions and verify strength and performance requirements under the worst loading conditions that the system is expected to encounter in service. Clearly then, full-scale testing is in many cases necessary and always very expensive. In the aircraft industry, in addition to full-scale tests, certification and safety necessitate large-component static and dynamic testing. Such tests are extremely difficult and time-consuming, but absolutely necessary. Clearly, one should not expect that prototype testing will be totally eliminated in the aircraft industry. It is hoped, though, that we can reduce full-scale testing to a minimum.
Full-scale large-component testing is necessary in other industries as well: shipbuilding, automobile, and railway car construction all rely heavily on testing. Regardless of the application, a model scaled down by a large factor (scale model) which closely represents the structural behavior of the full-scale system (prototype) can prove to be an extremely beneficial tool. This possible development must be based on the existence of certain structural parameters that control the behavior of a structural system when acted upon by static and/or dynamic loads. If such structural parameters exist, a scaled-down replica can be built which will duplicate the response of the full-scale system. The two systems are then said to be structurally similar. The term, then, that best describes this similarity is structural similitude. Similarity of systems requires that the relevant system parameters be identical and that these systems be governed by a unique set of characteristic equations. Thus, if a relation or equation of variables is written for a system, it is valid for all systems which are similar to it. Each variable in a model is proportional to the corresponding variable of the prototype. This ratio, which plays an essential role in predicting the relationship between the model and its prototype, is called the scale factor.
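The scale-factor relation defined above can be made concrete with a trivial sketch: one scale factor per variable, with the prototype response recovered from the model response through that factor. The direction of the ratio (model over prototype) is a convention chosen here for illustration; the text only requires proportionality.

```python
def scale_factor(model_value, prototype_value):
    """Scale factor for one variable: the ratio of the model value to
    the corresponding prototype value (direction of the ratio is a
    convention assumed here)."""
    return model_value / prototype_value

def predict_prototype(model_response, lam):
    """Similitude prediction: when the two systems are governed by the
    same characteristic equations, the prototype response is the model
    response divided by the same scale factor."""
    return model_response / lam
```

For example, with a geometric scale factor of 0.1 (a 1:10 model), a measured model deflection is divided by 0.1 to predict the prototype deflection, provided the similitude conditions for that variable hold.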