Direct imaging of small scatterers using reduced time dependent data
NASA Astrophysics Data System (ADS)
Cakoni, Fioralba; Rezac, Jacob D.
2017-06-01
We introduce qualitative methods for locating small objects using time dependent acoustic near field waves. These methods have reduced data collection requirements compared to typical qualitative imaging techniques. In particular, we only collect scattered field data in a small region surrounding the location from which an incident field was transmitted. The new methods are partially theoretically justified and numerical simulations demonstrate their efficacy. We show that these reduced data techniques give comparable results to methods which require full multistatic data and that these time dependent methods require less scattered field data than their time harmonic analogs.
22 CFR 401.19 - Reducing or extending time and dispensing with statements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations ... Reducing or extending time and dispensing with... RULES OF PROCEDURE Applications § 401.19 Reducing or extending time and dispensing with statements. In... Commission may reduce or extend the time for the presentation of any paper or the doing of any act required...
NASA Technical Reports Server (NTRS)
Maynard, V.
1976-01-01
Increased etch rate using 8% citric acid actually reduces the total amount of material etched away by eliminating the reprocessing that was frequently required. Time required in the citrosolve solution is reduced, and a more protective passive coating is provided.
Limitations of Reliability for Long-Endurance Human Spaceflight
NASA Technical Reports Server (NTRS)
Owens, Andrew C.; de Weck, Olivier L.
2016-01-01
Long-endurance human spaceflight - such as missions to Mars or its moons - will present a never-before-seen maintenance logistics challenge. Crews will be in space for longer and be farther away from Earth than ever before. Resupply and abort options will be heavily constrained, and will have timescales much longer than current and past experience. Spare parts and/or redundant systems will have to be included to reduce risk. However, the high cost of transportation means that this risk reduction must be achieved while also minimizing mass. The concept of increasing system and component reliability is commonly discussed as a means to reduce risk and mass by reducing the probability that components will fail during a mission. While increased reliability can reduce maintenance logistics mass requirements, the rate of mass reduction decreases over time. In addition, reliability growth requires increased test time and cost. This paper assesses trends in test time requirements, cost, and maintenance logistics mass savings as a function of increase in Mean Time Between Failures (MTBF) for some or all of the components in a system. In general, reliability growth results in superlinear growth in test time requirements, exponential growth in cost, and sublinear benefits (in terms of logistics mass saved). These trends indicate that it is unlikely that reliability growth alone will be a cost-effective approach to maintenance logistics mass reduction and risk mitigation for long-endurance missions. This paper discusses these trends as well as other options to reduce logistics mass such as direct reduction of part mass, commonality, or In-Space Manufacturing (ISM). Overall, it is likely that some combination of all available options - including reliability growth - will be required to reduce mass and mitigate risk for future deep space missions.
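The superlinear test-time trend can be illustrated with the Duane reliability-growth model, a standard relation that is not necessarily the model used in the paper; all numbers below are hypothetical.

```python
def duane_test_time(mtbf_target, mtbf_0, t_0, alpha):
    """Duane reliability-growth model: cumulative MTBF grows as
    MTBF(t) = mtbf_0 * (t / t_0)**alpha, so the cumulative test time
    needed to reach a target MTBF is t = t_0 * (target / mtbf_0)**(1/alpha).
    With a typical growth rate alpha < 1, test time grows superlinearly
    in the MTBF target."""
    return t_0 * (mtbf_target / mtbf_0) ** (1.0 / alpha)

# Hypothetical program: 500 h MTBF after 1,000 h of testing, growth rate 0.4.
for target in (1_000, 2_000, 4_000):
    print(f"target MTBF {target:>5} h -> "
          f"{duane_test_time(target, 500, 1_000, 0.4):>9,.0f} test-hours")
```

Each doubling of the MTBF target multiplies test time by 2^(1/0.4), roughly 5.7, which is the superlinear growth the paper describes.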
Ultrasound Picture Archiving And Communication Systems
NASA Astrophysics Data System (ADS)
Koestner, Ken; Hottinger, C. F.
1982-01-01
The ideal ultrasonic image communication and storage system must be flexible in order to optimize speed and minimize storage requirements. Various ultrasonic imaging modalities are quite different in data volume and speed requirements. Static imaging, for example B-Scanning, involves acquisition of a large amount of data that is averaged or accumulated in a desired manner. The image is then frozen in image memory before transfer and storage. Images are commonly a 512 x 512 point array, each point 6 bits deep. Transfer of such an image over a serial line at 9600 baud would require about three minutes. Faster transfer times are possible; for example, we have developed a parallel image transfer system using direct memory access (DMA) that reduces the time to 16 seconds. Data in this format requires 256K bytes for storage. Data compression can be utilized to reduce these requirements. Real-time imaging has much more stringent requirements for speed and storage. The amount of actual data per frame in real-time imaging is reduced due to physical limitations on ultrasound. For example, 100 scan lines (480 points long, 6 bits deep) can be acquired during a frame at a 30 per second rate. In order to transmit and save this data at a real-time rate requires a transfer rate of 8.6 Megabaud. A real-time archiving system would be complicated by the necessity of specialized hardware to interpolate between scan lines and perform desirable greyscale manipulation on recall. Image archiving for cardiology and radiology would require data transfer at this high rate to preserve temporal (cardiology) and spatial (radiology) information.
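The abstract's transfer-rate figures follow from simple arithmetic; this quick check reproduces them (assuming the 256 KB storage figure corresponds to storing each 6-bit pixel in one byte).

```python
# Static image: 512 x 512 pixels, 6 bits deep
image_bits = 512 * 512 * 6
print(image_bits / 9600 / 60)      # ~2.7 min over a 9600 baud serial line
print(512 * 512 / 1024)            # 256 KB if each pixel occupies one byte

# Real time: 100 scan lines x 480 points x 6 bits, 30 frames per second
frame_bits = 100 * 480 * 6
print(frame_bits * 30 / 1e6)       # ~8.6 Mbaud sustained transfer rate
```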
Next Generation Polar Seismic Instrumentation Challenges
NASA Astrophysics Data System (ADS)
Parker, T.; Beaudoin, B. C.; Gridley, J.; Anderson, K. R.
2011-12-01
Polar region logistics are the limiting factor for deploying deep field seismic arrays. The IRIS PASSCAL Instrument Center, in collaboration with UNAVCO, designed and deployed several systems that address some of the logistical constraints of polar deployments. However, continued logistics pressures coupled with increasingly ambitious science projects require further reducing the logistics required for deploying both summer and over-winter stations. Our focus is to reduce station power requirements and bulk, thereby minimizing the time and effort required to deploy these arrays. We will reduce the weight of the battery bank by incorporating the most applicable new high energy-density battery technology. Using these batteries will require a completely new power management system along with an appropriate smart enclosure. The other aspect will be to integrate the digitizing system with the sensor. Both of these technologies should reduce the install time, shipping volume, and weight while reducing some instrument costs. We will also continue work on an effective Iridium telemetry solution for automated data return. The costs and limitations of polar deep-field science easily justify a specialized development effort, which pays off doubly in that we will continue to leverage the advancements in reduced logistics and increased performance for the benefit of low-latitude seismic research.
NASA Technical Reports Server (NTRS)
Hsia, T. C.; Lu, G. Z.; Han, W. H.
1987-01-01
In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
Use of fiber reinforced concrete for concrete pavement slab replacement : [summary].
DOT National Transportation Integrated Search
2014-03-01
Replacing cracked concrete in roadways requires lanes to be closed and traffic disrupted. One way to reduce road closure time is to reduce concrete curing time. To accelerate curing time, pavement engineers mix a very low water-cement ratio w...
Enhanced thermal effect using magnetic nano-particles during high-intensity focused ultrasound.
Devarakonda, Surendra Balaji; Myers, Matthew R; Giridhar, Dushyanth; Dibaji, Seyed Ahmad Reza; Banerjee, Rupak Kumar
2017-01-01
Collateral damage and long sonication times occurring during high-intensity focused ultrasound (HIFU) ablation procedures limit clinical advancement. In this research, we investigated whether the use of magnetic nano-particles (mNPs) can reduce the power required to ablate tissue or, for the same power, reduce the duration of the procedure. Tissue-mimicking phantoms containing embedded thermocouples and physiologically acceptable concentrations (0%, 0.0047%, and 0.047%) of mNPs were sonicated at acoustic powers of 5.2 W, 9.2 W, and 14.5 W, for 30 seconds. Lesion volumes were determined for the phantoms with and without mNPs. It was found that with the 0.047% mNP concentration, the power required to obtain a lesion volume of 13 mm³ can be halved, and the time required to achieve a 21 mm³ lesion decreased by a factor of 5. We conclude that mNPs have the potential to reduce damage to healthy tissue, and reduce the procedure time, during tumor ablation using HIFU.
Lossless data compression for improving the performance of a GPU-based beamformer.
Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi
2015-04-01
The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. There are data compression methods (e.g., Joint Photographic Experts Group (JPEG)) available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm, which enables parallel compression and decompression of data. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU to GPU data transfers are reduced, without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field programmable gate array. In addition, the transmission time of transferring data from CPU to GPU with the parallel decoding process improved threefold, as compared with transferring original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement to transfer data from the hardware front end to the software system but also reduces the transmission time for CPU to GPU data transfer. © The Author(s) 2014.
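The abstract does not detail the codec itself; the sketch below shows a generic lossless scheme with the one property the authors exploit, per-scan-line independence, so that lines can be encoded and decoded in parallel (for example, one GPU thread block per line). It is an illustration, not the paper's algorithm.

```python
import numpy as np

def delta_encode(lines: np.ndarray) -> np.ndarray:
    """Per-scan-line delta encoding: each line is processed independently,
    so lines can be encoded and decoded in parallel (e.g., one GPU thread
    block per scan line)."""
    out = lines.copy()
    out[:, 1:] = lines[:, 1:] - lines[:, :-1]   # keep first sample, store deltas
    return out

def delta_decode(deltas: np.ndarray) -> np.ndarray:
    return np.cumsum(deltas, axis=1)            # prefix sum restores each line

rng = np.random.default_rng(0)
# Smooth, RF-like synthetic data: neighboring samples are correlated,
# so deltas are small and entropy-code compactly.
rf = np.cumsum(rng.integers(-8, 9, size=(64, 2048)), axis=1)
enc = delta_encode(rf)
assert np.array_equal(delta_decode(enc), rf)    # lossless round trip
print("max |delta|:", np.abs(enc[:, 1:]).max(), " max |sample|:", np.abs(rf).max())
```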
78 FR 46528 - Surety Bond Guarantee Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-01
... proposing to reduce the time frame allowed for a Surety to reimburse or credit SBA for salvage and recovery... the time frame reference required by the Recovery Act, which has expired, and by inserting the...)(1). SBA is proposing to reduce the time frame allowed for a Prior Approval Surety to submit a claim...
77 FR 6685 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-09
... proposed AD reduces compliance times for Model 767-400ER series airplanes. In addition, this proposed AD...). This proposed AD would reduce the compliance times for Model 767-400ER series airplanes. In addition... airplanes, the existing AD also requires a one- time inspection to determine if a tool runout option has...
77 FR 23420 - Airworthiness Directives; Bombardier, Inc. Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-19
...-2010-24, dated August 3, 2010, does not require replacement of the reducer of the hydraulic system No... of the times specified in paragraph (n)(1) or (n)(2) of this AD: Replace the reducer of the hydraulic... requires revising certain sections of a certain airplane flight manual, deactivating certain hydraulic...
77 FR 72347 - Information Collection Being Reviewed by the Federal Communications Commission
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-05
.... SUMMARY: As part of its continuing effort to reduce paperwork burden and as required by the Paperwork... information technology; and ways to further reduce the information burden for small business concerns with... per Response: 30 sec (.0084 hours). Frequency of Response: One time reporting requirement and third...
Reducing current reversal time in electric motor control
Bredemann, Michael V
2014-11-04
The time required to reverse current flow in an electric motor is reduced by exploiting inductive current that persists in the motor when power is temporarily removed. Energy associated with this inductive current is used to initiate reverse current flow in the motor.
Optimizing a Drone Network to Deliver Automated External Defibrillators.
Boutilier, Justin J; Brooks, Steven C; Janmohamed, Alyf; Byers, Adam; Buick, Jason E; Zhan, Cathy; Schoellig, Angela P; Cheskes, Sheldon; Morrison, Laurie J; Chan, Timothy C Y
2017-06-20
Public access defibrillation programs can improve survival after out-of-hospital cardiac arrest, but automated external defibrillators (AEDs) are rarely available for bystander use at the scene. Drones are an emerging technology that can deliver an AED to the scene of an out-of-hospital cardiac arrest for bystander use. We hypothesize that a drone network designed with the aid of a mathematical model combining both optimization and queuing can reduce the time to AED arrival. We applied our model to 53 702 out-of-hospital cardiac arrests that occurred in the 8 regions of the Toronto Regional RescuNET between January 1, 2006, and December 31, 2014. Our primary analysis quantified the drone network size required to deliver an AED 1, 2, or 3 minutes faster than historical median 911 response times for each region independently. A secondary analysis quantified the reduction in drone resources required if RescuNET was treated as a large coordinated region. The region-specific analysis determined that 81 bases and 100 drones would be required to deliver an AED ahead of median 911 response times by 3 minutes. In the most urban region, the 90th percentile of the AED arrival time was reduced by 6 minutes and 43 seconds relative to historical 911 response times in the region. In the most rural region, the 90th percentile was reduced by 10 minutes and 34 seconds. A single coordinated drone network across all regions required 39.5% fewer bases and 30.0% fewer drones to achieve similar AED delivery times. An optimized drone network designed with the aid of a novel mathematical model can substantially reduce the AED delivery time to an out-of-hospital cardiac arrest event. © 2017 American Heart Association, Inc.
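The study's model combines integer optimization with queuing theory; as a far simpler illustration of the base-placement core, the sketch below runs a greedy maximum-coverage heuristic on synthetic coordinates (the locations, coverage radius, and counts are all hypothetical).

```python
import numpy as np

def greedy_bases(arrests, candidates, radius_km, n_bases):
    """Greedy maximum-coverage heuristic: repeatedly open the candidate base
    covering the most still-uncovered historical arrest sites. A simplified
    stand-in for the paper's integer-programming plus queuing model."""
    # covered[i, j] = candidate i reaches arrest j within the radius
    d = np.linalg.norm(candidates[:, None, :] - arrests[None, :, :], axis=2)
    covered = d <= radius_km
    uncovered = np.ones(len(arrests), dtype=bool)
    chosen = []
    for _ in range(n_bases):
        gains = (covered & uncovered).sum(axis=1)
        best = int(gains.argmax())
        chosen.append(best)
        uncovered &= ~covered[best]
    return chosen, 1.0 - uncovered.mean()

rng = np.random.default_rng(1)
arrests = rng.uniform(0, 50, size=(500, 2))     # synthetic OHCA locations (km grid)
candidates = rng.uniform(0, 50, size=(40, 2))   # hypothetical base sites
bases, coverage = greedy_bases(arrests, candidates, radius_km=8.0, n_bases=10)
print(f"opened bases {bases}, covering {coverage:.0%} of events")
```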
USDA-ARS?s Scientific Manuscript database
Accurate and rapid assays for glucose are desirable for analysis of glucose and starch in food and feedstuffs. An established colorimetric glucose oxidase-peroxidase method for glucose was modified to reduce analysis time, and evaluated for factors that affected accuracy. Time required to perform t...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.
2005-08-01
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
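The comparable leverage of once-through time, iteration probability, and rework fraction is consistent with a simple geometric-series model of an iterating process step (an illustrative idealization assuming independent iterations, not the team's community model):

```latex
E[T] \;=\; t_{\mathrm{once}}\sum_{k=0}^{\infty}(p\,r)^{k}
      \;=\; \frac{t_{\mathrm{once}}}{1 - p\,r},
\qquad 0 \le p\,r < 1,
```

where t_once is the once-through step time, p the probability of backward iteration, and r the rework fraction; reducing any one of the three factors reduces the expected process time E[T].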
Effect of abdominopelvic abscess drain size on drainage time and probability of occlusion
Rotman, Jessica A.; Getrajdman, George I.; Maybody, Majid; Erinjeri, Joseph P.; Yarmohammadi, Hooman; Sofocleous, Constantinos T.; Solomon, Stephen B.; Boas, F. Edward
2016-01-01
Background: The purpose of this study is to determine whether larger abdominopelvic abscess drains reduce the time required for abscess resolution, or the probability of tube occlusion. Methods: 144 consecutive patients who underwent abscess drainage at a single institution were reviewed retrospectively. Results: Larger initial drain size did not reduce drainage time, drain occlusion, or drain exchanges (p>0.05). Subgroup analysis did not find any type of collection that benefitted from larger drains. A multivariate model predicting drainage time showed that large collections (>200 ml) required 16 days longer drainage time than small collections (<50 ml). Collections with a fistula to bowel required 17 days longer drainage time than collections without a fistula. Initial drain size and the viscosity of the fluid in the collection had no significant effect on drainage time in the multivariate model. Conclusions: 8 F drains are adequate for initial drainage of most serous and serosanguineous collections. 10 F drains are adequate for initial drainage of most purulent or bloody collections. PMID:27634422
Mocé, E; Blanch, E; Talaván, A; Viudes de Castro, M P
2014-10-15
Cooling sperm to 5 °C and equilibrating them at that temperature require the most time in any sperm cryopreservation protocol. Reducing the time required for these phases would simplify sperm freezing protocols and allow a greater number of ejaculates to be processed and frozen in a given time. This study determined how holding rabbit sperm at 5 °C for different lengths of time (0, 10, 15, 20, 30, or 45 minutes) affected the quality of rabbit sperm, measured by in vitro assays, and if reducing the cooling time to only 10 minutes affected the fertilizing ability of the sperm. Reducing the time sperm were held at 5 °C to 10 minutes did not affect the in vitro quality of the sperm (percent motile and with intact plasma membranes), although eliminating the cooling phase completely (directly freezing the sperm from room temperature) decreased in vitro assessed sperm quality (P<0.01). However, reducing the time sperm were held at 5 °C, from 45 to 10 minutes, negatively affected the fertilizing ability of sperm in vivo (P<0.05). In conclusion, completely eliminating cooling rabbit sperm to 5 °C before freezing is detrimental for rabbit sperm cryosurvival, and although shortening the time sperm are held at 5 °C to 10 minutes does not reduce in vitro sperm quality, it does reduce the fertility of rabbit sperm. Therefore, the length of time rabbit sperm equilibrate at 5 °C is crucial to the fertilizing ability of rabbit sperm and must be longer than 10 minutes. Currently, it is not known if holding rabbit sperm at 5 °C for less than 45 minutes will affect sperm fertilizing ability. Copyright © 2014 Elsevier Inc. All rights reserved.
Fast cooldown coaxial pulse tube microcooler
NASA Astrophysics Data System (ADS)
Nast, T.; Olson, J. R.; Champagne, P.; Roth, E.; Kaldas, G.; Saito, E.; Loung, V.; McCay, B. S.; Kenton, A. C.; Dobbins, C. L.
2016-05-01
We report the development and initial testing of the Lockheed Martin first-article, single-stage, compact, coaxial, Fast Cooldown Pulse Tube Microcryocooler (FC-PTM). The new cryocooler supports cooling requirements for emerging large, high operating temperature (105-150 K) infrared focal plane array sensors with nominal cooling loads of ~300 mW at 105 K (293 K ambient). This is a sequel development that builds on our inline and coaxial pulse tube microcryocoolers reported at CEC 2013 [7], ICC18 [8,9], and CEC 2015 [10]. The new FC-PTM and the prior units all share our long-life space technology attributes, which typically have 10 year life requirements [1]. The new prototype microcryocooler builds on the previous development by incorporating cold head design improvements in two key areas: 1) reduced cool-down time and 2) novel repackaging that greatly reduces envelope. The new coldhead and Dewar were significantly redesigned from the earlier versions in order to achieve a cooldown time of 2-3 minutes, a projected requirement for tactical applications. A design approach was devised to reduce the cold head length from 115 mm to 55 mm, while at the same time reducing cooldown time. We present new FC-PTM performance test measurements with comparisons to our previous pulse-tube microcryocooler measurements and design predictions. The FC-PTM exhibits attractive small size, volume, weight, power and cost (SWaP-C) features with sufficient cooling capacity over required ambient conditions that apply to an increasing variety of space and tactical applications.
A collaborative approach to lean laboratory workstation design reduces wasted technologist travel.
Yerian, Lisa M; Seestadt, Joseph A; Gomez, Erron R; Marchant, Kandice K
2012-08-01
Lean methodologies have been applied in many industries to reduce waste. We applied Lean techniques to redesign laboratory workstations with the aim of reducing the number of times employees must leave their workstations to complete their tasks. At baseline, in the 68 workflows (aggregates or sequences of process steps) studied, 251 (38%) of 664 tasks required workers to walk away from their workstations. After analysis and redesign, only 59 (9%) of the 664 tasks required technologists to leave their workstations. On average, 3.4 travel events were removed for each workstation. Time studies in a single laboratory section demonstrated that workers spend 8 to 70 seconds in travel each time they step away from the workstation. The redesigned workstations will allow employees to spend less time travelling around the laboratory. Additional benefits include employee training in waste identification, improved overall laboratory layout, and identification of other process improvement opportunities in our laboratory.
Motion Planning of Two Stacker Cranes in a Large-Scale Automated Storage/Retrieval System
NASA Astrophysics Data System (ADS)
Kung, Yiheng; Kobayashi, Yoshimasa; Higashi, Toshimitsu; Ota, Jun
We propose a method for reducing the computational time of motion planning for stacker cranes. Most automated storage/retrieval systems (AS/RSs) are equipped with only one stacker crane. However, this is logistically challenging, and greater work efficiency in warehouses, such as can be achieved with two stacker cranes, is required. In this paper, a warehouse with two stacker cranes working simultaneously is proposed. Unlike warehouses with only one crane, trajectory planning in those with two cranes is very difficult. Since two cranes work together, a proper trajectory must be considered to avoid collision. However, verifying collisions is complicated and requires a considerable amount of computational time. As transport work in AS/RSs occurs randomly, motion planning cannot be conducted in advance, and planning an appropriate trajectory within a restricted duration is a difficult task. We thereby address the problem of motion planning requiring extensive calculation time. As a solution, we propose a "free-step" to simplify the procedure of collision verification and reduce the computational time. We also propose a method to reschedule the order of collision verification in order to find an appropriate trajectory in less time. With the proposed method, we reduce the calculation time to less than 1/300 of that achieved in previous research.
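At its simplest, collision verification between two cranes sharing a rail is a safety-gap check between two position-time trajectories sampled on a common time grid; the sketch below is that brute-force check, which the paper's "free-step" exists precisely to avoid doing at every sample. The trajectories and gap value are made up.

```python
import numpy as np

def collision_free(traj_a, traj_b, min_gap):
    """Check two 1-D stacker-crane trajectories (rail positions sampled on a
    common time grid) for a required safety gap. Brute-force sketch; the
    paper's 'free-step' method avoids checking every sample."""
    return bool(np.all(np.abs(traj_a - traj_b) >= min_gap))

t = np.linspace(0.0, 10.0, 201)
crane_a = 2.0 + 1.5 * t            # moving right along the shared rail (m)
crane_b = 35.0 - 2.0 * t           # moving left toward crane A
print(collision_free(crane_a, crane_b, min_gap=3.0))   # False: the cranes converge
```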
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-29
... percentage of meals reimbursed at the free rate (currently 1.6 times the ISP) and the threshold value of the... household applications for Free or Reduced Price meals. Under the CE Option, families are not required to submit applications for free or reduced-price meals, and schools are required to provide free meals to...
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
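A minimal sketch of the two reported analysis techniques, temporal binning and fixing the lifetimes so that only the amplitudes remain free (which turns the fit into linear least squares). It uses simulated Poisson data with assumed-known lifetimes; this is not SPCImage's or SLIM Curve's actual fitting code.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 12.5, 256)                 # ns, 256 TCSPC time bins
tau1, tau2, a1, a2 = 0.4, 3.0, 0.7, 0.3         # "known" lifetimes, true fractions
decay = rng.poisson(700 * (a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2))
                    * (t[1] - t[0]))            # sparse, low-count decay

# Temporal binning: 256 -> 42 bins (factor 6; drops the 4 trailing samples)
tb = t[:252].reshape(42, 6).mean(axis=1)
db = decay[:252].reshape(42, 6).sum(axis=1)

# With tau1, tau2 fixed, only the amplitudes are free -> linear least squares
basis = np.column_stack([np.exp(-tb / tau1), np.exp(-tb / tau2)])
amps, *_ = np.linalg.lstsq(basis, db, rcond=None)
frac = amps / amps.sum()
print(f"estimated fractions: {frac[0]:.2f}, {frac[1]:.2f} (true 0.70, 0.30)")
```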
Increasing chilling reduces heat requirement for floral budbreak in peach
USDA-ARS?s Scientific Manuscript database
Response to chilling temperatures is a critical factor in the suitability of peach [Prunus persica (L.) Batsch] cultivars to moderate climates such as in the southeastern United States. Time of bloom depends on the innate chilling requirement of the cultivar as well as the timing and quantity of co...
Enzymatic hydrolysis and fermentation of corn for fuel alcohol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mullins, J.T.
1985-01-01
The integration of enzyme saccharification with fermentation reduces the total time required to produce acceptable levels of ethanol. The use of a more concentrated mash (84.8 L total mash/bu corn) results in a 26.6% increase in ethanol productivity and a 21.4% increase in beer ethanol concentration compared to standard corn mash (96.6 L total mash/bu corn). Thus, the energy requirement and cost of distillation can be reduced. The addition of waste cola syrup at 30 g invert sugar/L total mash gave a 19% increase in ethanol concentration in the final beer and required only a small increase in period ofmore » fermentation. Surplus laundry starch can replace 30-50% of the weight of corn normally used in fermentation without influencing ethanol production or the time required for fermentation. Both of these waste materials reduce the unit cost of ethanol and demonstrate the value of such substances in ethanol systems.« less
Comparison of time required for traditional versus virtual orthognathic surgery treatment planning.
Wrzosek, M K; Peacock, Z S; Laviv, A; Goldwaser, B R; Ortiz, R; Resnick, C M; Troulis, M J; Kaban, L B
2016-09-01
Virtual surgical planning (VSP) is a tool for predicting complex surgical movements in three dimensions, and it may reduce preoperative laboratory time. A prospective study to compare the time required for standard preoperative planning versus VSP was conducted at Massachusetts General Hospital from January 2014 through January 2015. Workflow data for bimaxillary cases planned by both standard techniques and VSP were recorded in real time. Time spent was divided into three parts: (1) obtaining impressions, face-bow mounting, and model preparation; (2) occlusal analysis and modification, model surgery, and splint fabrication; (3) online VSP session. Average times were compared between standard treatment planning (sum of parts 1 and 2) and VSP (sum of parts 1 and 3). Of 41 bimaxillary cases included, 20 were simple (symmetric) and 21 were complex (asymmetry and segmental osteotomies). Average times for parts 1, 2, and 3 were 4.43, 3.01, and 0.67 h, respectively. The average time required for standard treatment planning was 7.45 h and for VSP was 5.10 h, a 31% time reduction (P<0.001). By eliminating all or some components of part 1, time savings may increase to as much as 91%. This study indicates that in an academic setting, VSP reduces the time required for treatment planning of bimaxillary orthognathic surgery cases. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
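The reported percentages follow directly from the three average part times (quick check):

```python
part1, part2, part3 = 4.43, 3.01, 0.67       # hours, from the abstract
standard = part1 + part2                     # ~7.45 h reported
vsp = part1 + part3                          # ~5.10 h reported
print(f"VSP savings: {(standard - vsp) / standard:.0%}")                     # ~31%
print(f"eliminating part 1 entirely: {(standard - part3) / standard:.0%}")   # ~91%
```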
Guidance and control requirements for high-speed Rollout and Turnoff (ROTO)
NASA Technical Reports Server (NTRS)
Goldthorpe, Steve H.; Kernik, Alan C.; Mcbee, Larry S.; Preston, Orv W.
1995-01-01
This report defines the initial requirements for designing a research high-speed rollout and turnoff (ROTO) guidance and control system applicable to transport class aircraft whose purpose is to reduce the average runway occupancy time (ROT) for aircraft operations. The requirements will be used to develop a ROTO system for both automatic and manual piloted operation under normal and reduced visibility conditions. Requirements were determined for nose wheel/rudder steering, braking/reverse thrust, and the navigation system with the aid of a non-real time, three degree-of-freedom MD-11 simulation program incorporating airframe and gear dynamics. The requirements were developed for speeds up to 70 knots using 30 ft exit geometries under dry and wet surface conditions. The requirements were generated under the assumptions that the aircraft landing system meets the current Category III touchdown dispersion requirements and that aircraft interarrival spacing is 2 nautical miles. This effort determined that auto-asymmetric braking is needed to assist steering for aft center-of-gravity aircraft. This report shows various time-history plots of the aircraft performance for the ROTO operation. This effort also investigated the state-of-the-art in the measurement of the runway coefficient of friction for various runway conditions.
1992-09-01
to acquire or develop effective simulation tools to observe the behavior of a RISC implementation as it executes different types of programs. We choose... Performance: Computer performance is measured by the amount of time required to execute a program. Performance encompasses two types of time, elapsed time... and CPU time. Elapsed time is the time required to execute a program from start to finish. It includes latency of input/output activities such as
Design and preliminary assessment of Vanderbilt hand exoskeleton.
Gasser, Benjamin W; Bennett, Daniel A; Durrough, Christina M; Goldfarb, Michael
2017-07-01
This paper presents the design of a hand exoskeleton intended to enable or facilitate bimanual activities of daily living (ADLs) for individuals with chronic upper extremity hemiparesis resulting from stroke. The paper describes the design of the battery-powered, self-contained exoskeleton and presents the results of initial testing with a single subject with hemiparesis from stroke. Specifically, an experiment was conducted requiring the subject to repeatedly remove the lid from a water bottle both with and without the hand exoskeleton. The time required to remove the lid was considerably lower when using the exoskeleton. Specifically, the average amount of time required to grasp the bottle with the paretic hand without the exoskeleton was 25.9 s, with a standard deviation of 33.5 s, while the corresponding average amount of time required to grasp the bottle with the exoskeleton was 5.1 s, with a standard deviation of 1.9 s. Thus, the task time involving the paretic hand was reduced by a factor of five, while the standard deviation was reduced by a factor of 16.
Time-resolved x-ray scattering instrumentation
Borso, C.S.
1985-11-21
An apparatus and method for increased speed and efficiency of data compilation and analysis in real time are presented in this disclosure. Data are sensed and grouped in combinations in accordance with predetermined logic. The combinations are grouped so that a simplified, reduced signal results; for example, pairwise summing of data values having offsetting algebraic signs reduces the magnitude of each net pair sum. Bit storage requirements are reduced and the speed of data compilation and analysis is increased by manipulation of shorter bit-length data values, making real-time evaluation possible.
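A toy version of the pairwise-summing idea: pairing values of opposite algebraic sign so that each pair's net sum, and hence the bit width needed to store it, shrinks. This sketches the principle on synthetic data and is not the disclosed hardware.

```python
import numpy as np

def pairwise_reduce(values):
    """Pair values of opposite algebraic sign (largest magnitudes together)
    and replace each pair by its sum. Net magnitudes shrink, so fewer bits
    are needed to store them; the overall sum is preserved."""
    pos = sorted((v for v in values if v >= 0), reverse=True)
    neg = sorted((v for v in values if v < 0), key=abs, reverse=True)
    n = min(len(pos), len(neg))
    reduced = [p + q for p, q in zip(pos, neg)]
    leftover = pos[n:] + neg[n:]                # unpaired small-magnitude values
    return reduced + leftover

rng = np.random.default_rng(3)
x = rng.integers(-1000, 1001, size=64).tolist()
y = pairwise_reduce(x)
print(sum(x) == sum(y), max(map(abs, x)), max(map(abs, y)))
```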
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-12
... request for comments. SUMMARY: As part of its continuing effort to reduce paperwork burden and as required... forms of information technology; and ways to further reduce the information burden for small business... responses. Estimated Time Per Response: 30 sec (.0084 hours). Frequency of Response: One time reporting...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-29
... most to their administrative workload and to offer recommendations for reducing that workload. Members... offer recommendations to reduce unnecessary and redundant administrative requirements. Background Over... an awardee's available research time, a figure widely cited in numerous articles and reports. To help...
An analysis of thermal response factors and how to reduce their computational time requirement
NASA Technical Reports Server (NTRS)
Wiese, M. R.
1982-01-01
The RESFAC2 version of the Thermal Response Factor Program (RESFAC) is the result of numerous modifications and additions to the original RESFAC. These modifications and additions have significantly reduced the program's computational time requirement. As a result of this work, the program is more efficient and its code is both readable and understandable. This report describes what a thermal response factor is; analyzes the original matrix algebra calculations and root finding techniques; presents a new root finding technique and streamlined matrix algebra; and supplies ten validation cases and their results.
Field experience with remote monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desrosiers, A.E.
1995-03-01
The Remote Monitoring System (RMS) is a combination of Merlin Gerin detection hardware, digital data communications hardware, and computer software from Bartlett Services, Inc. (BSI) that can improve the conduct of reactor plant operations in several areas. Using the RMS can reduce radiation exposures to radiation protection technicians (RPTs), reduce radiation exposures to plant maintenance and operations personnel, and reduce the time required to complete maintenance and inspections during outages. The number of temporary RPTs required during refueling outages can also be reduced. Data from use of the RMS at two power plants are presented to illustrate these points.
Reverse time migration by Krylov subspace reduced order modeling
NASA Astrophysics Data System (ADS)
Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali
2018-04-01
Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.
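The core ingredient is an orthogonal Krylov basis onto which the wave operator is projected. Below is a textbook Arnoldi iteration building such a basis for a stand-in diagonal operator; the authors' mode-shape computation for an actual velocity model would differ in detail.

```python
import numpy as np

def arnoldi(A, b, m):
    """Build an orthonormal Krylov basis Q for span{b, Ab, ..., A^(m-1) b}
    via modified Gram-Schmidt. Projecting the wave operator onto Q is the
    core step of Krylov reduced-order modeling."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :m], H[:m, :m]

n = 400
A = np.diag(np.linspace(1.0, 4.0, n))      # stand-in discretized operator
b = np.ones(n)
Q, H = arnoldi(A, b, m=20)
A_red = Q.T @ A @ Q                        # 20x20 reduced-order operator
print(A_red.shape, np.allclose(Q.T @ Q, np.eye(20)))
```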
NASA Astrophysics Data System (ADS)
Salatino, Maria
2017-06-01
In current submm and mm cosmology experiments, the focal planes are populated by kilopixel transition edge sensors (TESes). Varying incoming power load requires frequent rebiasing of the TESes through standard current-voltage (IV) acquisition. The time required to perform IVs on such large arrays and the resulting transient heating of the bath reduce the sky observation time. We explore a bias step method that significantly reduces the time required for the rebiasing process. This exploits the detectors' responses to the injection of a small square wave signal on top of the dc bias current and knowledge of the shape of the detector transition R(T,I). This method has been tested on two detector arrays of the Atacama Cosmology Telescope (ACT). In this paper, we focus on the first step of the method, the estimate of the TES %Rn (operating resistance as a fraction of normal resistance).
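For an idealized shunt-biased TES, the %Rn estimate reduces to a current-divider relation: a small bias step splits between the shunt and the TES according to their resistances. The sketch below uses hypothetical circuit values and neglects loop gain and parasitic resistance; it is the simplest form of the relation, not ACT's calibration code.

```python
def tes_resistance_fraction(dI_bias, dI_tes, R_sh, R_n):
    """Infer TES resistance from its response to a small bias step, assuming
    an ideal shunt-biased circuit (shunt R_sh in parallel with the TES,
    parasitics and loop-gain effects neglected). The current divider gives
    dI_tes = dI_bias * R_sh / (R_sh + R_tes), hence
    R_tes = R_sh * (dI_bias / dI_tes - 1)."""
    R_tes = R_sh * (dI_bias / dI_tes - 1.0)
    return 100.0 * R_tes / R_n          # %Rn

# Hypothetical numbers: 10 uA step, 2.4 uA TES response, 0.7 mOhm shunt, 7 mOhm Rn
print(f"{tes_resistance_fraction(10e-6, 2.4e-6, 0.7e-3, 7e-3):.0f} %Rn")
```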
Separation of metadata and pixel data to speed DICOM tag morphing.
Ismail, Mahmoud; Philbin, James
2013-01-01
The DICOM information model combines pixel data and metadata in a single DICOM object. It is not possible to access the metadata separately from the pixel data. There are use cases where only metadata is accessed. The current DICOM object format increases the running time of those use cases. Tag morphing is one of those use cases. Tag morphing includes deletion, insertion, or manipulation of one or more of the metadata attributes. It is typically used for order reconciliation on study acquisition or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.
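Tag morphing itself is a few metadata edits; with the pydicom library, for instance, it looks like the snippet below (the file name and attribute values are hypothetical). The point of the MSD format is that, with metadata stored apart from pixel data, such edits no longer require reading and rewriting the pixel data as well.

```python
import pydicom

# Conventional tag morphing on a standard single-object DICOM file.
ds = pydicom.dcmread("study_0001.dcm")     # reads the whole object, pixels included
ds.PatientID = "LOCAL-12345"               # localize the patient ID
ds.IssuerOfPatientID = "HOSPITAL-B"        # localize the IPID
del ds.InstitutionName                     # delete a metadata attribute
ds.save_as("study_0001_morphed.dcm")       # rewrites the whole object
```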
Cross-correlation least-squares reverse time migration in the pseudo-time domain
NASA Astrophysics Data System (ADS)
Li, Qingyang; Huang, Jianping; Li, Zhenchun
2017-08-01
The least-squares reverse time migration (LSRTM) method, with its higher image resolution and amplitude fidelity, is becoming increasingly popular. However, LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, its large computational cost, and the mismatch of amplitudes between the synthetic and observed data. To overcome the shortcomings of conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in the pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results. It relaxes the accuracy requirements that least-squares migration (LSM) places on the migration velocity model. The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction, so it can reduce the number of vertical grid points and the memory used during computation, which makes our method more computationally efficient than the standard implementation. Moreover, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement for strong amplitude matching in LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is only sensitive to the similarity between the predicted and the observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability for complex models.
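The normalized cross-correlation objective referenced here is commonly written in the following generic form (sign and summation conventions vary); it rewards waveform similarity and ignores amplitude scaling:

```latex
J(\mathbf{m}) \;=\; -\sum_{s,r}
  \frac{\left\langle d^{\,s,r}_{\mathrm{pred}}(\mathbf{m}),\; d^{\,s,r}_{\mathrm{obs}}\right\rangle}
       {\bigl\lVert d^{\,s,r}_{\mathrm{pred}}(\mathbf{m})\bigr\rVert\,
        \bigl\lVert d^{\,s,r}_{\mathrm{obs}}\bigr\rVert},
```

where the sum runs over sources s and receivers r; minimizing J maximizes the average normalized correlation between predicted and observed traces, so amplitude mismatches are not penalized.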
NASA Technical Reports Server (NTRS)
Kreider, Kevin L.; Baumeister, Kenneth J.
1996-01-01
An explicit finite difference real time iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for future large 3D problems, the time dependent potential form of the acoustic wave equation is used. To ensure that the finite difference scheme is both explicit and stable for a harmonic monochromatic sound field, a parabolic (in time) approximation is introduced to reduce the order of the governing equation. The analysis begins with a harmonic sound source radiating into a quiescent duct. This fully explicit iteration method then calculates stepwise in time to obtain the 'steady state' harmonic solutions of the acoustic field. For stability, application of conventional impedance boundary conditions requires coupling to explicit hyperbolic difference equations at the boundary. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
NASA Technical Reports Server (NTRS)
West, M. E.
1992-01-01
A real-time estimation filter which reduces sensitivity to system variations and reduces the amount of preflight computation is developed for the instrument pointing subsystem (IPS). The IPS is a three-axis stabilized platform developed to point various astronomical observation instruments aboard the shuttle. Currently, the IPS utilizes a linearized Kalman filter (LKF), with premission-defined gains, to compensate for system drifts and accumulated attitude errors. Since the a priori gains are generated for an expected system, variations result in a suboptimal estimation process. This report compares the performance of three real-time estimation filters with the current LKF implementation. An extended Kalman filter and a second-order Kalman filter are developed to account for the system nonlinearities, while a linear Kalman filter implementation assumes that the nonlinearities are negligible. The performance of each of the four estimation filters is compared with respect to accuracy, stability, settling time, robustness, and computational requirements. It is shown that, for the current IPS pointing requirements, the linear Kalman filter provides improved robustness over the LKF with lower computational requirements than the two real-time nonlinear estimation filters.
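For reference, one predict/update cycle of a linear Kalman filter, the simplest of the four estimators compared, with the gain computed online rather than premission-defined; a generic textbook sketch, not the IPS flight implementation.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: state estimate and covariance; z: measurement;
    F, H: state-transition and measurement matrices;
    Q, R: process and measurement noise covariances."""
    x = F @ x                                   # predict state
    P = F @ P @ F.T + Q                         # predict covariance
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain (computed online)
    x = x + K @ (z - H @ x)                     # update with measurement
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```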
IoGET: Internet of Geophysical and Environmental Things
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar
The objective of this project is to provide novel and fast reduced-order models for onboard computation at sensor nodes for real-time analysis. The approach will require that LANL perform high-fidelity numerical simulations, construct simple reduced-order models (ROMs) using machine learning and signal processing algorithms, and use real-time data analysis for ROMs and compressive sensing at sensor nodes.
Ng, Simon S M; Leung, Wing Wa; Mak, Tony W C; Hon, Sophie S F; Li, Jimmy C M; Wong, Cherry Y N; Tsoi, Kelvin K F; Lee, Janet F Y
2013-02-01
We investigated the efficacy of electroacupuncture in reducing the duration of postoperative ileus and hospital stay after laparoscopic surgery for colorectal cancer. We performed a prospective study of 165 patients undergoing elective laparoscopic surgery for colonic and upper rectal cancer, enrolled from October 2008 to October 2010. Patients were assigned randomly to groups that received electroacupuncture (n = 55) or sham acupuncture (n = 55), once daily from postoperative days 1-4, or no acupuncture (n = 55). The acupoints Zusanli, Sanyinjiao, Hegu, and Zhigou were used. The primary outcome was time to defecation. Secondary outcomes included postoperative analgesic requirement, time to ambulation, and length of hospital stay. Patients who received electroacupuncture had a shorter time to defecation than patients who received no acupuncture (85.9 ± 36.1 vs 122.1 ± 53.5 h; P < .001) and length of hospital stay (6.5 ± 2.2 vs 8.5 ± 4.8 days; P = .007). Patients who received electroacupuncture also had a shorter time to defecation than patients who received sham acupuncture (85.9 ± 36.1 vs 107.5 ± 46.2 h; P = .007). Electroacupuncture was more effective than no or sham acupuncture in reducing postoperative analgesic requirement and time to ambulation. In multiple linear regression analysis, an absence of complications and electroacupuncture were associated with a shorter duration of postoperative ileus and hospital stay after the surgery. In a clinical trial, electroacupuncture reduced the duration of postoperative ileus, time to ambulation, and postoperative analgesic requirement, compared with no or sham acupuncture, after laparoscopic surgery for colorectal cancer. ClinicalTrials.gov number, NCT00464425. Copyright © 2013 AGA Institute. Published by Elsevier Inc. All rights reserved.
Computational methods for aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Peeters, M. F.
1983-01-01
Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.
Control system estimation and design for aerospace vehicles with time delay
NASA Technical Reports Server (NTRS)
Allgaier, G. R.; Williams, T. L.
1972-01-01
The problems of estimation and control of discrete, linear, time-varying systems are considered. Previous solutions to these problems involved either approximate techniques, open-loop control solutions, or results which required excessive computation. The estimation problem is solved by two different methods, both of which yield the identical algorithm for determining the optimal filter. The partitioned results achieve a substantial reduction in computation time and storage requirements over the expanded solution, however. The results reduce to the Kalman filter when no delays are present in the system. The control problem is also solved by two different methods, both of which yield identical algorithms for determining the optimal control gains. The stochastic control is shown to be identical to the deterministic control, thus extending the separation principle to time delay systems. The results obtained reduce to the familiar optimal control solution when no time delays are present in the system.
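A standard way to see where the computational burden of delays comes from is state augmentation, which stacks delayed states so that delay-free theory applies, shown below for a dynamics delay of d steps (a generic textbook construction; the partitioned algorithms in this work exist precisely to avoid the cost of this enlarged state):

```latex
\tilde{x}_k =
\begin{bmatrix} x_k \\ x_{k-1} \\ \vdots \\ x_{k-d} \end{bmatrix},
\qquad
\tilde{x}_{k+1} =
\begin{bmatrix}
A_0    & A_1    & \cdots & A_d \\
I      & 0      & \cdots & 0   \\
\vdots & \ddots & \ddots & \vdots \\
0      & \cdots & I      & 0
\end{bmatrix}
\tilde{x}_k \;+\; \tilde{B}\,u_k .
```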
NASA Technical Reports Server (NTRS)
Hill, Joanne E.; Black, J. Kevin; Emmett, Thomas J.; Enoto, Teruaki; Jahoda, Keith M.; Kaaret, Philip; Nolan, David S.; Tamagawa, Toru
2014-01-01
The design of the Time-Projection Chamber (TPC) Polarimeter for the Gravity and Extreme Magnetism Small Explorer (GEMS) was demonstrated to Technology Readiness Level 6 (TRL-6)3 and the flight detectors fabricated, assembled and performance tested. A single flight detector was characterized at the Brookhaven National Laboratory Synchrotron Light Source with polarized X-rays at 10 energies from 2.3-8.0 keV at five detector positions. The detector met all of the GEMS performance requirements. Lifetime measurements have shown that the existing flight design has 23 years of lifetime4, opening up the possibility of relaxing material requirements, in particular the consideration of the use of epoxy, to reduce risk elsewhere. We report on design improvements to the GEMS detector to enable a narrower transfer gap that, when operated with a lower transfer field, reduces asymmetries in the detector response. In addition, the new design reduces cost and risk by simplifying the assembly and reducing production time. Finally, we report on the performance of the narrow-gap detector in response to polarized and unpolarized X-rays.
NASA Technical Reports Server (NTRS)
Iida, H. T.
1966-01-01
Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting
Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart
2015-02-14
Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of the system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. As a result, the goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
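As a simplified stand-in for the paper's Gappy-POD forecast (same goal: start the Newton solver near the solution), one can extrapolate the last few converged states; below, a quadratic extrapolation over a uniform time grid.

```python
import numpy as np

def forecast_guess(history):
    """Quadratic extrapolation of the last three converged states, used as
    the initial guess for the next Newton solve. A simplified stand-in for
    the paper's Gappy-POD forecast, not their algorithm."""
    x0, x1, x2 = history[-3], history[-2], history[-1]
    return 3.0 * x2 - 3.0 * x1 + x0     # second-order forward-difference extrapolation

hist = [np.array([1.0]), np.array([1.1]), np.array([1.21])]
print(forecast_guess(hist))             # [1.33], continuing the trend
```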
Effects of Using Pozzolan and Portland Cement in the Treatment of Dispersive Clay
Vakili, A. H.; Selamat, M. R.; Moayedi, H.
2013-01-01
Use of dispersive clay as a construction material requires treatment such as by chemical addition. Treatments of dispersive clay using pozzolan and Portland cement, singly and simultaneously, were carried out in this study. When used alone, the optimum amount of pozzolan required to treat a fully dispersive clay sample was 5%, but the curing time needed to reduce dispersion potential from 100% to 30% or less was 3 months. On the other hand, also when used alone, a 3% cement content was capable of reducing dispersion potential to almost zero percent in only 7 days, and a 2% cement content was capable of achieving a similar result in 14 days. However, treatment by cement alone is costly and could jeopardize long-term performance. Thus, a combined 5% pozzolan and 1.5% cement content was found capable of reducing dispersion potential from 100% to zero percent in 14 days. The results indicate that although simultaneous treatment with pozzolan and cement extends the required curing time in comparison to treatment by cement alone at a higher content, the task can still be carried out in a reasonable period of curing time while avoiding the drawbacks of using either pozzolan or cement alone. PMID:23864828
2008-08-18
fidelity will be used to reduce the massive experimental testing and associated time required for qualification of new materials. Tools and... developing a model of the thermo-oxidative process for polymer systems that incorporates the effects of reaction rates, Fickian diffusion, time-varying... degradation processes.
NASA Astrophysics Data System (ADS)
Pravdivtsev, Andrey V.
2012-06-01
The article presents an approach to the design of wide-angle optical systems with special illumination and instantaneous field of view (IFOV) requirements. Unevenness of illumination reduces the dynamic range of the system, which negatively influences the system's ability to perform its task. The resulting illumination on the detector depends, among other factors, on IFOV changes. It is also necessary to consider the IFOV in the synthesis of data processing algorithms, as it directly affects the potential "signal/background" ratio for the case of statistically homogeneous backgrounds. A numerical-analytical approach that simplifies the design of wide-angle optical systems with special illumination and IFOV requirements is presented. The solution can be used for optical systems whose field of view is greater than 180 degrees. Illumination calculation in optical CAD is based on computationally expensive tracing of a large number of rays. The author proposes to use analytical expressions for some of the characteristics on which illumination depends. The remaining characteristics are determined numerically using less computationally expensive operands, and the calculation is not performed at every optimization step. The results of the analytical calculation are inserted into the merit function of the optical CAD optimizer. As a result, the optimizer load is reduced, since less computationally expensive operands are used. This reduces the time and resources required to develop a system with the desired characteristics. The proposed approach simplifies the creation and understanding of the requirements for the quality of the optical system, reduces the time and resources required to develop an optical system, and allows the creation of more efficient EOS.
JMFA2—a graphically interactive Java program that fits microfibril angle X-ray diffraction data
Steve P. Verrill; David E. Kretschmann; Victoria L. Herian
2006-01-01
X-ray diffraction techniques have the potential to dramatically decrease the time required to determine microfibril angles. In this paper, we discuss the latest version of a curve-fitting tool that permits us to reduce the time required to evaluate MFA X-ray diffraction patterns. Further, because this tool reflects the underlying physics more accurately than existing...
Static Memory Deduplication for Performance Optimization in Cloud Computing.
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-04-27
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. A lack of sufficient memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce the memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces the memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.
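The offline page-detection step can be pictured as content hashing over fixed-size pages. The following is only a schematic of that idea, assuming each VM's code segment is available as a byte string; the function and variable names are invented for illustration.

```python
import hashlib
from collections import defaultdict

PAGE_SIZE = 4096

def find_shared_code_pages(code_segments):
    # code_segments maps a VM id to the raw bytes of its code segment.
    # Pages with identical content land in the same hash bucket; every
    # bucket member beyond the first could be backed by one shared frame.
    buckets = defaultdict(list)
    for vm, image in code_segments.items():
        for off in range(0, len(image), PAGE_SIZE):
            digest = hashlib.sha256(image[off:off + PAGE_SIZE]).hexdigest()
            buckets[digest].append((vm, off))
    shared = {d: locs for d, locs in buckets.items() if len(locs) > 1}
    pages_saved = sum(len(locs) - 1 for locs in shared.values())
    return shared, pages_saved
```

Restricting the scan to code segments keeps the bucket table small, and because the scan runs offline, none of this hashing sits on the VMs' response-time path.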
Rational reduction of periodic propagators for off-period observations.
Blanton, Wyndham B; Logan, John W; Pines, Alexander
2004-02-01
Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
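The saving can be seen in a small sketch: once the propagators over the q sub-slices of one period are cached as cumulative products, an observation at any rational fraction p/q of the period costs at most one extra multiplication instead of a full chain. This is a generic illustration of propagator reuse, not the paper's specific pattern-matching algorithm; the random unitaries merely stand in for the sliced time-evolution operators.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a random complex matrix yields a unitary
    # (a stand-in for a sliced time-evolution propagator).
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    u, _ = np.linalg.qr(z)
    return u

q = 8                                    # observation step = T/q
subprops = [random_unitary(4) for _ in range(q)]

# Cache cumulative propagators U(0 -> p*T/q) so each product is
# formed once instead of rebuilding the chain for every observation.
cumulative = {0: np.eye(4, dtype=complex)}
for p in range(1, q + 1):
    cumulative[p] = subprops[p - 1] @ cumulative[p - 1]

def propagator(n_periods, p):
    # U over n full periods plus the fraction p/q: two cached factors.
    return cumulative[p] @ np.linalg.matrix_power(cumulative[q], n_periods)
```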
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Etingov, Pavel V.; Ren, Huiying
This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore reduce the time required to compute them. HPC techniques are used to further reduce computational time. These two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real-world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.
47 CFR 80.213 - Modulation requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... transmission period of 60 seconds followed by a minimum quiescent period four times the duration of the... designed to reduce interference caused by triggering from radar antenna sidelobes. (i) Variable frequency... using frequency agile techniques must include circuitry designed to reduce interference caused by...
Implementing Small-Group Instruction: Insights from Successful Practitioners.
ERIC Educational Resources Information Center
Cooper, James L.; MacGregor, Jean; Smith, Karl A.; Robinson, Pamela
2000-01-01
College faculty who have successfully implemented small-group instruction address common concerns such as: reduced content coverage, reduced amount of learning, need for prerequisite learning, importance of solitary learning, colleagues' concerns, student resistance, logistics, evaluation, use of teaching assistants, and time requirements. (DB)
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...
2018-04-30
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
Accelerating the kiln drying of oak
William T. Simpson
1980-01-01
Reducing kiln-drying time for oak lumber can reduce energy requirements as well as reduce lumber inventories. In this work, 1-inch northern red oak and white oak were kiln dried from green by a combination of individual accelerating techniques: presurfacing, presteaming, an accelerated and smooth schedule, and high-temperature drying below 18 percent moisture content....
Yiannakou, Marinos; Trimikliniotis, Michael; Yiallouras, Christos; Damianou, Christakis
2016-02-01
Due to heating in the pre-focal field, the delays between successive movements in high intensity focused ultrasound (HIFU) are sometimes as long as 60 s, resulting in treatment times on the order of 2-3 h. Because there is generally a requirement to reduce treatment time, we were motivated to explore alternative transducer motion algorithms in order to reduce pre-focal heating and treatment time. A 1 MHz single element transducer with 4 cm diameter and 10 cm focal length was used. A simulation model was developed that estimates the temperature, thermal dose and lesion development in the pre-focal field. The simulated temperature history, combined with the motion algorithms, produced thermal maps in the pre-focal region. A polyacrylamide gel phantom was used to evaluate the induced pre-focal heating for each motion algorithm and to assess the accuracy of the simulation model. Three of the six algorithms, having successive steps close to each other, exhibited severe heating in the pre-focal field. Minimal heating was produced with the algorithms having successive steps far apart from each other (square, square spiral and random). These three algorithms were improved further (at a small cost in time), thus eliminating the pre-focal heating completely and substantially reducing the treatment time compared to traditional algorithms. Because these three algorithms required no delay between successive movements (except in the last part of the motion), the treatment time was reduced by 93%. Therefore, it will be possible in the future to achieve treatment times for focused ultrasound therapies shorter than 30 min. The rate of ablated volume achieved with one of the proposed algorithms was 71 cm³/h. The intention of this pilot study was to demonstrate that the navigation algorithms play the most important role in reducing pre-focal heating. By evaluating all commercially available geometries in the future, it will be possible to reduce the treatment time for thermal ablation protocols intended for oncological targets.
Sustaining Change: The Answers Are Blowing in the Wind.
ERIC Educational Resources Information Center
Moffett, Cerylle A.
2000-01-01
Sustaining reform requires district leaders to develop a supportive infrastructure, nurture professional communities, reduce turnover, and use facilitators to build capacity. Bringing educators up to speed means providing abundant staff development, balancing pressure with support, providing adult learning time, and reducing fragmentation and…
A high-efficiency HPGe coincidence system for environmental analysis.
Britton, R; Davies, A V; Burnett, J L; Jackson, M J
2015-08-01
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) is supported by a network of certified laboratories which must meet certain sensitivity requirements for CTBT-relevant radionuclides. At the UK CTBT Radionuclide Laboratory (GBL15), a high-efficiency, dual-detector gamma spectroscopy system has been developed to improve the sensitivity of measurements for treaty compliance, greatly reducing the time required for each sample. Utilising list-mode acquisition, each sample can be counted once and processed multiple times to further improve sensitivity. For the 8 key radionuclides considered, Minimum Detectable Activities (MDAs) were improved by up to 37% in standard mode (when compared to a typical CTBT detector system), with the acquisition time required to achieve the CTBT sensitivity requirements reduced from 6 days to only 3. When utilising the system in coincidence mode, the MDA for 60Co in a high-activity source was improved by a factor of 34 when compared to a standard CTBT detector, and a factor of 17 when compared to the dual-detector system operating in standard mode. These MDA improvements will allow the accurate and timely quantification of radionuclides that decay via both singular and cascade γ emission, greatly enhancing the effectiveness of CTBT laboratories.
Regan, R. Steve; Niswonger, Richard G.; Markstrom, Steven L.; Barlow, Paul M.
2015-10-02
The spin-up simulation should be run for a sufficient length of time necessary to establish antecedent conditions throughout a model domain. Each GSFLOW application can require different lengths of time to account for the hydrologic stresses to propagate through a coupled groundwater and surface-water system. Typically, groundwater hydrologic processes require many years to come into equilibrium with dynamic climate and other forcing (or stress) data, such as precipitation and well pumping, whereas runoff-dominated surface-water processes respond relatively quickly. Use of a spin-up simulation can substantially reduce execution-time requirements for applications where the time period of interest is small compared to the time for hydrologic memory; thus, use of the restart option can be an efficient strategy for forecast and calibration simulations that require multiple simulations starting from the same day.
Effect of steady and time-harmonic magnetic fields on macrosegragation in alloy solidification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Incropera, F.P.; Prescott, P.J.
Buoyancy-induced convection during the solidification of alloys can contribute significantly to the redistribution of alloy constituents, thereby creating large composition gradients in the final ingot. Termed macrosegregation, the condition diminishes the quality of the casting and, in the extreme, may require that the casting be remelted. The deleterious effects of buoyancy-driven flows may be suppressed through application of an external magnetic field, and in this study the effects of both steady and time-harmonic fields have been considered. For a steady magnetic field, extremely large field strengths would be required to effectively dampen convection patterns that contribute to macrosegregation. However, by reducing spatial variations in temperature and composition, turbulent mixing induced by a time-harmonic field reduces the number and severity of segregates in the final casting.
Civil helicopter propulsion system reliability and engine monitoring technology assessments
NASA Technical Reports Server (NTRS)
Murphy, J. A.; Zuk, J.
1982-01-01
A study to reduce operating costs of helicopters, particularly directed at the maintenance of the propulsion subsystem, is presented. The tasks of the study consisted of problem definition refinement, technology solutions, diagnostic system concepts, and emergency power augmentation. Quantifiable benefits (reduced fuel consumption, on-condition engine maintenance, extended drive system overhaul periods, and longer oil change intervals) would increase the initial cost by $43,000, but the benefit of $24.46 per hour would result in breakeven at 1758 hours. Other benefits not capable of being quantified but perhaps more important include improved aircraft availability due to reduced maintenance time, potential for increased operating limits due to continuous automatic monitoring of gages, and less time and fuel required to make engine power checks. The most important improvement is the on-condition maintenance program, which will require the development of algorithms, equipment, and procedures compatible with all operating environments.
Behavioral Health and Performance (BHP) Work-Rest Cycles
NASA Technical Reports Server (NTRS)
Leveton, Lauren B.; Whitmire, Alexandra
2011-01-01
BHP Program Element Goal: Identify, characterize, and prevent or reduce behavioral health and performance risks associated with space travel, exploration and return to terrestrial life. BHP Requirements: a) Characterize and assess risks (e.g., likelihood and consequences). b) Develop tools and technologies to prevent, monitor, and treat adverse outcomes. c) Inform standards. d) Develop technologies to: 1) reduce risks and human systems resource requirements (e.g., crew time, mass, volume, power) and 2) ensure effective human-system integration across exploration mission.
2012-11-02
Scanning Technology (3D LST) and Collaborative Product Lifecycle Management (CPLM) are two technologies that are currently being leveraged by international ship construction organizations to achieve significant cost savings. 3D LST dramatically reduces the time required to scan ship surfaces as... technology does not meet the accuracy requirements (0.030" accuracy minimum) for naval shipbuilding. The report delivered to the CSNT shows that if the
Software beamforming: comparison between a phased array and synthetic transmit aperture.
Li, Yen-Feng; Li, Pai-Chi
2011-04-01
The data-transfer and computation requirements are compared between software-based beamforming using a phased array (PA) and a synthetic transmit aperture (STA). The advantages of a software-based architecture are reduced system complexity and lower hardware cost. Although this architecture can be implemented using commercial CPUs or GPUs, the high computation and data-transfer requirements limit its real-time beamforming performance. In particular, transferring the raw rf data from the front-end subsystem to the software back end remains challenging with current state-of-the-art electronics technologies, which offsets the cost advantage of the software back end. This study investigated the tradeoff between the data-transfer and computation requirements. Two beamforming methods, based on a PA and an STA, respectively, were used: the former requires a higher data-transfer rate and the latter requires more memory operations. The beamformers were implemented in an NVIDIA GeForce GTX 260 GPU and an Intel Core i7 920 CPU. The frame rate of PA beamforming was 42 fps with a 128-element array transducer, with 2048 samples per firing and 189 beams per image (with a 95 MB/frame data-transfer requirement). The frame rate of STA beamforming was 40 fps with 16 firings per image (with an 8 MB/frame data-transfer requirement). Both approaches achieved real-time beamforming performance but each had its own bottleneck. On the one hand, the required data-transfer speed was considerably reduced in STA beamforming; on the other hand, this required more memory operations, which limited the overall computation time. The advantages of the GPU approach over the CPU approach were clearly demonstrated.
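The quoted per-frame figures follow directly from the stated acquisition parameters if one assumes 16-bit raw rf samples (the sample depth is not given in the abstract, so the 2 bytes/sample below is an assumption):

```python
bytes_per_sample = 2              # assumed 16-bit raw rf samples
channels = 128
samples_per_firing = 2048

pa_firings = 189                  # one firing per beam for the phased array
sta_firings = 16                  # synthetic transmit aperture firings

pa_frame = channels * samples_per_firing * pa_firings * bytes_per_sample
sta_frame = channels * samples_per_firing * sta_firings * bytes_per_sample

print(pa_frame / 2**20)           # ~94.5 MiB, matching the quoted ~95 MB/frame
print(sta_frame / 2**20)          # 8.0 MiB, matching the quoted 8 MB/frame
```

At 42 fps the PA stream is then roughly 4 GB/s, which makes the front-end link, rather than the arithmetic, the bottleneck the abstract describes.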
Course Development Cycle Time: A Framework for Continuous Process Improvement.
ERIC Educational Resources Information Center
Lake, Erinn
2003-01-01
Details Edinboro University's efforts to reduce the extended cycle time required to develop new courses and programs. Describes a collaborative process improvement framework, illustrated data findings, the team's recommendations for improvement, and the outcomes of those recommendations. (EV)
Extracellular space preservation aids the connectomic analysis of neural circuits.
Pallotto, Marta; Watkins, Paul V; Fubara, Boma; Singer, Joshua H; Briggman, Kevin L
2015-12-09
Dense connectomic mapping of neuronal circuits is limited by the time and effort required to analyze 3D electron microscopy (EM) datasets. Algorithms designed to automate image segmentation suffer from substantial error rates and require significant manual error correction. Any improvement in segmentation error rates would therefore directly reduce the time required to analyze 3D EM data. We explored preserving extracellular space (ECS) during chemical tissue fixation to improve the ability to segment neurites and to identify synaptic contacts. ECS preserved tissue is easier to segment using machine learning algorithms, leading to significantly reduced error rates. In addition, we observed that electrical synapses are readily identified in ECS preserved tissue. Finally, we determined that antibodies penetrate deep into ECS preserved tissue with only minimal permeabilization, thereby enabling correlated light microscopy (LM) and EM studies. We conclude that preservation of ECS benefits multiple aspects of the connectomic analysis of neural circuits.
Hristovska, Ana-Marija; Kristensen, Billy B; Rasmussen, Marianne A; Rasmussen, Yvonne H; Elving, Lisbeth B; Nielsen, Christian V; Kehlet, Henrik
2014-03-01
To assess the effect of systematic local infiltration analgesia on postoperative pain in vaginal hysterectomy, and to describe the technique in detail. A randomized, double-blind, placebo-controlled study following the CONSORT criteria. A university hospital. Thirty-seven patients undergoing vaginal hysterectomy. Patients received high-volume (50 mL) ropivacaine 0.50% (n = 20) or saline (n = 17) infiltration using a systematic technique ensuring uniform delivery to all tissues incised, handled or instrumented during the procedure. Pain, nausea, vomiting and opioid requirements were assessed for 32 h, as well as time spent in the post-anesthesia care unit and time to first mobilization. Pain at rest was significantly reduced after one, four and eight hours in the ropivacaine group (p ≤ 0.001-0.01). Pain during coughing was significantly reduced after one and four hours (p ≤ 0.001 and p ≤ 0.003), and pain during movement was significantly reduced after four hours (p ≤ 0.02). Opioid requirements and time spent in the post-anesthesia care unit were significantly reduced in the ropivacaine group (p < 0.001 and p < 0.001, respectively), as was the time to first mobilization (p < 0.001). Intra-operative systematic local infiltration analgesia reduces postoperative pain in patients undergoing vaginal hysterectomy, facilitates mobilization and improves early recovery.
Optimal boarding method for airline passengers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steffen, Jason H.; /Fermilab
2008-02-01
Using a Markov Chain Monte Carlo optimization algorithm and a computer simulation, I find the passenger ordering which minimizes the time required to board the passengers onto an airplane. The model that I employ assumes that the time that a passenger requires to load his or her luggage is the dominant contribution to the time needed to completely fill the aircraft. The optimal boarding strategy may reduce the time required to board an airplane by over a factor of four, and possibly more depending upon the dimensions of the aircraft. I explore some features of the optimal boarding method and discuss practical modifications to the optimal. Finally, I mention some of the benefits that could come from implementing an improved passenger boarding scheme.
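A Markov Chain Monte Carlo search over boarding orders can be sketched with a Metropolis acceptance rule and a crude surrogate cost; the aisle-conflict model below is invented for illustration and is far simpler than the paper's boarding simulation.

```python
import math
import random

ROWS, SEATS_PER_ROW = 12, 6

def boarding_cost(order):
    # Crude surrogate: a passenger stowing luggage blocks the aisle for
    # the next few passengers headed to the same row or beyond.
    conflicts = 0
    for i, (row_i, _) in enumerate(order):
        conflicts += sum(1 for row_j, _ in order[i + 1:i + 6] if row_j >= row_i)
    return conflicts

random.seed(1)
order = [(r, s) for r in range(ROWS) for s in range(SEATS_PER_ROW)]
random.shuffle(order)

cost, temp = boarding_cost(order), 5.0
for _ in range(20000):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]          # propose a swap
    new_cost = boarding_cost(order)
    if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
        cost = new_cost                              # accept (Metropolis rule)
    else:
        order[i], order[j] = order[j], order[i]      # revert the swap
    temp *= 0.9997                                   # anneal slowly
```

Orderings that fill the cabin back-to-front in widely separated rows score well under such a cost, in line with the paper's finding.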
Turner, Karly M.; Peak, James; Burne, Thomas H. J.
2016-01-01
Neuropsychiatric research has utilized cognitive testing in rodents to improve our understanding of cognitive deficits and for preclinical drug development. However, more sophisticated cognitive tasks have not been as widely exploited due to low throughput and the extensive training time required. We developed a modified signal detection task (SDT) based on the growing body of literature aimed at improving cognitive testing in rodents. This study directly compares performance on the modified SDT with a traditional test for measuring attention, the 5-choice serial reaction time task (5CSRTT). Adult male Sprague-Dawley rats were trained on either the 5CSRTT or the SDT. Briefly, the 5CSRTT required rodents to pay attention to a spatial array of five apertures and respond with a nose poke when an aperture was illuminated. The SDT required the rat to attend to a light panel and respond either left or right to indicate the presence of a signal. In addition, modifications were made to the reward delivery, timing, control of body positioning, and the self-initiation of trials. It was found that less training time was required for the SDT, with both the number of sessions to criterion and the daily session duration significantly reduced. Rats performed with a high level of accuracy (>87%) on both tasks; however, omissions were far more frequent on the 5CSRTT. The signal duration was reduced on both tasks as a manipulation of task difficulty relevant to attention, and a similar pattern of decreasing accuracy was observed on both tasks. These results demonstrate some of the advantages of the SDT over the traditional 5CSRTT: higher throughput with reduced training time, fewer omission responses, and control of body position at stimulus onset. In addition, rats performing the SDT had comparably high levels of accuracy. These results highlight the differences and similarities between the 5CSRTT and a modified SDT as tools for assessing attention in preclinical animal models. PMID:26834597
Anticipating and controlling mask costs within EDA physical design
NASA Astrophysics Data System (ADS)
Rieger, Michael L.; Mayhew, Jeffrey P.; Melvin, Lawrence S.; Lugg, Robert M.; Beale, Daniel F.
2003-08-01
For low-k1 lithography, more aggressive OPC is being applied to critical layers, and the number of mask layers with OPC treatments is growing rapidly. The 130 nm process node required, on average, 8 layers containing rules- or model-based OPC. The 90 nm node will have 16 OPC layers, of which 14 layers contain aggressive model-based OPC. This escalation of mask pattern complexity, coupled with the predominant use of vector-scan e-beam (VSB) mask writers, contributes to the rising costs of advanced mask sets. Writing times for OPC layouts are several times longer than for traditional layouts, making mask exposure the single largest cost component for OPC masks. Lower mask yield, another key factor in higher mask costs, is also aggravated by OPC. The initial cost of a 90 nm-node mask set will exceed one million dollars. The relative impact of mask cost on chip cost depends on how many total wafers are printed with each mask set. For many foundry chips, where unit production is often well below 1000 wafers, mask costs are larger than wafer processing costs. Further increases in NRE may begin to discourage these suppliers' adoption of 90 nm and smaller nodes. In this paper we outline several alternatives for reducing mask costs by strategically leveraging dimensional margins. Dimensional specifications for a particular masking layer usually are applied uniformly to all features on that layer. As a practical matter, accuracy requirements on different features in the design may vary widely. Take a polysilicon layer, for example: global tolerance specifications for that layer are driven by the transistor-gate requirements, but these parameters over-specify interconnect feature requirements. By identifying features where dimensional accuracy requirements can be reduced, additional margin can be leveraged to reduce OPC complexity. Mask writing time on VSB tools will drop in nearly direct proportion to the reduced shot count. By inspecting masks with reference to feature-dependent margins, instead of uniform specifications, mask yield can be effectively increased, further reducing delivered mask expense.
Bioactivity profiling using high-throughput in vitro assays can reduce the cost and time required for toxicological screening of environmental chemicals and can also reduce the need for animal testing. Several public efforts are aimed at discovering patterns or classifiers in hig...
Two Improved Algorithms for Envelope and Wavefront Reduction
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1997-01-01
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
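The weighted Sloan variants discussed here are not in standard libraries, but the Reverse Cuthill-McKee baseline the paper compares against is; the sketch below uses SciPy's implementation and a small invented pattern to show how a reordering shrinks the envelope.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Small symmetric sparsity pattern (numeric values are irrelevant here).
A = csr_matrix(np.array([
    [1, 0, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 0, 1, 1],
]))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]                     # symmetrically reordered matrix

def envelope(M):
    # Sum over rows of the distance from the first nonzero to the
    # diagonal -- the quantity these orderings try to shrink.
    D = M.toarray()
    return sum(i - np.flatnonzero(D[i])[0] for i in range(D.shape[0]))

print(envelope(A), "->", envelope(B))
```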
Assembly flow simulation of a radar
NASA Technical Reports Server (NTRS)
Rutherford, W. C.; Biggs, P. M.
1994-01-01
A discrete event simulation model has been developed to predict the assembly flow time of a new radar product. The simulation was the key tool employed to identify flow constraints. The radar, production facility, and equipment complement were designed, arranged, and selected to provide the most manufacturable assembly possible. A goal was to reduce the assembly and testing cycle time from twenty-six weeks. A computer software simulation package (SLAM 2) was utilized as the foundation for simulating the assembly flow time. FORTRAN subroutines were incorporated into the software to deal with unique flow circumstances that were not accommodated by the software. Detailed information relating to the assembly operations was provided by a team selected from the engineering, manufacturing management, inspection, and production assembly staff. The simulation verified that it would be possible to achieve the cycle time goal of six weeks. Equipment and manpower constraints were identified during the simulation process and adjusted as required to achieve the flow with a given monthly production requirement. The simulation is being maintained as a planning tool to be used to identify constraints in the event that monthly output is increased. 'What-if' studies have been conducted to identify the cost of reducing constraints caused by increases in output requirement.
Programing techniques for CDC equipment
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Tiffany, S. H.
1979-01-01
Five techniques reduce core requirements for fast batch turnaround time and interactive-terminal capability. Same techniques increase program versatility, decrease problem-configuration dependence, and facilitate interprogram communication.
Introduction to Neural Networks.
1992-03-01
Parallel processing of information can greatly reduce the time required to perform operations needed in pattern recognition.
Complex Instruction Set Quantum Computing
NASA Astrophysics Data System (ADS)
Sanders, G. D.; Kim, K. W.; Holton, W. C.
1998-03-01
In proposed quantum computers, electromagnetic pulses are used to implement logic gates on quantum bits (qubits). Gates are unitary transformations applied to coherent qubit wavefunctions, and a universal computer can be created using a minimal set of gates. By applying many elementary gates in sequence, desired quantum computations can be performed. This reduced instruction set approach to quantum computing (RISC QC) is characterized by serial application of a few basic pulse shapes and a long coherence time. However, the unitary matrix of the overall computation is ultimately a unitary matrix of the same size as any of the elementary matrices. This suggests that we might replace a sequence of reduced instructions with a single complex instruction using an optimally tailored pulse. We refer to this approach as complex instruction set quantum computing (CISC QC). One trades the requirement for long coherence times for the ability to design and generate potentially more complex pulses. We consider a model system of coupled qubits interacting through nearest-neighbor coupling and show that CISC QC can reduce the time required to perform quantum computations.
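The observation that a gate sequence collapses to a single same-size unitary is easy to verify numerically; the sketch below composes a short single-qubit "program" into the one matrix a CISC-style tailored pulse would aim to implement directly (the gate choice is illustrative).

```python
import numpy as np

# Elementary single-qubit gates (a RISC-style instruction set).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])

sequence = [H, T, H]                    # a short "program": H, then T, then H

# The whole sequence collapses to one 2x2 unitary -- the transformation
# a single complex instruction would implement in one shot.
U = np.eye(2, dtype=complex)
for gate in sequence:
    U = gate @ U                        # left-multiply: later gates act last

assert np.allclose(U.conj().T @ U, np.eye(2))   # the product is still unitary
```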
Rest Intervals Reduce the Number of Loading Bouts Required to Enhance Bone Formation
Srinivasan, Sundar; Ausk, Brandon J.; Bain, Steven D.; Gardiner, Edith M.; Kwon, Ronald Y.; Gross, Ted S.
2015-01-01
Purpose: As our society becomes increasingly sedentary, compliance with exercise regimens that require numerous high-energy activities each week becomes less likely. Alternatively, given an osteogenic exercise intervention that required minimal effort, it is reasonable to presume that participation would be enhanced. Insertion of brief rest intervals between each cycle of mechanical loading holds potential to achieve this result, as substantial osteoblast function is activated by many fewer loading repetitions within each loading bout. Here, we examined the complementary hypothesis that the number of bouts/wk of rest-inserted loading could be reduced from 3/wk without loss of osteogenic efficacy. Methods: We conducted a series of 3 wk in vivo experiments that non-invasively exposed the right tibiae of mice to either cyclic (1 Hz) or rest-inserted loading interventions and quantified osteoblast function via dynamic histomorphometry. Results: While reducing loading bouts from 3/wk (i.e., 9 total bouts) to 1/wk (3 total bouts) effectively mitigated the osteogenic benefit of cyclic loading, the same reduction did not significantly reduce periosteal bone formation parameters induced by rest-inserted loading. The osteogenic response was robust to the timing of the rest-inserted loading bouts (3 bouts in the first week vs 1 bout/wk for three weeks). However, elimination of any single bout of the three 1/wk bouts mitigated the osteogenic response to rest-inserted loading. Finally, periosteal osteoblast function assessed after the 3 wk intervention was not sensitive to the timing or number of rest-inserted loading bouts. Conclusions: We conclude that rest-inserted loading holds potential to retain the osteogenic benefits of mechanical loading with significantly reduced frequency of bouts of activity while also enabling greater flexibility in the timing of the activity. PMID:25207932
Usability of Optical Mark Reader Sheet as an Answering Tool in Testing.
Booka, Masayuki; Oku, Hidehisa; Scheller, Andreas; Yamaoka, Shintaro
2017-01-01
The research results on the usability of the Optical Mark Reader Sheet (OMRS) used as the standard answering tool are reported. Use of the OMRS requires significantly more answer time than answering without it, and the use of assistive devices for the OMRS may reduce the answer time.
76 FR 56201 - Prescription Drug User Fee Act; Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... PDUFA expires in September 2012. At that time, new legislation will be required for FDA to collect... and upgrade its information technology systems. At the same time, FDA committed to complete reviews in...\\ Since PDUFA was enacted, the median approval time of original NDAs and BLAs has been reduced by about 50...
Dynamical jumping real-time fault-tolerant routing protocol for wireless sensor networks.
Wu, Guowei; Lin, Chi; Xia, Feng; Yao, Lin; Zhang, He; Liu, Bing
2010-01-01
In time-critical wireless sensor network (WSN) applications, a high degree of reliability is commonly required. A dynamical jumping real-time fault-tolerant routing protocol (DMRF) is proposed in this paper. Each node utilizes the remaining transmission time of the data packets and the state of the forwarding candidate node set to dynamically choose the next hop. Once node failure, network congestion or a void region occurs, the transmission mode switches to jumping transmission mode, which can reduce the transmission time delay, guaranteeing that the data packets are sent to the destination node within the specified time limit. By using a feedback mechanism, each node dynamically adjusts its jumping probability to increase the ratio of successful transmission. Simulation results show that DMRF can not only efficiently reduce the effects of failed nodes, congestion and void regions, but also yield a higher ratio of successful transmission, a smaller transmission delay and a reduced number of control packets.
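The deadline check and the feedback-adjusted jumping probability can be sketched as follows; this is a generic illustration of the two mechanisms the abstract names, not the DMRF specification, and all thresholds and names are invented.

```python
import random

class NodeState:
    def __init__(self):
        self.p_jump = 0.1                  # current jumping probability

    def choose_mode(self, remaining_time, hops_left, per_hop_delay):
        # Force jumping mode if normal forwarding cannot meet the
        # packet's remaining transmission time; otherwise jump randomly.
        if remaining_time < hops_left * per_hop_delay:
            return "jump"
        return "jump" if random.random() < self.p_jump else "normal"

    def feedback(self, delivered_in_time, step=0.05):
        # Reinforce whichever mode has been meeting the deadline.
        if delivered_in_time:
            self.p_jump = max(0.0, self.p_jump - step)
        else:
            self.p_jump = min(1.0, self.p_jump + step)
```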
A Simple Exoskeleton That Assists Plantarflexion Can Reduce the Metabolic Cost of Human Walking
Malcolm, Philippe; Derave, Wim; Galle, Samuel; De Clercq, Dirk
2013-01-01
Background: Even though walking can be sustained for great distances, considerable energy is required for plantarflexion around the instant of opposite-leg heel contact. Different groups have attempted to reduce metabolic cost with exoskeletons, but none could achieve a reduction beyond the level of walking without an exoskeleton, possibly because there is no consensus on the optimal actuation timing. The main research question of our study was whether it is possible to obtain a higher reduction in metabolic cost by tuning the actuation timing. Methodology/Principal Findings: We measured metabolic cost by means of respiratory gas analysis. Test subjects walked with a simple pneumatic exoskeleton that assists plantarflexion with different actuation timings. We found that the exoskeleton can reduce metabolic cost by 0.18 ± 0.06 W kg^-1 or 6 ± 2% (standard error of the mean) (p = 0.019) below the cost of walking without the exoskeleton if actuation starts just before opposite-leg heel contact. Conclusions/Significance: The optimum timing that we found concurs with the prediction from a mathematical model of walking. While the present exoskeleton was not ambulant, measurements of joint kinetics reveal that the required power could be recycled from knee-extension deceleration work that occurs naturally during walking. This demonstrates that it is theoretically possible to build future ambulant exoskeletons that reduce metabolic cost, without power supply restrictions. PMID:23418524
Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko
2017-07-10
This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage at no extra cost, compared with the numerical method that uses the fast Fourier transform to transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
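In generic notation (ours, not the paper's), the structure being exploited is a two-dimensional convolution over the cylindrical coordinates, which the Fourier transform diagonalizes:

```latex
u(\phi, z) = \iint O(\phi', z')\, h(\phi - \phi',\, z - z')\, d\phi'\, dz'
\quad\Longrightarrow\quad
\tilde{u}(m, k_z) = \tilde{O}(m, k_z)\, \tilde{h}(m, k_z)
```

Here m indexes the angular harmonics and k_z the axial frequencies. The paper's contribution is to express the kernel spectrum \tilde{h} analytically through a Bessel-function expansion, so it never has to be built by sampling h and running an FFT over it.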
Design Challenges of a Rapid Cycling Synchrotron for Carbon/Proton Therapy
NASA Astrophysics Data System (ADS)
Cook, Nathan
2012-03-01
The growing interest in radiation therapy with protons and light ions has driven demand for new methods of ion acceleration and the delivery of ion beams. One exciting new platform for ion beam acceleration and delivery is the rapid cycling synchrotron. Operating at 15 Hz, rapid cycling achieves faster treatment times by making beam extraction possible at any energy during the cycle. Moreover, risk to the patient is reduced by requiring fewer particles in the beam line at a given time, thus eliminating the need for passive filtering and reducing the consequences of a malfunction. Lastly, the ability to switch between carbon-ion and proton-beam therapy gives the machine unmatched flexibility. However, these features impose challenges on the accelerator design. Maintaining a compact lattice requires careful tuning of lattice functions, tightly focusing combined-function magnets, and fast injection and extraction systems. Providing the necessary acceleration over a short cycle time also necessitates a five-fold frequency swing for carbon ions, further burdening the design requirements of ferrite-driven radiofrequency cavities. We will consider these challenges as well as some solutions selected for our current design.
Cutting medical transcription costs.
Forsman, John A
2003-07-01
Home-based, production-based medical transcription represents a substantial cost-saving opportunity. Fewer employees are required. Office space is not needed. Outsourcing costs are eliminated. Turnaround time is reduced.
Purging of multilayer insulation by gas diffusion
NASA Technical Reports Server (NTRS)
Sumner, I. E.; Spuckler, C. M.
1976-01-01
An experimental investigation was conducted to determine the time required to purge a multilayer insulation (MLI) panel with gaseous helium by means of gas diffusion, to obtain a condensable (nitrogen) gas concentration of less than 1 percent within the panel. Two flat, rectangular MLI panel configurations, one incorporating a butt joint, were tested. The insulation panels consisted of 15 double-aluminized Mylar radiation shields separated by double silk net spacers. The test results indicated that the rate at which the condensable gas concentration at the edge or at the butt joint of an MLI panel was reduced was a significant factor in the total time required to reduce the condensable gas concentration within the panel to less than 1 percent. The experimental data agreed well with analytical predictions made by using a simple, one-dimensional gas diffusion model in which the boundary conditions at the edge of the MLI panel were time dependent.
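A one-dimensional diffusion model of that kind can be sketched with an explicit finite-difference scheme; all parameter values below are illustrative, not taken from the report, and the exponentially decaying edge concentration simply stands in for the time-dependent boundary condition.

```python
import numpy as np

D = 0.7            # binary diffusion coefficient, cm^2/s (illustrative)
L = 50.0           # distance from panel edge to its center, cm
nx, dt = 101, 0.05
dx = L / (nx - 1)
assert D * dt / dx**2 < 0.5      # explicit-scheme stability limit

c = np.ones(nx)                  # N2 mole fraction, initially 1 everywhere
t = 0.0
while c.max() > 0.01:            # purge until < 1 % throughout the panel
    c[0] = np.exp(-t / 60.0)     # time-dependent edge boundary condition
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[-1] = c[-2]                # zero-flux condition at the panel center
    t += dt

print(f"predicted purge time ~ {t:.0f} s")
```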
Development of composite calibration standard for quantitative NDE by ultrasound and thermography
NASA Astrophysics Data System (ADS)
Dayal, Vinay; Benedict, Zach G.; Bhatnagar, Nishtha; Harper, Adam G.
2018-04-01
Inspection of aircraft components for damage utilizing ultrasonic Non-Destructive Evaluation (NDE) is a time-intensive endeavor. Additional time spent during aircraft inspections translates to added cost to the company performing them, and as such, reducing this expenditure is of great importance. There is also great variance in the calibration samples from one entity to another due to the lack of a common calibration set. By characterizing damage types, we can condense the required calibration sets and reduce the time required to perform calibration, while also providing procedures for the fabrication of these standard sets. We present here our effort to fabricate composite samples with known defects and to quantify the size and location of defects such as delaminations and impact damage. Ultrasonic and thermographic images are digitally enhanced to accurately measure the damage size. Ultrasonic NDE is compared with thermography.
Energy-Efficient Scheduling for Hybrid Tasks in Control Devices for the Internet of Things
Gao, Zhigang; Wu, Yifan; Dai, Guojun; Xia, Haixia
2012-01-01
In control devices for the Internet of Things (IoT), energy is one of the critical restriction factors. Dynamic voltage scaling (DVS) has been proved to be an effective method for reducing the energy consumption of processors. This paper proposes an energy-efficient scheduling algorithm for IoT control devices with hard real-time control tasks (HRCTs) and soft real-time tasks (SRTs). The main contribution of this paper includes two parts. First, it builds the Hybrid tasks with multi-subtasks of different function Weight (HoW) task model for IoT control devices. HoW describes the structure of HRCTs and SRTs and their properties, e.g., deadlines, execution time, preemption properties, and energy-saving goals. Second, it presents the Hybrid Tasks' Dynamic Voltage Scaling (HTDVS) algorithm. HTDVS first sets the slowdown factors of subtasks while meeting the different real-time requirements of HRCTs and SRTs, and then dynamically reclaims, reserves, and reuses the slack time of the subtasks to meet their ideal energy-saving goals. Experimental results show that HTDVS can reduce energy consumption by about 10%-80% while meeting the real-time requirements of HRCTs, that HRCTs help to reduce the deadline miss ratio (DMR) of systems, and that HTDVS has comparable performance with the greedy algorithm and better maintains the subtasks' ideal speeds. PMID:23112659
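Slack reclamation under DVS can be illustrated with a greedy sketch: when a subtask finishes under its worst-case budget, the leftover time lets the next subtask run slower. This is a textbook-style illustration of the reclaim-and-reuse idea, not the HTDVS algorithm itself; all numbers are invented.

```python
def reclaim_slack(subtasks, f_max=1.0, f_min=0.3):
    # subtasks: list of (worst_case_time, actual_time), both measured
    # at full speed. Returns the frequency chosen for each subtask.
    freqs, slack = [], 0.0
    for wcet, actual in subtasks:
        budget = wcet + slack                       # reuse inherited slack
        f = max(f_min, min(f_max, wcet / budget))   # slow down into the slack
        freqs.append(f)
        slack = budget - actual / f                 # unused time carries over
    return freqs

# Subtasks usually finish well under their worst case:
print(reclaim_slack([(10, 6), (10, 7), (10, 9)]))  # e.g. [1.0, ~0.71, ~0.70]
```

Running at frequency f for time actual/f uses less energy than a full-speed burst because dynamic power falls superlinearly with voltage and frequency.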
Risk/Requirements Trade-off Guidelines for Low Cost Satellite Systems
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Man, Kin F.
1996-01-01
The accelerating trend toward faster, better, cheaper missions places increasing emphasis on the trade-offs between requirements and risk to reduce cost and development times, while still improving quality and reliability. The Risk/Requirement Trade-off Guidelines discussed in this paper are part of an integrated approach to address the main issues by focusing on the sum of prevention, analysis, control, or test (PACT) processes.
Quality of Service for Real-Time Applications Over Next Generation Data Networks
NASA Technical Reports Server (NTRS)
Ivancic, William; Atiquzzaman, Mohammed; Bai, Haowei; Su, Hongjun; Jain, Raj; Duresi, Arjan; Goyal, Mukyl; Bharani, Venkata; Liu, Chunlei; Kota, Sastri
2001-01-01
This project, which started on January 1, 2000, was funded by NASA Glenn Research Center for duration of one year. The deliverables of the project included the following tasks: Study of QoS mapping between the edge and core networks envisioned in the Next Generation networks will provide us with the QoS guarantees that can be obtained from next generation networks. Buffer management techniques to provide strict guarantees to real-time end-to-end applications through preferential treatment to packets belonging to real-time applications. In particular, use of ECN to help reduce the loss on high bandwidth-delay product satellite networks needs to be studied. Effect of Prioritized Packet Discard to increase goodput of the network and reduce the buffering requirements in the ATM switches. Provision of new IP circuit emulation services over Satellite IP backbones using MPLS will be studied. Determine the architecture and requirements for internetworking ATN and the Next Generation Internet for real-time applications.
Optimal subinterval selection approach for power system transient stability simulation
Kim, Soobae; Overbye, Thomas J.
2015-10-21
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
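The spirit of the selection rule can be sketched as follows: find the fastest oscillatory mode of the linearized system and size the subinterval to a fraction of its period. This is our schematic reading of the approach, not the paper's exact criterion, and the state matrix below is an invented two-state example.

```python
import numpy as np

def pick_subinterval(A, main_dt, fraction=0.1):
    # Size the subinterval to `fraction` of the fastest mode's period,
    # expressed as main_dt divided by an integer so the steps align.
    eigs = np.linalg.eigvals(A)
    f_max = np.abs(eigs.imag).max() / (2 * np.pi)   # fastest oscillation, Hz
    if f_max == 0:
        return main_dt
    dt_needed = fraction / f_max
    n_sub = max(1, int(np.ceil(main_dt / dt_needed)))
    return main_dt / n_sub

# Linearized swing-equation-like example (illustrative numbers only):
A = np.array([[0.0, 1.0], [-3950.0, -0.5]])        # ~63 rad/s local mode
print(pick_subinterval(A, main_dt=0.02))           # -> 0.01 (two subintervals)
```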
Sensitive high-throughput screening for the detection of reducing sugars.
Mellitzer, Andrea; Glieder, Anton; Weis, Roland; Reisinger, Christoph; Flicker, Karlheinz
2012-01-01
The exploitation of renewable resources for the production of biofuels relies on efficient processes for the enzymatic hydrolysis of lignocellulosic materials. The development of enzymes and strains for these processes requires reliable and fast activity-based screening assays. Additionally, these assays are also required to operate on the microscale and on the high-throughput level. Herein, we report the development of a highly sensitive reducing-sugar assay in a 96-well microplate screening format. The assay is based on the formation of osazones from reducing sugars and para-hydroxybenzoic acid hydrazide. By using this sensitive assay, the enzyme loads and conversion times during lignocellulose hydrolysis can be reduced, thus allowing higher throughput. The assay is about five times more sensitive than the widely applied dinitrosalicylic acid based assay and can reliably detect reducing sugars down to 10 μM. The assay-specific variation over one microplate was determined for three different lignocellulolytic enzymes and ranges from 2 to 8%. Furthermore, the assay was combined with a microscale cultivation procedure for the activity-based screening of Pichia pastoris strains expressing functional Thermomyces lanuginosus xylanase A, Trichoderma reesei β-mannanase, or T. reesei cellobiohydrolase 2.
Feasibility demonstration of booster cross-over system for 3 1/2 inch SRB/MLP frangible nut system
NASA Technical Reports Server (NTRS)
1983-01-01
Recent testing of the SRB/MLP Frangible Nut System (SOS Part Number 114850-9/Boosters P/N 114848-3) at NASA indicated a need to reduce the function time between the boosters (2) within a single frangible nut. These boosters are initiated separately by electrical impulse(s). Coupling the output of each detonator with an explosive cross-over would reduce the function time between boosters (independent of electrical impulse) while providing additional redundancy to the system. The objectives of this program were to: provide an explosive cross-over between boosters, reduce the function time between boosters to less than one (1) millisecond within a given nut, reduce the cost of boosters, be compatible with the existing frangible nut system, and meet the requirements of USBI specs (nut 10SPC-0030, booster 10SPC-0031).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakakibara, Y.; Yamamoto, K.; Chen, D.
In interferometric cryogenic gravitational wave detectors, there are plans to cool the mirrors and their suspension systems (payloads) in order to reduce thermal noise, which is one of the fundamental noise sources. Because of the large payload masses (several hundred kg in total) and their thermal isolation, a cooling time of several months is required. Our calculation shows that a high-emissivity coating (e.g., a diamond-like carbon (DLC) coating) can effectively reduce the cooling time by enhancing radiative heat transfer. Here, we have experimentally verified the effect of the DLC coating on the reduction of the cooling time.
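The role of emissivity is visible in a lumped-capacitance toy model of radiative cooling toward a cold shield; every number below (mass, area, temperatures, the constant specific heat) is illustrative, and a real cryogenic payload's specific heat falls steeply with temperature, which this sketch ignores.

```python
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W m^-2 K^-4

def cooling_days(eps, mass=300.0, area=3.0, c=400.0,
                 T0=300.0, T_shield=8.0, T_stop=40.0, dt=100.0):
    # Integrate dT/dt = -eps*sigma*A*(T^4 - T_shield^4)/(m*c) down to
    # T_stop, below which radiation no longer dominates the cooling.
    T, t = T0, 0.0
    while T > T_stop:
        power = eps * SIGMA * area * (T**4 - T_shield**4)  # radiated, W
        T -= power * dt / (mass * c)
        t += dt
    return t / 86400.0    # seconds -> days

print(cooling_days(0.1), cooling_days(0.9))   # low vs high emissivity
```

In this model the cooling time scales roughly as 1/eps, which is why raising the surface emissivity with a coating shortens a months-long cool-down so effectively.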
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
LVGEMS Time-of-Flight Mass Spectrometry on Satellites
NASA Technical Reports Server (NTRS)
Herrero, Federico
2013-01-01
NASA's investigations of the upper atmosphere and ionosphere require measurements of the composition of the neutral air and ions. NASA is able to undertake these observations, but the instruments currently in use have their limitations. NASA has extended the scope of its research in the atmosphere and now requires more measurements covering more of the atmosphere. Out of this need, NASA developed multipoint measurements using miniaturized satellites, also called nanosatellites (e.g., CubeSats), that require a new generation of spectrometers that can fit into a 4 x 4 in. (about 10 x 10 cm) cross-section in the upgraded satellites. Overall, the new mass spectrometer required for the new depth of atmospheric research must fulfill a new level of low-voltage/low-power requirements, smaller size, and less risk of magnetic contamination. The Low-Voltage Gated Electrostatic Mass Spectrometer (LVGEMS) was developed to fulfill these requirements. The LVGEMS offers a new spectrometer that eliminates the magnetic field issues associated with magnetic sector mass spectrometers, reduces power, and is about 1/10 the size of previous instruments. LVGEMS employs the time-of-flight (TOF) technique in the GEMS mass spectrometer previously developed. However, like any TOF mass spectrometer, GEMS requires a rectangular waveform of large voltage amplitude, exceeding 100 V; that means that the voltage applied to one of the GEMS electrodes has to change from 0 to 100 V in only a few nanoseconds. Such electronic speed requires more power than can be provided in a CubeSat. In the LVGEMS, the amplitude of the rectangular waveform is reduced to about 1 V, compatible with digital electronics supplies and requiring little power.
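For context, every TOF instrument relies on flight time scaling with the square root of the mass-to-charge ratio, so even a ~1 V gate separates atmospheric species in time. A back-of-envelope sketch with an illustrative drift length, not the LVGEMS design values:

    from math import sqrt

    E_CHARGE = 1.602e-19  # C
    AMU = 1.661e-27       # kg

    def flight_time_s(mass_amu, accel_volts, drift_m=0.1):
        """t = L * sqrt(m / (2 q V)) for a singly charged ion."""
        return drift_m * sqrt(mass_amu * AMU / (2.0 * E_CHARGE * accel_volts))

    # Ionospheric species separate by tens of microseconds at ~1 V:
    for species, m in [("O+", 16), ("N2+", 28), ("O2+", 32)]:
        print(species, flight_time_s(m, 1.0))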
A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasenfest, B J; Capolino, F; Wilton, D
2005-02-02
A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here that belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.
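The acceleration rests on a standard identity: a translation-invariant (Toeplitz) interaction matrix embeds in a circulant one, so a matrix-vector product costs O(n log n) via the FFT instead of O(n^2). A generic one-dimensional illustration of that identity, not the GIFFT code itself:

    import numpy as np

    def toeplitz_matvec(kernel, x):
        """Apply T[i, j] = kernel[(i - j) + n - 1] to x via circulant
        embedding; kernel holds the 2n-1 values k_{-(n-1)}..k_{n-1}."""
        n = len(x)
        c = np.concatenate([kernel[n - 1:], kernel[:n - 1]])  # circulant column
        return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n - 1))[:n].real

    # Agrees with the dense O(n^2) product:
    n = 4
    kern = np.arange(1, 2 * n)  # k_{-3}..k_{3}
    T = np.array([[kern[i - j + n - 1] for j in range(n)] for i in range(n)])
    x = np.ones(n)
    print(np.allclose(toeplitz_matvec(kern, x), T @ x))  # True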
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on the time required for the evaluation of each model, where the Meta-Model requires the least amount of computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
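Of the five candidates, proper orthogonal decomposition is the easiest to sketch: collect output snapshots over the parameter range and keep the dominant singular vectors. A minimal sketch, assuming snapshots are already assembled column-wise (not the report's implementation):

    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """snapshots: n x m matrix, one column per parameter sample.
        Returns the leading left singular vectors capturing `energy`
        of the total variance."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cum = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(cum, energy)) + 1
        return U[:, :r]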
Rao, Vatturi Venkata Satya Prabhakar; Manthri, Ranadheer; Hemalatha, Pottumuthu; Kumar, Vuyyuru Navin; Azhar, Mohammad
2016-01-01
Hot lab dispensing of large doses of fluorine-18 fluorodeoxyglucose in master vials supplied from cyclotrons requires a high degree of skill in handling high doses. The presently practiced conventional method of fractionating from an inverted tiltable vial pig mounted on a metal frame has its own limitations, such as increased isotope handling times and exposure to the technologist. The innovative technique devised markedly improves fractionating efficiency along with speed, precision, and reduced dose exposure. PMID:27095872
Military Manpower Training Report for FY 1981.
1980-03-01
[Residue of a table of training loads by installation: Fort Benning, GA; Fort B. Harrison, IN; Fort Bliss, TX; Fort Bragg, NC; Fort Devens; ...] ...management action to reduce the administrative time used to form recruit training platoons. This action reduces the average time in training for new... must be made for course attrition, the number of students entering a course of instruction who fail to complete it. The total input requirement must...
Trusting outgroup, but not ingroup members, requires control: neural and behavioral evidence
Ambady, Nalini; Zaki, Jamil
2017-01-01
Trust and cooperation often break down across group boundaries, contributing to pernicious consequences, from polarized political structures to intractable conflict. As such, addressing such conflicts requires first understanding why trust is reduced in intergroup settings. Here, we clarify the structure of intergroup trust using neuroscientific and behavioral methods. We found that trusting ingroup members produced activity in brain areas associated with reward, whereas trusting outgroup members produced activity in areas associated with top-down control. Behaviorally, time pressure, which reduces people's ability to exert control, reduced individuals' trust in outgroup, but not ingroup, members. These data suggest that the exertion of control can help recover trust in intergroup settings, offering potential avenues for reducing intergroup failures in trust and the consequences of these failures. PMID:27798248
Model-Based Thermal System Design Optimization for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-01-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
NASA Technical Reports Server (NTRS)
Rouff, Christopher A. (Inventor); Sterritt, Roy (Inventor); Truszkowski, Walter F. (Inventor); Hinchey, Michael G. (Inventor); Gracanin, Denis (Inventor); Rash, James L. (Inventor)
2011-01-01
Described herein is a method that produces fully (mathematically) tractable development of policies for autonomic systems, from requirements through to code generation. This method is illustrated through an example showing how user-formulated policies can be translated into a formal model which can then be converted to code. The requirements-based programming method described provides faster, higher-quality development and maintenance of autonomic systems based on user formulation of policies. Further, the systems, methods, and apparatus described herein provide a way of analyzing policies for autonomic systems and facilitate the generation of provably correct implementations automatically, which in turn provides reduced development time, reduced testing requirements, guarantees of correctness of the implementation with respect to the policies specified at the outset, and a higher degree of confidence that the policies are both complete and reasonable. The ability to specify the policy for the management of a system and then automatically generate an equivalent implementation greatly improves the quality of software and the survivability of future missions, in particular when the system will operate untended in very remote environments, and greatly reduces development lead times and costs.
It's about Time for Autism Reform Legislation in Utah
ERIC Educational Resources Information Center
Shiozawa, Brian J.
2015-01-01
On 3 April 2014, Governor Gary Herbert signed into law a health insurance reform bill that requires private insurers to cover autism therapy. Specifically, SB57 requires state-regulated health plans to cover applied behavior analysis (ABA) therapy. While early diagnosis and intervention can reduce the long-term cost of autism, families are finding…
A segmented ion engine design for solar electric propulsion systems
NASA Technical Reports Server (NTRS)
Brophy, John R.
1992-01-01
A new ion engine design, called a segmented ion engine, is described which is capable of reducing the required ion source lifetime for small body rendezvous missions from 18,000 h to about 8,000 h. The use of SAND ion optics for the engine accelerator system makes it possible to substantially reduce the cost of demonstrating the required engine endurance. It is concluded that a flight test of a 5-kW xenon ion propulsion system on the ELITE spacecraft would enormously reduce the cost and risk of using ion propulsion on a planetary vehicle by addressing systems-level issues associated with flying a spacecraft radically different from conventional planetary vehicles.
Evaluation of super-water reducers for highway applications
NASA Astrophysics Data System (ADS)
Whiting, D.
1981-03-01
Super-water reducers were characterized and evaluated as potential candidates for the production of low water-to-cement ratio, high-strength concretes for highway construction applications. The admixtures were composed of either naphthalene or melamine sulfonated formaldehyde condensates. A mini-slump procedure was used to assess dosage requirements and the change in workability of cement pastes with time. The required dosage was found to be a function of the tricalcium aluminate content, alkali content, and fineness of the cement. Concretes exhibited high rates of slump loss when super-water reducers were used. The most promising area of application of these products appears to be in the production of dense, high cement content concrete using mobile concrete mixer/transporters.
Wickham, Fred; McMeekin, Helena; Burniston, Maria; McCool, Daniel; Pencharz, Deborah; Skillen, Annah; Wagner, Thomas
2017-12-01
The purpose of this study is to identify a method for optimising the administered activity and acquisition time for 18F-FDG PET imaging, yielding images of consistent quality for patients with varying body sizes and compositions, while limiting radiation doses to patients and staff. Patients referred for FDG scans had bioimpedance measurements. They were injected with 3 MBq/kg of 18F up to 370 MBq and scanned on a Siemens Biograph mCT at 3 or 4 min per bed position. Data were rebinned to simulate 2- and 1-min acquisitions. Subjective assessments of image quality made by an experienced physician were compared with objective measurements based on signal-to-noise ratio and noise equivalent counts (NEC). A target objective measure of image quality was identified. The activity and acquisition time required to achieve this were calculated for each subject. Multiple regression analysis was used to identify expressions for the activity and acquisition time required in terms of easily measurable patient characteristics. One hundred and eleven patients were recruited, and subjective and objective assessments of image quality were compared for 321 full and reduced-time scans. NEC-per-metre was identified as the objective measure which best correlated with the subjective assessment (Spearman rank correlation coefficient 0.77) and the best discriminator for images with a subjective assessment of "definitely adequate" (area under the ROC curve 0.94). A target of 37 Mcount/m was identified. Expressions were identified in terms of patient sex, height, and weight for the activity and acquisition time required to achieve this target. Including measurements of body composition in these expressions was not useful. Using these expressions would reduce the mean activity administered to this patient group by 66 MBq compared to the current protocol. Expressions have been identified for the activity and acquisition times required to achieve consistent image quality in FDG imaging with reduced patient and staff doses. These expressions might need to be adapted for other systems and reconstruction protocols.
Epstein, Nancy E
2015-01-01
Typically, fibrin sealants (FSs) and fibrin glues (FGs) are used to strengthen dural repairs during spinal surgery. In 2014, Epstein demonstrated that one FS/FG, Tisseel (Baxter International Inc., Westlake Village, CA, USA), equalized the average times to drain removal and length of stay (LOS) for patients with versus without excess bleeding (e.g., those who did not receive Tisseel) undergoing multilevel laminectomies with 1-2 level noninstrumented fusions (LamF) [6]. Here, Tisseel was utilized to promote hemostasis in two populations: 39 patients undergoing an average of 4.4-level lumbar laminectomies with an average of 1.3-level noninstrumented fusions (LamF), and 48 patients undergoing an average of 4.0-level laminectomies alone (Lam). We compared the average operative time, estimated blood loss (EBL), postoperative drainage, LOS, and transfusion requirements for the LamF versus Lam groups. The average operative times, EBL, postoperative drainage, LOS, and transfusion requirements were all greater for LamF versus Lam patients: operative times (4.1 vs. 3.0 h), average EBL (192.3 vs. 147.9 cc), drainage (day 1: 199.6 vs. 167.4 cc; day 2: 172.9 vs. 63.9 cc), average LOS (4.6 vs. 2.5 days), and transfusion requirements (18 units [U] of RBC in 11 LamF patients versus 3 U RBC in 2 Lam patients). Utilizing Tisseel to facilitate hemostasis in LamF versus Lam still resulted in greater operative times, EBL, postoperative average drainage, LOS, and transfusion requirements for patients undergoing the noninstrumented fusions. Although Tisseel decreases back-bleeding within the spinal canal, it does not reduce blood loss from the decorticated transverse processes in LamF.
2016-01-01
Microwave irradiation of tissue during fixation and subsequent histochemical staining procedures significantly reduces the time required for incubation in fixation and staining solutions. Minimizing the incubation time in fixative reduces disruption of tissue morphology, and reducing the incubation time in staining solution or antibody solution decreases nonspecific labeling. Reduction of incubation time in staining solution also decreases the level of background noise. Microwave-assisted tissue preparation is applicable for tissue fixation, decalcification of bone tissues, treatment of adipose tissues, antigen retrieval, and other special staining of tissues. Microwave-assisted tissue fixation and staining are useful tools for histological analyses. This review describes the protocols using microwave irradiation for several essential procedures in histochemical studies, and these techniques are applicable to other protocols for tissue fixation and immunostaining in the field of cell biology. PMID:27840640
[CMACPAR: a modified parallel neurocontroller for control processes].
Ramos, E; Surós, R
1999-01-01
CMACPAR is a parallel neurocontroller oriented to real-time systems such as control processes. Its main characteristics are a fast learning algorithm, a reduced number of calculations, great generalization capacity, local learning, and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the cerebellar model CMAC for the n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows for a significant reduction in training time and required memory size.
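A minimal serial sketch of the CMAC idea the abstract refers to: several offset coarse tilings hash the input to a handful of weights, and learning updates only those weights, which is what makes training fast and local. The tiling counts and learning rate below are illustrative; the paper's parallel projection scheme is not reproduced:

    import numpy as np

    class TinyCMAC:
        """One-dimensional CMAC: overlapping offset tilings, local updates."""
        def __init__(self, n_tilings=8, tiles=16, lr=0.2):
            self.n, self.tiles, self.lr = n_tilings, tiles, lr
            self.w = np.zeros((n_tilings, tiles))

        def _cells(self, x):  # x assumed in [0, 1)
            return [int(x * self.tiles + t / self.n) % self.tiles
                    for t in range(self.n)]

        def predict(self, x):
            return sum(self.w[t, c] for t, c in enumerate(self._cells(x)))

        def train(self, x, target):
            err = target - self.predict(x)
            for t, c in enumerate(self._cells(x)):
                self.w[t, c] += self.lr * err / self.n  # only local weights move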
AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images
Price Tack, Jennifer L.; West, Brian S.; McGowan, Conor P.; Ditchkoff, Stephen S.; Reeves, Stanley J.; Keever, Allison; Grand, James B.
2017-01-01
Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review with white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time, but even at the highest threshold value, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Estimates of white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase efficiency of camera trapping surveys.
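The detection idea, flagging a photo when enough pixels depart from what occurs frequently across the image series, can be sketched with a simple per-pixel background model. This is a schematic in the same spirit, not the authors' MATLAB implementation; the threshold plays the same sensitivity role described above:

    import numpy as np

    def flag_frames(stack, pixel_threshold=30, min_fraction=0.005):
        """stack: (n_images, height, width) grayscale array from one site.
        Flags frames whose fraction of pixels deviating from the per-pixel
        median background exceeds min_fraction. A higher pixel_threshold
        trades missed animals against manual review load."""
        background = np.median(stack, axis=0)
        deviations = np.abs(stack.astype(float) - background) > pixel_threshold
        return deviations.mean(axis=(1, 2)) > min_fraction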
Bouis, Howarth E
2002-12-01
The fundamental reason that plant breeding using either conventional breeding or biotechnology is so cost-effective is that the benefits of a one-time investment at a central research location can be multiplied over time across nations all over the world. Supplementation and fortification incur the same recurrent costs year after year in country after country. However, each intervention has its own comparative advantages, such that a combination of several interventions is required to substantially reduce micronutrient malnutrition. Improving the density of trace minerals in plants also reduces input requirements and raises crop yields. A simulation model for India and Bangladesh demonstrated that $42 million invested in conventional breeding in developing and planting iron- and zinc-dense varieties of rice and wheat on only 10% of the acreage used for these crops would return $4.9 billion in improved nutrition (including a total of 44 million prevented cases of anemia over 10 years) and higher agricultural productivity.
Development of Electromagnetically Actuated Vacuum Circuit Breaker for 72kV Rated Switchgear
NASA Astrophysics Data System (ADS)
Kim, Tae-Hyun; Tsukima, Mitsuru; Maruyama, Akihiko; Takahara, Osamu; Haruna, Kazushi; Yano, Tomotaka; Matsunaga, Toshihiro; Imamura, Kazuaki; Arioka, Masahiro; Takeuchi, Toshie
A new electromagnetically actuated vacuum circuit breaker (VCB) has been developed for a 72kV rated switchgear. Each phase of this VCB has a plurality of compact electromagnetic actuators, linked mechanically, providing the required driving energy. The mechanical linkage, working as a lever, magnifies the actuator stroke to the required stroke of a 72kV rated vacuum interrupter. An electromagnetic analysis coupled with motion, which considers the mechanical linkage of the plural actuators, has been developed for designing the driving behavior of this VCB. Using this analytical method and a quality engineering method known as the Taguchi method, we have clarified the parameters that are effective in reducing the time difference in driving behavior across tolerance specifications. Moreover, by analyzing the oscillatory behavior on closing the contacts, the structure of this VCB has been designed to reduce the bounce duration. Testing has confirmed that the new VCB's time difference is sufficiently short and its bounce duration is reduced. This VCB is highly reliable against variations in manufacturing and environment.
NASA Technical Reports Server (NTRS)
Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff
2016-01-01
The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter, Hubble-sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle, leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity, leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes, reducing the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increased the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within run times reasonable enough to meet project schedule deadlines.
Variation of Farmer Stock Grade Factors in Semi-Drying Trailers
USDA-ARS?s Scientific Manuscript database
Peanuts are increasingly being loaded into flat-bottom semi-drying trailers in the field and transported to peanut buying points for curing, grading, and marketing. Conveyances in excess of 15 t are probed 15 times using the pneumatic sampler, requiring considerable time for probing and reducing the...
78 FR 65180 - Airworthiness Directives; MD Helicopters, Inc., Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... reducing the retirement life of each tail rotor blade (blade), performing a one-time visual inspection of... required reporting information to the FAA within 24 hours following the one-time inspection. Since we... pitting and the shot peen surface's condition in addition to cracks and corrosion, and adds certain part...
USDA-ARS?s Scientific Manuscript database
Chlorophyll is an indicator of crop health and productivity. Measuring chlorophyll is usually done directly and requires significant time and resources. Indirect measurement of chlorophyll density using a handheld portable chlorophyll meter can reduce time. However, this information is very limit...
ERIC Educational Resources Information Center
Zielinski, Dave
2000-01-01
Managers look at online training as an activity that should be done "off time" whereas employees still think of it as something to be done during working hours. No valid study has shown that online delivery reduces learning time. A better understanding of learning needs must be considered before requiring online training. (JOW)
Ajzenberg, Henry; Newman, Paula; Harris, Gail-Anne; Cranston, Marnie; Boyd, J Gordon
2018-02-01
To reduce medication turnaround times during neurological emergencies, a multidisciplinary team developed a neurological emergency crash trolley in our intensive care unit. This trolley includes phenytoin, hypertonic saline, and mannitol, as well as other equipment. The aim of this study was to assess whether the cart reduced turnaround times for these medications. In this retrospective cohort study, medication delivery times for two-year epochs before and after its implementation were compared. Eligible patients were identified from our intensive care unit screening log. Adults who required emergent use of phenytoin, hypertonic saline, or mannitol while in the intensive care unit were included. Groups were compared with nonparametric analyses. The setting was a 33-bed general medical-surgical intensive care unit in an academic teaching hospital, and the main outcome was time to medication administration. In the pre-intervention group, there were 43 patients with 66 events. In the post-intervention group, there were 45 patients with 80 events. The median medication turnaround time was significantly reduced after implementation of the neurological emergency trolley (25 vs. 10 minutes, p=0.003). There was no statistically significant difference in intensive care or 30-day survival between the two cohorts. The implementation of a novel neurological emergency crash trolley in our intensive care unit reduced medication turnaround times.
Sui, Wen-yuan; Ye, Fang; Yang, Jun-lin
2016-04-27
Adolescent idiopathic scoliosis (AIS) surgery usually requires prolonged operative times with extensive soft tissue dissection and significant perioperative blood loss, and allogeneic blood products are frequently needed. Methods to reduce the requirement for transfusion would have a beneficial effect on these patients. Although many previous studies have revealed the efficacy of tranexamic acid (TXA) in spinal surgery, there is still a lack of agreement concerning the reduction of both blood loss and transfusion requirements by large-dose TXA in AIS surgery. The objective of this study was to evaluate the efficacy and safety of large-dose TXA in reducing transfusion requirements for allogeneic blood products in AIS surgery, using a retrospective study design with a historical control group. One hundred thirty-seven consecutive AIS patients who underwent surgical treatment with posterior spinal pedicle systems from August 2011 to March 2015 in our scoliosis center were retrospectively reviewed. Patients were divided into two groups: the TXA group and the historically recruited no-TXA group (NTXA). Preoperative demographics, radiographic parameters, operative parameters, estimated blood loss (EBL), total irrigation fluid, number of patients requiring blood transfusion, mean drop in Hb (pre-op Hb minus post-op Hb), haematocrit pre- and post-surgery, mean volume of blood transfusion, hospitalization time, and adverse effects were recorded and compared. All patients were successfully treated with satisfactory clinical and radiographic outcomes. There were 71 patients in the TXA group and 66 patients in the NTXA group. The preoperative demographics were homogeneous between the two groups (P > 0.05). There was no significant difference in average operative time between the two groups (209 min vs 215 min, P > 0.05). Patients in the TXA group showed a significant decrease in transfusion requirements, with an associated reduction in intraoperative blood loss of nearly 45% compared with the NTXA group (8 vs 37 patients transfused; 619 ml vs 1125 ml, P < 0.05). There was no significant difference in total irrigation fluid between the two groups (540 vs 550, P > 0.05). Additionally, patients in the NTXA group showed a significantly greater decrease in Hb than patients in the TXA group (5.2 g/dL vs 3.3 g/dL, P < 0.05). No significant difference was found in hospitalization time between the two groups (6.3 vs 7.2 days, P > 0.05). No minor adverse effects associated with the use of TXA were noted. Routine use of large-dose tranexamic acid appears to be effective and safe in reducing allogeneic blood transfusion and blood loss in adolescent idiopathic scoliosis surgery.
Solar geoengineering to limit the rate of temperature change.
MacMartin, Douglas G; Caldeira, Ken; Keith, David W
2014-12-28
Solar geoengineering has been suggested as a tool that might reduce damage from anthropogenic climate change. Analysis often assumes that geoengineering would be used to maintain a constant global mean temperature. Under this scenario, geoengineering would be required either indefinitely (on societal time scales) or until atmospheric CO2 concentrations were sufficiently reduced. Impacts of climate change, however, are related to the rate of change as well as its magnitude. We thus describe an alternative scenario in which solar geoengineering is used only to constrain the rate of change of global mean temperature; this leads to a finite deployment period for any emissions pathway that stabilizes global mean temperature. The length of deployment and amount of geoengineering required depends on the emissions pathway and allowable rate of change, e.g. in our simulations, reducing the maximum approximately 0.3°C per decade rate of change in an RCP 4.5 pathway to 0.1°C per decade would require geoengineering for 160 years; under RCP 6.0, the required time nearly doubles. We demonstrate that feedback control can limit rates of change in a climate model. Finally, we note that a decision to terminate use of solar geoengineering does not automatically imply rapid temperature increases: feedback could be used to limit rates of change in a gradual phase-out.
2018-01-16
The Red Sky/Red Mesa supercomputing platform dramatically reduces the time required to simulate complex fuel models, from 4-6 months to just 4 weeks, allowing researchers to accelerate the pace at which they can address these complex problems. Its speed also reduces the need for laboratory and field testing, allowing for energy reduction far beyond data center walls.
Extracellular space preservation aids the connectomic analysis of neural circuits
Pallotto, Marta; Watkins, Paul V; Fubara, Boma; Singer, Joshua H; Briggman, Kevin L
2015-01-01
Dense connectomic mapping of neuronal circuits is limited by the time and effort required to analyze 3D electron microscopy (EM) datasets. Algorithms designed to automate image segmentation suffer from substantial error rates and require significant manual error correction. Any improvement in segmentation error rates would therefore directly reduce the time required to analyze 3D EM data. We explored preserving extracellular space (ECS) during chemical tissue fixation to improve the ability to segment neurites and to identify synaptic contacts. ECS preserved tissue is easier to segment using machine learning algorithms, leading to significantly reduced error rates. In addition, we observed that electrical synapses are readily identified in ECS preserved tissue. Finally, we determined that antibodies penetrate deep into ECS preserved tissue with only minimal permeabilization, thereby enabling correlated light microscopy (LM) and EM studies. We conclude that preservation of ECS benefits multiple aspects of the connectomic analysis of neural circuits. DOI: http://dx.doi.org/10.7554/eLife.08206.001 PMID:26650352
Ammonia sanitization of blackwater for safe use as fertilizer.
Fidjeland, Jörgen; Svensson, Sven-Erik; Vinnerås, Björn
2015-01-01
Source-separated blackwater from low-flush toilets contains plant-available nutrients and can be used as a fertilizer. The aim of the study was to evaluate the impact on pathogen inactivation when treating blackwater with urea and/or lime. Blackwater was spiked with Salmonella typhimurium, Escherichia coli O157, Enterococcus faecalis, and Ascaris suum eggs, and treated with urea and/or lime in concentrations up to 0.1% w/w. The bottles were kept in a storage facility (manure slurry tank) for 102 days while monitoring the pathogen concentrations. The treatment time needed to meet the requirement for Salmonella and E. coli reduction could be reduced at least six-fold. The enterococci were more persistent, and only the highest treatment doses had a significantly higher inactivation than the controls. The Ascaris egg viability was only reduced by around 50%, so higher urea/lime doses and/or longer treatment times are required to fulfill the treatment requirements of 3 log10 reductions of parasite eggs.
Lift and Power Required for Flapping Wing Hovering Flight on Mars
NASA Astrophysics Data System (ADS)
Pohly, Jeremy; Sridhar, Madhu; Bluman, James; Kang, Chang-Kwon; Landrum, D. Brian; Fahimi, Farbod; Aono, Hikaru; Liu, Hao
2017-11-01
Achieving flight on Mars is challenging due to the ultra-low density atmosphere. Bio-inspired flapping motion can generate sufficient lift if bumblebee-inspired wings are scaled up between 2 and 4 times their nominal size. However, due to this scaling, the inertial power required to sustain hover increases and dominates over the aerodynamic power. Our results show that a torsional spring placed at the wing root can reduce the flapping power required for hover by efficiently storing and releasing energy while operating at its resonance frequency. The spring assisted reduction in flapping power is demonstrated with a well-validated, coupled Navier-Stokes and flight dynamics solver. The total power is reduced by 79%, whereas the flapping power is reduced by 98%. Such a reduction in power paves the way for an efficient, realizable micro air vehicle capable of vertical takeoff and landing as well as sustained flight on Mars. Alabama Space Grant Consortium Fellowship.
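The saving comes from resonance: if a torsional spring at the wing root is sized so the spring-wing system resonates at the flapping frequency, the spring exchanges the inertial torque each half-stroke and the actuator supplies mainly the aerodynamic losses. A back-of-envelope sketch with illustrative numbers, not the study's bumblebee-wing model:

    from math import pi

    def resonant_stiffness(wing_inertia, flap_freq_hz):
        """Torsional stiffness k (N*m/rad) putting the spring-wing system's
        natural frequency f = (1/2pi)*sqrt(k/I) at the flapping frequency."""
        return wing_inertia * (2.0 * pi * flap_freq_hz) ** 2

    # e.g. an up-scaled wing with I ~ 2e-7 kg*m^2 flapping at 60 Hz:
    print(resonant_stiffness(2e-7, 60.0))  # ~0.028 N*m/rad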
NASA Technical Reports Server (NTRS)
Obrien, Charles J.
1993-01-01
Existing NASA research contracts are supporting development of advanced reinforced polymer and metal matrix composites for use in liquid rocket engines of the future. Advanced rocket propulsion concepts, such as modular platelet engines, dual-fuel dual-expander engines, and variable mixture ratio engines, require advanced materials and structures to reduce overall vehicle weight as well as address specific propulsion system problems related to elevated operating temperatures, new engine components, and unique operating processes. High performance propulsion systems with improved manufacturability and maintainability are needed for single stage to orbit vehicles and other high performance mission applications. One way to satisfy these needs is to develop a small engine which can be clustered in modules to provide required levels of total thrust. This approach should reduce development schedule and cost requirements by lowering hardware lead times and permitting the use of existing test facilities. Modular engines should also reduce operational costs associated with maintenance and parts inventories.
Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J
2015-10-01
To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculation required for full-FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine format. The proposed rFOV reconstruction reduced the gridding time by 97%, and the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which was over four times higher than with density compensation. The image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstruction to shorten reconstruction duration.
Hydes, Theresa; Hansi, Navjyot; Trebble, Timothy M
2012-01-01
Upper gastrointestinal (UGI) endoscopy is a routine healthcare procedure with a defined patient pathway. The objective of this study was to redesign this pathway for unsedated patients using lean thinking transformation to focus on patient-derived value-adding steps, remove waste, and create a more efficient process. This was to form the basis of a pathway template that was transferable to other endoscopy units. A literature search of patient expectations for UGI endoscopy identified patient-derived value. A value stream map of the current pathway was created. The minimum and maximum time per step, bottlenecks, and staff-staff interactions were recorded. This information was used for service transformation using lean thinking. A patient pathway template was created and implemented in a secondary unit. Questionnaire studies were performed to assess patient satisfaction. In the primary unit, the patient pathway was reduced from 19 to 11 steps, with a reduction in the maximum lead time from 375 to 80 min following lean thinking transformation. The minimum value/lead time ratio increased from 24% to 49%. The patient pathway was redesigned as a 'cellular' system with minimised patient and staff travelling distances, waiting times, paperwork, and handoffs. Nursing staff requirements were reduced by 25%. Patient-prioritised aspects of care were emphasised, with increased patient-endoscopist interaction time. The template was successfully introduced into a second unit with an overall positive patient satisfaction rating of 95%. Lean thinking transformation of the unsedated UGI endoscopy pathway results in reduced waiting times, reduced staffing requirements, and improved patient flow, and can form the basis of a pathway template which may be successfully transferred to alternative endoscopy environments with high levels of patient satisfaction.
45 CFR 1225.11 - Amount of attorney fees.
Code of Federal Regulations, 2013 CFR
2013-10-01
... appropriate Director. Such agreement shall immediately be reduced to writing. If the complainant, the... following standards: The time and labor required, the novelty and difficulty of the questions, the skills...
45 CFR 1225.11 - Amount of attorney fees.
Code of Federal Regulations, 2012 CFR
2012-10-01
... appropriate Director. Such agreement shall immediately be reduced to writing. If the complainant, the... following standards: The time and labor required, the novelty and difficulty of the questions, the skills...
45 CFR 1225.11 - Amount of attorney fees.
Code of Federal Regulations, 2010 CFR
2010-10-01
... appropriate Director. Such agreement shall immediately be reduced to writing. If the complainant, the... following standards: The time and labor required, the novelty and difficulty of the questions, the skills...
45 CFR 1225.11 - Amount of attorney fees.
Code of Federal Regulations, 2011 CFR
2011-10-01
... appropriate Director. Such agreement shall immediately be reduced to writing. If the complainant, the... following standards: The time and labor required, the novelty and difficulty of the questions, the skills...
45 CFR 1225.11 - Amount of attorney fees.
Code of Federal Regulations, 2014 CFR
2014-10-01
... appropriate Director. Such agreement shall immediately be reduced to writing. If the complainant, the... following standards: The time and labor required, the novelty and difficulty of the questions, the skills...
A fast sequence assembly method based on compressed data structures.
Liang, Peifeng; Zhang, Yancong; Lin, Kui; Hu, Jinglu
2014-01-01
Assembling a large genome from next-generation sequencing reads requires large computer memory and a long execution time. To reduce these requirements, a memory- and time-efficient assembler, called FMJ-Assembler, is presented; it applies an FM-index in JR-Assembler, where FM stands for the FMR-index derived from the FM-index and BWT, and J for jumping extension. The FMJ-Assembler uses an expanded FM-index and BWT to compress read data to save memory, and its jumping extension method makes it faster in CPU time. An extensive comparison of the FMJ-Assembler with current assemblers shows that it achieves a better or comparable overall assembly quality while requiring less memory and CPU time. These advantages indicate that the FMJ-Assembler will be an efficient assembly method for next-generation sequencing technology.
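The compression that makes FM-index assemblers memory-frugal rests on the Burrows-Wheeler transform. A minimal, quadratic, demonstration-only construction follows; production assemblers build the transform via suffix arrays instead:

    def bwt(text, sentinel="$"):
        """Naive Burrows-Wheeler transform: last column of sorted rotations.
        O(n^2 log n) -- fine for illustration, not for genome-scale data."""
        s = text + sentinel
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(rot[-1] for rot in rotations)

    print(bwt("ACGTACGT"))  # runs of equal characters compress well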
Standby power generation under utility curtailment contract agreements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nolan, G.J.; Puccio, V.J.; Calhoun, C.W.
1995-12-31
Many utilities in the US offer large industrial and commercial customers power sales contracts which have attractive rates under a curtailment requirement. This curtailment requirement allows the utility to require the customer to reduce its power demand to a predetermined level within a specific time period. If the required curtailment is not achieved by the customer within the allocated time period, stiff financial penalties are usually enforced by the utility. The attractiveness of the contract rates usually is proportional to the amount of curtailment required. To take advantage of these attractive rates, a customer must be able to withstand the curtailment without supplemental generation or must add standby generation to meet its needs. Obviously, the cost of the curtailments to the customer should not exceed the economic benefits of reduced rates. This paper reviews the alternatives faced by a curtailment contract customer together with potential load shedding and standby generation system designs. An example of implementing a curtailment contract at an existing industrial facility is presented. The example facility, Boeing Helicopters of Philadelphia, Pennsylvania required both load shedding and standby generation. The load shedding scheme is fairly complex and is controlled by a programmable logic controller (PLC). The standby generation and load shedding systems for the example facility are examined in detail. Also, lessons learned from implementing the required modifications to the example facility are discussed.
Active learning reduces annotation time for clinical concept extraction.
Kholghi, Mahnoosh; Sitbon, Laurianne; Zuccon, Guido; Nguyen, Anthony
2017-10-01
To investigate: (1) the annotation time savings of various active learning query strategies compared to supervised learning and a random sampling baseline, and (2) the benefits of active learning-assisted pre-annotations in accelerating the manual annotation process compared to de novo annotation. There are 73 and 120 discharge summary reports, provided by the Beth Israel institute, in the train and test sets of the concept extraction task in the i2b2/VA 2010 challenge, respectively. The 73 reports were used in user study experiments for manual annotation. First, all sequences within the 73 reports were manually annotated from scratch. Next, active learning models were built to generate pre-annotations for the sequences selected by a query strategy. The annotation/reviewing time per sequence was recorded. The 120 test reports were used to measure the effectiveness of the active learning models. When annotating from scratch, active learning reduced the annotation time by up to 35% and 28% compared to a fully supervised approach and a random sampling baseline, respectively. Reviewing active learning-assisted pre-annotations resulted in a further 20% reduction of the annotation time when compared to de novo annotation. The number of concepts that require manual annotation is a good indicator of the annotation time for various active learning approaches, as demonstrated by the high correlation between time rate and concept annotation rate. Active learning has a key role in reducing the time required to manually annotate domain concepts from clinical free text, either when annotating from scratch or when reviewing active learning-assisted pre-annotations.
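The query loop behind such strategies is short. Below is a generic margin-based uncertainty sampling sketch with a scikit-learn classifier standing in for the sequence labeler; the feature representation, batch size, and classifier choice are illustrative assumptions, not the i2b2 pipeline:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def uncertainty_sampling(X_pool, y_pool, rounds=10, batch=5, seed=0):
        """Iteratively ask the annotator for the examples the current
        model is least sure about (smallest top-two probability margin).
        Assumes the initial random batch contains both classes."""
        rng = np.random.default_rng(seed)
        labeled = list(rng.choice(len(X_pool), size=batch, replace=False))
        model = None
        for _ in range(rounds):
            model = LogisticRegression(max_iter=1000)
            model.fit(X_pool[labeled], y_pool[labeled])
            proba = np.sort(model.predict_proba(X_pool), axis=1)
            margin = proba[:, -1] - proba[:, -2]  # small margin = uncertain
            queries = [i for i in np.argsort(margin) if i not in labeled]
            labeled.extend(queries[:batch])       # "send for annotation"
        return model, labeled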
Meeting the 80-hour work week requirement: what did we cut?
Chung, Raphael; Ahmed, Naveed; Chen, Peter
2004-01-01
To meet the new accreditation requirement, small programs with limited manpower must make hard decisions to safeguard quality. We devised a system to meet the requirement in our own environment, making the obligatory cuts in educational components as prioritized by the trainees. This study examined what aspects of training were impacted and the residents' perception of the resulting change. In a fully accredited program where the baseline work hours/week exceeded the new requirement by over 20% even with full deployment of physician's assistants, the strategies used included reducing external rotations, transitioning PGY-3 into senior responsibility, and integrating senior rotations at 2 hospitals into 1 (2 weeks/month), so that time in a lower-volume hospital helped to bring the monthly average to target. Residents were surveyed at 6-month intervals for their perception of the change. Compared with baseline, the new system averaged 77 ± 5 hours/week, significantly reduced from before (98 ± 12, p < 0.01), but with greatly reduced continuity of care (28 ± 10% vs. 88 ± 8%, p < 0.001), reduced consultations seen (19 ± 4 vs. 36 ± 7 per week, p < 0.001), reduced conference attendance (5.7 vs. 3.5 per week, p < 0.001), and reduced operations (55 ± 7 vs. 68 ± 9 per week for the program). External rotations have been reduced by 3 months, and outpatient clinics merged from 5 to 2. Surveys showed improvement in fatigue-related issues for junior residents. Senior residents were dissatisfied with the reduced educational components. Reducing work hours cannot be accomplished without reducing educational components. Unlike junior residents, senior residents felt less fulfilled with the new system and did not benefit in physical fatigue.
20 CFR 726.109 - Increase or reduction in the amount of security.
Code of Federal Regulations, 2010 CFR
2010-04-01
... LABOR FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, AS AMENDED BLACK LUNG BENEFITS; REQUIREMENTS FOR... Office may require. The Office may reduce the amount of security at any time on its own initiative, or upon the application of a self-insurer, when it believes the facts warrant a reduction. A self-insurer...
A Massively Parallel Bayesian Approach to Planetary Protection Trajectory Analysis and Design
NASA Technical Reports Server (NTRS)
Wallace, Mark S.
2015-01-01
The NASA Planetary Protection Office has levied a requirement that the upper stage of future planetary launches have a less than 10^-4 chance of impacting Mars within 50 years after launch. A brute-force approach requires a decade of computer time to demonstrate compliance. By using a Bayesian approach and taking advantage of the demonstrated reliability of the upper stage, the required number of fifty-year propagations can be massively reduced. By spreading the remaining embarrassingly parallel Monte Carlo simulations across multiple computers, compliance can be demonstrated in a reasonable time frame. The method used is described here.
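The saving comes from conditioning: P(impact) = P(stage anomaly) x P(impact | anomaly), and only the rare anomalous-stage states need fifty-year propagations. A schematic of the parallel estimate; the propagator stand-in and reliability value below are placeholders, not mission numbers:

    from concurrent.futures import ProcessPoolExecutor
    import random

    P_STAGE_ANOMALY = 1e-2  # placeholder for the stage's demonstrated anomaly rate

    def propagate_50yr(seed):
        """Stand-in for one fifty-year trajectory propagation of an
        anomalous-stage state; a real run would call an ephemeris
        propagator and return whether the stage impacts Mars."""
        return random.Random(seed).random() < 5e-3

    def impact_probability(n_samples=10_000, workers=8):
        # Only the conditional term needs expensive propagations, so the
        # Monte Carlo effort shrinks by the stage's demonstrated reliability.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            hits = sum(pool.map(propagate_50yr, range(n_samples), chunksize=256))
        return P_STAGE_ANOMALY * hits / n_samples

    if __name__ == "__main__":  # guard required for process pools
        print(impact_probability())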
Yamanouchi, Satoshi; Ishii, Tadashi; Morino, Kazuma; Furukawa, Hajime; Hozawa, Atsushi; Ochi, Sae; Kushimoto, Shigeki
2014-12-01
When disasters that affect a wide area occur, external medical relief teams play a critical role in the affected areas by helping to alleviate the burden caused by surging numbers of individuals requiring health care. Despite this, no system has been established for managing deployed medical relief teams during the subacute phase following a disaster. After the Great East Japan Earthquake and tsunami, the Ishinomaki Medical Zone was the most severely-affected area. Approximately 6,000 people died or were missing, and the immediate evacuation of approximately 120,000 people to roughly 320 shelters was required. As many as 59 medical teams came to participate in relief activities. Daily coordination of activities and deployment locations became a significant burden to headquarters. The Area-based/Line-linking Support System (Area-Line System) was thus devised to resolve these issues for medical relief and coordinating activities. A retrospective analysis was performed to examine the effectiveness of the medical relief provided to evacuees using the Area-Line System with regards to the activities of the medical relief teams and the coordinating headquarters. The following were compared before and after establishment of the Area-Line System: (1) time required at the coordinating headquarters to collect and tabulate medical records from shelters visited; (2) time required at headquarters to determine deployment locations and activities of all medical relief teams; and (3) inter-area variation in number of patients per team. The time required to collect and tabulate medical records was reduced from approximately 300 to 70 minutes/day. The number of teams at headquarters required to sort through data was reduced from 60 to 14. The time required to determine deployment locations and activities of the medical relief teams was reduced from approximately 150 hours/month to approximately 40 hours/month. Immediately prior to establishment of the Area-Line System, the variation of the number of patients per team was highest. Variation among regions did not increase after establishment of the system. This descriptive analysis indicated that implementation of the Area-Line System, a systematic approach for long-term disaster medical relief across a wide area, can increase the efficiency of relief provision to disaster-stricken areas.
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
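The surrogate idea behind Polynomial Chaos Expansion can be sketched in one parameter: fit an inexpensive polynomial in a standard-normal germ to a handful of expensive solver runs, then take statistics from cheap surrogate samples. A minimal sketch with an illustrative stand-in solver, not the AVT-191 toolchain:

    import numpy as np

    def pce_moments(solver, degree=3, n_train=40, n_cheap=100_000, seed=0):
        """Fit a 1-parameter probabilists'-Hermite polynomial chaos surrogate
        to an expensive solver, then estimate output moments from cheap
        surrogate samples instead of repeated full analyses."""
        rng = np.random.default_rng(seed)
        xi = rng.standard_normal(n_train)          # standard-normal germ
        y = np.array([solver(x) for x in xi])      # the few expensive runs
        coef = np.polynomial.hermite_e.hermefit(xi, y, degree)
        samples = np.polynomial.hermite_e.hermeval(
            rng.standard_normal(n_cheap), coef)
        return samples.mean(), samples.std()

    # e.g. a hypothetical stand-in for a flutter-response quantity:
    print(pce_moments(lambda x: 100.0 + 5.0 * x + 0.8 * x**2))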
Enhanced Handoff Scheme for Downlink-Uplink Asymmetric Channels in Cellular Systems
2013-01-01
In the latest cellular networks, data services like SNS and UCC can create asymmetric packet generation rates over the downlink and uplink channels. This asymmetry can lead to a downlink-uplink asymmetric channel condition being experienced by cell edge users. This paper proposes a handoff scheme to cope effectively with downlink-uplink asymmetric channels. The proposed handoff scheme exploits the uplink channel quality as well as the downlink channel quality to determine the appropriate timing and direction of handoff. We first introduce downlink and uplink channel models that consider the intercell interference, to verify the downlink-uplink channel asymmetry. Based on these results, we propose an enhanced handoff scheme that exploits both the uplink and downlink channel qualities to reduce the handoff-call dropping probability and the service interruption time. The simulation results show that the proposed handoff scheme reduces the handoff-call dropping probability by about 30% and increases the satisfaction of the service interruption time requirement by about 7% under high offered load, compared to conventional mobile-assisted handoff. In particular, the proposed handoff scheme is more efficient when the uplink QoS requirement is much stricter than the downlink QoS requirement or the uplink channel quality is worse than the downlink channel quality. PMID:24501576
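As a concrete illustration of the idea in this abstract, the sketch below (Python, illustrative only) triggers handoff on a combined downlink/uplink metric with a hysteresis margin; the SINR fields, the weighting, and the margin are assumptions, not parameters from the paper.

    from dataclasses import dataclass

    @dataclass
    class ChannelReport:
        dl_sinr_db: float  # downlink SINR as measured by the mobile
        ul_sinr_db: float  # uplink SINR as measured by the base station

    def should_hand_off(serving: ChannelReport, target: ChannelReport,
                        hysteresis_db: float = 3.0,
                        ul_weight: float = 0.5) -> bool:
        """Hand off when the target beats the serving cell on a combined
        downlink/uplink metric by more than the hysteresis margin."""
        def combined(r: ChannelReport) -> float:
            return (1 - ul_weight) * r.dl_sinr_db + ul_weight * r.ul_sinr_db
        return combined(target) > combined(serving) + hysteresis_db

    # A cell-edge user with a good downlink but degraded uplink: a
    # downlink-only rule would keep the serving cell; this rule hands off.
    serving = ChannelReport(dl_sinr_db=12.0, ul_sinr_db=-2.0)
    target = ChannelReport(dl_sinr_db=10.0, ul_sinr_db=9.0)
    print(should_hand_off(serving, target))  # True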
Utilization of UV Curing Technology to Significantly Reduce the Manufacturing Cost of LIB Electrodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voelker, Gary; Arnold, John
2015-11-30
Previously identified novel binders and associated UV curing technology have been shown to reduce the time required to apply and finish electrode coatings from tens of minutes to less than one second. This revolutionary approach can result in dramatic increases in process speeds, significantly reduced capital (by a factor of 10 to 20) and operating costs, reduced energy requirements, and reduced environmental concerns and costs due to the virtual elimination of harmful volatile organic solvents and the associated solvent dryers and recovery systems. The accumulated advantages of higher speed, lower capital and operating costs, reduced footprint, lack of VOC recovery, and reduced energy cost add up to a 90% reduction in the manufacturing cost of cathodes. When commercialized, the resulting cost reduction in lithium batteries will allow storage device manufacturers to expand their sales in the market and thereby accrue the energy savings of broader utilization of HEVs, PHEVs and EVs in the U.S.; a broad technology export market is also envisioned.
Heroic Reliability Improvement in Manned Space Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the mean time between failures (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 2/lambda. Cutting the failure rate in half requires doubling the test and redesign time and finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
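The arithmetic in this abstract is easy to make explicit. A minimal sketch, assuming an exponential failure model and that each failure cause is removed once observed: the expected test time to surface a mode of rate lambda is 1/lambda, so each further halving of the residual rate doubles the wait.

    def expected_time_to_observe(rate_per_hour: float) -> float:
        # Exponential model: a failure at rate lambda first appears,
        # on average, after 1/lambda hours (its MTBF).
        return 1.0 / rate_per_hour

    lam = 1e-3  # illustrative starting failure rate, per hour
    total_test_hours = 0.0
    for halving in range(5):
        rate = lam / (2 ** halving)
        total_test_hours += expected_time_to_observe(rate)
        print(f"after removing the {rate:.2e}/h mode: "
              f"cumulative test time {total_test_hours:,.0f} h")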
The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhou, Liqing
2015-12-01
With the development of satellite remote sensing technology, the volume of remote sensing image data has grown to the point where traditional segmentation techniques cannot meet the processing and storage requirements of massive imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster that parallelizes the mean shift segmentation algorithm under the MapReduce model. This not only preserves the quality of remote sensing image segmentation but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm is thus of practical significance and value.
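The paper's Hadoop implementation is not reproduced here, but the shape of the computation can be sketched in plain Python: a map phase that tiles the image and a reduce phase that runs mean-shift iterations within each tile. Tile size, bandwidth, and the in-memory stand-in for MapReduce are all illustrative assumptions.

    import numpy as np
    from collections import defaultdict

    def mean_shift(points: np.ndarray, bandwidth: float, iters: int = 5) -> np.ndarray:
        """Shift every point toward the mean of its neighbors within the bandwidth."""
        shifted = points.astype(float).copy()
        for _ in range(iters):
            for i, p in enumerate(shifted):
                d = np.linalg.norm(shifted - p, axis=1)
                shifted[i] = shifted[d < bandwidth].mean(axis=0)
        return shifted

    def map_phase(image: np.ndarray, tile: int):
        # Emit (tile_id, pixel block) pairs, as a mapper would.
        for y in range(0, image.shape[0], tile):
            for x in range(0, image.shape[1], tile):
                yield (y // tile, x // tile), image[y:y + tile, x:x + tile]

    def reduce_phase(pairs, bandwidth: float):
        # Each reducer converges the pixels of one tile to their modes.
        modes = defaultdict(list)
        for key, block in pairs:
            pixels = block.reshape(-1, block.shape[-1])
            modes[key] = mean_shift(pixels, bandwidth)
        return modes

    image = np.random.randint(0, 255, size=(64, 64, 3))
    segmented = reduce_phase(map_phase(image, tile=16), bandwidth=30.0)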
NASA Astrophysics Data System (ADS)
Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas
2013-09-01
The new High Efficiency Video Coding Standard (HEVC) was finalized in January 2013. Compared to its predecessor H.264/MPEG-4 AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high definition video is feasible using instruction extensions of the processor, while decoding 4K ultra high definition video in real time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require bitstreams with special indication.
The electronic, 'paperless' medical office; has it arrived?
Gates, P; Urquhart, J
2007-02-01
Modern information technology offers efficiencies in medical practice, with a reduction in secretarial time in maintaining, filing and retrieving the paper medical record. Electronic requesting of investigations allows tracking of outstanding results. Less storage space is required and telephone calls from pharmacies, pathology and medical imaging service providers to clarify the hand-written request are abolished. Voice recognition software reduces secretarial typing time per letter. These combined benefits can lead to significantly reduced costs and improved patient care. The paperless office is possible, but requires commitment and training of all staff; it is preferable but not absolutely essential that at least one member of the practice has an interest and some expertise in computers. More importantly, back-up from information technology providers and back-up of the electronic data are absolutely crucial and a paperless environment should not be considered without them.
Spacelab Mission Implementation Cost Assessment (SMICA)
NASA Technical Reports Server (NTRS)
Guynes, B. V.
1984-01-01
A total savings of approximately 20 percent is attainable if: (1) mission management and ground processing schedules are compressed; (2) the equipping, staffing, and operating of the Payload Operations Control Center are revised; and (3) methods of working with experiment developers are changed. The development of a new mission implementation technique, which includes mission definition, experiment development, and mission integration/operations, is examined. The Payload Operations Control Center is to relocate and utilize new computer equipment to produce cost savings. Methods of reducing costs by minimizing the Spacelab and payload processing time during pre- and post-mission operation at KSC are analyzed. The changes required to reduce costs in the analytical integration process are studied. The influence of time, requirements accountability, and risk on costs is discussed. Recommendations for cost reductions developed by the Spacelab Mission Implementation Cost Assessment study are listed.
Employment and First-Year College Achievement: The Role of Self-Regulation and Motivation
ERIC Educational Resources Information Center
Huie, Faye C.; Winsler, Adam; Kitsantas, Anastasia
2014-01-01
Students often work in order to meet monetary requirements for college. However, employment reduces the time students can devote to their studies, which can hinder performance. This study examined whether motivation (self-efficacy, goal orientation) and self-regulated learning (help-seeking, metacognitive self-regulation, time management and effort…
USDA-ARS?s Scientific Manuscript database
In conservation agriculture, cover crops are utilized to improve soil properties and to enhance cash crop growth. One important part of cover crop management is termination. With smaller profit margins and constraints on time and labor, producers are looking for ways to reduce time and labor require...
A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece
2017-05-12
Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption against the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using the example of regular sampling. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies in Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
Real-Time On-Board Processing Validation of MSPI Ground Camera Images
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.
2010-01-01
The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA, including its PowerPC440 processors, we have implemented a least-squares fitting algorithm that extracts intensity and polarimetric parameters in real time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
Kearney, Peter; Li, Wen-Chin; Yu, Chung-San; Braithwaite, Graham
2018-06-26
This research investigated controllers' situation awareness by comparing COOPANS's acoustic alerts with newly designed semantic alerts. The results demonstrate that ATCOs' visual scan patterns differed significantly between the acoustic and semantic designs: ATCOs established different eye movement patterns in fixation count, fixation duration, and saccade velocity. Effective decision support systems require human-centred design with effective stimuli to direct the ATCO's attention to critical events. It is necessary to provide ATCOs with specific alerting information that reflects the nature of the critical situation in order to minimize the side-effects of startle and inattentional deafness. Consequently, the design of a semantic alert can significantly reduce ATCOs' response time, providing valuable extra time in a time-limited situation to formulate and execute resolution strategies in critical air safety events. The findings of this research indicate that the context-specified design of semantic alerts could improve ATCOs' situational awareness and significantly reduce response time in the event of Short Term Conflict Alert activation, which alerts to two aircraft having less than the required lateral or vertical separation.
Load Balancing Strategies for Multiphase Flows on Structured Grids
NASA Astrophysics Data System (ADS)
Olshefski, Kristopher; Owkes, Mark
2017-11-01
The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are only designed for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated, including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies is tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
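The abstract's core observation, that interface cells cost more per step, can be illustrated with a toy partitioner. The sketch below deals grid blocks to ranks by a greedy longest-processing-time rule; the 4-to-1 cost ratio for interface blocks is an assumption for illustration, not a measured value.

    import heapq

    def balance(block_costs, n_ranks):
        """Greedy LPT assignment: always give the next-largest block
        to the currently least-loaded rank."""
        heap = [(0.0, r, []) for r in range(n_ranks)]
        heapq.heapify(heap)
        for cost, block in sorted(block_costs, reverse=True):
            load, rank, blocks = heapq.heappop(heap)
            blocks.append(block)
            heapq.heappush(heap, (load + cost, rank, blocks))
        return sorted(heap, key=lambda t: t[1])

    # 1 unit for a bulk block, 4 for a block containing interface cells.
    costs = [(4.0, b) if b % 5 == 0 else (1.0, b) for b in range(20)]
    for load, rank, blocks in balance(costs, n_ranks=4):
        print(f"rank {rank}: load {load:.0f}, blocks {blocks}")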
Automated Sample Exchange Robots for the Structural Biology Beam Lines at the Photon Factory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiraki, Masahiko; Watanabe, Shokei; Yamada, Yusuke
2007-01-19
We are now developing automated sample exchange robots for high-throughput protein crystallographic experiments for onsite use at synchrotron beam lines. It is part of the fully automated robotics systems being developed at the Photon Factory, for the purposes of protein crystallization, monitoring crystal growth, harvesting and freezing crystals, mounting the crystals inside a hutch and for data collection. We have already installed the sample exchange robots based on the SSRL automated mounting system at our insertion device beam lines BL-5A and AR-NW12A at the Photon Factory. In order to reduce the time required for sample exchange further, a prototype of a double-tonged system was developed. As a result of preliminary experiments with double-tonged robots, the sample exchange time was successfully reduced from 70 seconds to 10 seconds, with the exception of the time required for pre-cooling and warming up the tongs.
Umbilical Connect Techniques Improvement-Technology Study
NASA Technical Reports Server (NTRS)
Valkema, Donald C.
1972-01-01
The objective of this study was to develop concepts, specifications, designs, techniques, and procedures capable of significantly reducing the time required to connect and verify umbilicals for ground services to the space shuttle. The desired goal was to reduce the current time requirement of several shifts for the Saturn 5/Apollo to an elapsed time of less than one hour to connect and verify all of the space shuttle ground service umbilicals. The study was conducted in four phases: (1) literature and hardware examination, (2) concept development, (3) concept evaluation and tradeoff analysis, and (4) selected concept design. The final product of this study was a detail design of a rise-off disconnect panel prototype test specimen for a LO2/LH2 booster (or an external oxygen/hydrogen tank for an orbiter), a detail design of a swing-arm mounted preflight umbilical carrier prototype test specimen, and a part 1 specification for the umbilical connect and verification design for the vehicles as defined in the space shuttle program.
A high throughput MATLAB program for automated force-curve processing using the AdG polymer model.
O'Connor, Samantha; Gaddis, Rebecca; Anderson, Evan; Camesano, Terri A; Burnham, Nancy A
2015-02-01
Research in understanding biofilm formation is dependent on accurate and representative measurements of the steric forces related to polymer brushes on bacterial surfaces. A MATLAB program to analyze force curves from an AFM efficiently, accurately, and with minimal user bias has been developed. The analysis is based on a modified version of the Alexander and de Gennes (AdG) polymer model, which is a function of equilibrium polymer brush length, probe radius, temperature, separation distance, and a density variable. Automating the analysis reduces the amount of time required to process 100 force curves from several days to less than 2 min. The use of this program to crop and fit force curves to the AdG model will allow researchers to ensure proper processing of large amounts of experimental data and reduce the time required for analysis and comparison of data, thereby enabling higher quality results in a shorter period of time. Copyright © 2014 Elsevier B.V. All rights reserved.
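For readers who want the flavor of the fit, the sketch below uses one textbook form of the AdG brush force under the Derjaguin approximation (sphere of radius R against a flat brush); the paper uses its own modified model, so this functional form and every parameter value should be read as illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    kT = 4.11e-21  # J, thermal energy near 298 K
    R = 20e-9      # probe radius in m (assumed)

    def adg_force(D, L0, s):
        """Steric force (N) vs separation D (m); L0 = brush length,
        s = grafting spacing; zero beyond brush contact (D > 2*L0)."""
        x = D / (2.0 * L0)
        f = (16 * np.pi * kT * R * L0**2 / (35 * s**3)) * (
            7 * x**-1.25 + 5 * x**1.75 - 12)
        return np.where(D < 2 * L0, f, 0.0)

    # Synthetic stand-in for an AFM approach curve.
    np.random.seed(0)
    D = np.linspace(2e-9, 80e-9, 200)
    data = adg_force(D, L0=30e-9, s=5e-9) * (1 + 0.05 * np.random.randn(D.size))

    popt, _ = curve_fit(adg_force, D, data, p0=(25e-9, 4e-9),
                        bounds=([5e-9, 1e-9], [100e-9, 20e-9]))
    print("fitted brush length %.1f nm, spacing %.2f nm"
          % (popt[0] * 1e9, popt[1] * 1e9))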
Coherent diffractive imaging of time-evolving samples with improved temporal resolution
Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...
2016-05-19
Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to a subsequent improvement in the temporal resolution by a factor of 2-20 times. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.
NASA Technical Reports Server (NTRS)
Harp, J. L., Jr.; Oatway, T. P.
1975-01-01
A research effort was conducted with the goal of reducing the computer time of a Navier-Stokes computer code for prediction of viscous flow fields about lifting bodies. A two-dimensional, time-dependent, laminar, transonic computer code (STOKES) was modified to incorporate a non-uniform time-step procedure. The non-uniform time-step requires updating a zone only as often as required by its own stability criteria or those of its immediate neighbors. In the uniform time-step scheme, each zone is updated as often as required by the least stable zone of the finite difference mesh. Because program variables are updated less frequently, it was expected that the non-uniform time-step would reduce execution time by a factor of five to ten. Available funding was exhausted prior to successful demonstration of the benefits to be derived from the non-uniform time-step method.
Effect of film-based versus filmless operation on the productivity of CT technologists.
Reiner, B I; Siegel, E L; Hooper, F J; Glasser, D
1998-05-01
To determine the relative time required for a technologist to perform a computed tomographic (CT) examination in a "filmless" versus a film-based environment. Time-motion studies were performed in 204 consecutive CT examinations. Images from 96 examinations were electronically transferred to a picture archiving and communication system (PACS) without being printed to film, and 108 were printed to film. The time required to obtain and electronically transfer the images or print the images to film and make the current and previous studies available to the radiologists for interpretation was recorded. The time required for a technologist to complete a CT examination was reduced by 45% with direct image transfer to the PACS compared with the time required in the film-based mode. This reduction was due to the elimination of a number of steps in the filming process, such as printing at multiple window or level settings. The use of a PACS can result in the elimination of multiple time-intensive tasks for the CT technologist, resulting in a marked reduction in examination time. This reduction can result in increased productivity and, hence, greater cost-effectiveness with filmless operation.
Computational needs survey of NASA automation and robotics missions. Volume 1: Survey and results
NASA Technical Reports Server (NTRS)
Davis, Gloria J.
1991-01-01
NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the factors contributing to this is that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. A preliminary set of advanced mission computational processing requirements of automation and robotics (A&R) systems are provided for use by NASA, industry, and academic communities. These results were obtained in an assessment of the computational needs of current projects throughout NASA. The high percent of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implementation capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, system performance levels necessary to support them, and the degree to which they are met with typical programmatic constraints. Volume one includes the survey and results. Volume two contains the appendixes.
Computational needs survey of NASA automation and robotics missions. Volume 2: Appendixes
NASA Technical Reports Server (NTRS)
Davis, Gloria J.
1991-01-01
NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the factors contributing to this is the fact that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. Here, NASA, industry and academic communities are provided with a preliminary set of advanced mission computational processing requirements of automation and robotics (A and R) systems. The results were obtained in an assessment of the computational needs of current projects throughout NASA. The high percent of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implemented capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, system performance levels necessary to support them, and the degree to which they are met with typical programmatic constraints. Here, appendixes are provided.
Integrated Component-based Data Acquisition Systems for Aerospace Test Facilities
NASA Technical Reports Server (NTRS)
Ross, Richard W.
2001-01-01
The Multi-Instrument Integrated Data Acquisition System (MIIDAS), developed by the NASA Langley Research Center, uses commercial off the shelf (COTS) products, integrated with custom software, to provide a broad range of capabilities at a low cost throughout the system's entire life cycle. MIIDAS combines data acquisition capabilities with online and post-test data reduction computations. COTS products lower purchase and maintenance costs by reducing the level of effort required to meet system requirements. Object-oriented methods are used to enhance modularity, encourage reusability, and to promote adaptability, reducing software development costs. Using only COTS products and custom software supported on multiple platforms reduces the cost of porting the system to other platforms. The post-test data reduction capabilities of MIIDAS have been installed at four aerospace testing facilities at NASA Langley Research Center. The systems installed at these facilities provide a common user interface, reducing the training time required for personnel that work across multiple facilities. The techniques employed by MIIDAS enable NASA to build a system with a lower initial purchase price and reduced sustaining maintenance costs. With MIIDAS, NASA has built a highly flexible next generation data acquisition and reduction system for aerospace test facilities that meets customer expectations.
Space station microscopy: Beyond the box
NASA Technical Reports Server (NTRS)
Hunter, N. R.; Pierson, Duane L.; Mishra, S. K.
1993-01-01
Microscopy aboard Space Station Freedom poses many unique challenges for in-flight investigations. Disciplines such as material processing, plant and animal research, human research, environmental monitoring, health care, and biological processing have diverse microscope requirements. The typical microscope not only does not meet the comprehensive needs of these varied users, but also tends to require excessive crew time. To assess user requirements, a comprehensive survey was conducted among investigators with experiments requiring microscopy. The survey examined requirements such as light sources, objectives, stages, focusing systems, eye pieces, video accessories, etc. The results of this survey and the application of an Intelligent Microscope Imaging System (IMIS) may address these demands for efficient microscopy service in space. The proposed IMIS can accommodate multiple users with varied requirements, operate in several modes, reduce crew time needed for experiments, and take maximum advantage of the restrictive data/instruction transmission environment on Freedom.
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in 40-50% saving in overall rendering time.
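The partitioning trade-off can be captured in a toy cost model: more groups shorten each group's queue of volumes and help hide I/O, but shrink per-volume parallelism. Everything below (work, overhead, and I/O constants) is an illustrative assumption, not a measurement from the paper.

    def total_time(P, G, n_volumes, work=100.0, overhead=0.5, t_io=2.0):
        p = P // G                          # processors per group
        t_render = work / p + overhead * p  # parallel render + comm overhead
        stage = max(t_render, t_io)         # I/O overlapped with rendering
        per_group = -(-n_volumes // G)      # ceil: volumes each group handles
        return t_io + per_group * stage     # startup latency + pipeline body

    P, n_volumes = 64, 48
    for G in (1, 2, 4, 8, 16, 32):
        print(f"{G:2d} groups -> estimated time {total_time(P, G, n_volumes):7.1f}")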
POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation
NASA Astrophysics Data System (ADS)
Ştefănescu, R.; Sandu, A.; Navon, I. M.
2015-08-01
This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that reduced order Karush-Kuhn-Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced-order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov-Galerkin projections. For the first time, POD, tensorial POD, and the discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two-dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov-Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
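The POD building block the abstract relies on is compact enough to sketch: snapshots are compressed by an SVD and a full-order linear operator is Galerkin-projected onto the leading modes. The shallow-water specifics, the tensorial POD, DEIM, and the Petrov-Galerkin stabilization are omitted, and all sizes are illustrative.

    import numpy as np

    n, m, r = 400, 60, 8                 # state size, snapshots, retained modes
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n, m))      # snapshot matrix, one state per column

    U, svals, _ = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :r]                        # POD basis: leading left singular vectors

    A = rng.standard_normal((n, n)) / n  # stand-in full-order linear operator
    Ar = Ur.T @ A @ Ur                   # r x r Galerkin-reduced operator

    a = Ur.T @ X[:, 0]                   # reduced initial condition
    for _ in range(10):                  # cheap explicit steps in reduced space
        a = a + 0.01 * (Ar @ a)
    x_approx = Ur @ a                    # lift the result back to full space
    print("energy captured by %d modes: %.1f%%"
          % (r, 100 * (svals[:r]**2).sum() / (svals**2).sum()))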
Time Integrating Optical Signal Processing
1981-07-01
advantage of greatly reducing the bandwidth requirement for the memory feeding the second cell. For a system composed of a PbMoO4 and a (TeO2)s Bragg cell... (TeO2)L and (TeO2)s represent, respectively, the longitudinal and slow-shear modes of TeO2. ... was assumed here... could be implemented with a 25 mm TeO2 device operated in the longitudinal mode in a hybrid system. A purely time-integrating system would require about
Code of Federal Regulations, 2012 CFR
2012-04-01
... modify the INA grantee's plan to add funds or, if required by Congressional action, to reduce the amount... add, expand, delete, or diminish any service allowable under the regulations in this part. The INA... event that further clarification or modification is required, we may extend the thirty (30) day time...
Code of Federal Regulations, 2014 CFR
2014-04-01
... modify the INA grantee's plan to add funds or, if required by Congressional action, to reduce the amount... add, expand, delete, or diminish any service allowable under the regulations in this part. The INA... event that further clarification or modification is required, we may extend the thirty (30) day time...
Code of Federal Regulations, 2013 CFR
2013-04-01
... modify the INA grantee's plan to add funds or, if required by Congressional action, to reduce the amount... add, expand, delete, or diminish any service allowable under the regulations in this part. The INA... event that further clarification or modification is required, we may extend the thirty (30) day time...
Training and Required Reading Management Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Jerel
2009-08-13
This tool manages training and required reading for groups, facilities, etc., providing abilities beyond the site training systems. TRRMTool imports training data from controlled site data sources/systems and provides greater management and reporting. Clients have been able to greatly reduce the time and effort required to manage training, achieve greater accuracy, foster individual accountability, and proactively verify the training of support personnel to maintain compliance.
Cardiovascular transition at birth: a physiological sequence.
Hooper, Stuart B; Te Pas, Arjan B; Lang, Justin; van Vonderen, Jeroen J; Roehr, Charles Christoph; Kluckow, Martin; Gill, Andrew W; Wallace, Euan M; Polglase, Graeme R
2015-05-01
The transition to newborn life at birth involves major cardiovascular changes that are triggered by lung aeration. These include a large increase in pulmonary blood flow (PBF), which is required for pulmonary gas exchange and to replace umbilical venous return as the source of preload for the left heart. Clamping the umbilical cord before PBF increases reduces venous return and preload for the left heart and thereby reduces cardiac output. Thus, if ventilation onset is delayed following cord clamping, the infant is at risk of superimposing an ischemic insult, due to low cardiac output, on top of an asphyxic insult. Much debate has centered on the timing of cord clamping at birth, focusing mainly on the potential for a time-dependent placental to infant blood transfusion. This has prompted recommendations for delayed cord clamping for a set time after birth in infants not requiring resuscitation. However, recent evidence indicates that ventilation onset before cord clamping mitigates the adverse cardiovascular consequences caused by immediate cord clamping. This indicates that the timing of cord clamping should be based on the infant's physiology rather than an arbitrary period of time and that delayed cord clamping may be of greatest benefit to apneic infants.
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function describing the dependence of the electrode's open circuit potential on the state of charge to continuous piecewise-linear regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
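A minimal sketch of the piecewise-linearization, assuming a stand-in OCV(SOC) curve and hand-placed knots rather than the paper's optimally placed ones: the nonlinear open circuit potential is replaced by a continuous piecewise-linear interpolant that is cheap enough for real-time evaluation.

    import numpy as np

    def ocv_true(soc):
        # Illustrative nonlinear OCV(SOC) curve in volts (not measured data).
        return (3.0 + 0.7 * soc + 0.15 * np.tanh(10 * (soc - 0.1))
                - 0.05 / (1.05 - soc))

    knots = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 0.95, 1.0])
    ocv_at_knots = ocv_true(knots)

    def ocv_pwl(soc):
        # Continuous piecewise-linear evaluation between the knots.
        return np.interp(soc, knots, ocv_at_knots)

    soc = np.linspace(0.0, 1.0, 1001)
    err = np.max(np.abs(ocv_pwl(soc) - ocv_true(soc)))
    print(f"max PWL approximation error: {1000 * err:.1f} mV")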
Automated Tendering and Purchasing.
ERIC Educational Resources Information Center
DeZorzi, James M.
1980-01-01
The Middlesex County Board of Education in Hyde Park (Ontario) has developed an automated tendering/purchasing system for ordering standard items that has reduced by 80 percent the time required for tendering, evaluating, awarding, and ordering items. (Author/MLF)
Adaptive Information Dissemination Control to Provide Diffdelay for the Internet of Things.
Liu, Xiao; Liu, Anfeng; Huang, Changqin
2017-01-12
Applications running on the Internet of Things, such as the Wireless Sensor and Actuator Networks (WSANs) platform, generally have different quality of service (QoS) requirements. For urgent events, it is crucial that information be reported to the actuator quickly, and the communication cost is the second factor. However, for interesting events, communication costs, network lifetime and time all become important factors. In most situations, these different requirements cannot be satisfied simultaneously. In this paper, an adaptive communication control based on a differentiated delay (ACCDS) scheme is proposed to resolve this conflict. In an ACCDS, source nodes of events adaptively send various searching actuators routings (SARs) based on the degree of sensitivity to delay while maintaining the network lifetime. For a delay-sensitive event, the source node sends a large number of SARs to actuators to identify and inform the actuators in an extremely short time; thus, action can be taken quickly but at higher communication costs. For delay-insensitive events, the source node sends fewer SARs to reduce communication costs and improve network lifetime. Therefore, an ACCDS can meet the QoS requirements of different events using a differentiated delay framework. Theoretical analysis and simulation results indicate that an ACCDS provides delay- and communication-cost-differentiated services; an ACCDS scheme can reduce the network delay by 11.111%-53.684% for a delay-sensitive event and reduce the communication costs by 5%-22.308% for interesting events, while reducing the network lifetime by about 28.713%. PMID:28085097
Accelerated Comparative Fatigue Strength Testing of Belt Adhesive Joints
NASA Astrophysics Data System (ADS)
Bajda, Miroslaw; Blazej, Ryszard; Jurdziak, Leszek
2017-12-01
Belt joints are the weakest link in the serial structure that creates an endless loop of spliced belt segments. This reflects not only the lower strength of adhesive joints in textile belts compared to vulcanized splices, but also the replacement of traditional glues with more ecological ones that have different strength parameters. The result is lowered durability of adhesive joints, which in underground coal mines is nearly half the operating time of the belts themselves. Vulcanized splices require high precision in performance, they need a long time for the friction mixture to cross-link and, above all, they require specialized equipment (a vulcanization press) which is not readily available and often takes much time to be delivered underground, which means reduced mining output or even downtime. All this reduces the reliability and durability of adhesive joints. In addition, due to consolidation on the Polish coal market, mines are joined into large economic units serviced by a smaller number of processing plants. The consequence is longer transport routes downstream and increased reliability requirements. A greater number of conveyors in the chain reduces reliability of supply and increases production losses. With the high fixed costs of underground mines, a reduction in mining output is reflected in an increase in unit costs, and at low coal prices on the market this can mean substantial losses for mines. The paper describes a comparative study of the fatigue strength of shortened samples of adhesive joints, conducted to compare many different variants of joints (various adhesives and materials). Shortened samples were exposed to accelerated fatigue in the usually long-lasting dynamic studies, allowing more variants to be tested at the same time. High correlation between the results obtained for shortened (100 mm) and traditional full-length (3×250 mm) samples renders accelerated tests possible.
Kelly, Elizabeth W; Kelly, Jonathan D; Hiestand, Brian; Wells-Kiser, Kathy; Starling, Stephanie; Hoekstra, James W
2010-01-01
Rapid reperfusion in patients with ST-elevation myocardial infarction (STEMI) is associated with lower mortality. Reduction in door-to-balloon (D2B) time for percutaneous coronary intervention requires multidisciplinary cooperation, process analysis, and quality improvement methodology. Six Sigma methodology was used to reduce D2B times in STEMI patients presenting to a tertiary care center. Specific steps in STEMI care were determined, time goals were established, and processes were changed to reduce each step's duration. Outcomes were tracked, and timely feedback was given to providers. After process analysis and implementation of improvements, mean D2B times decreased from 128 to 90 minutes. Improvement has been sustained; as of June 2010, the mean D2B was 56 minutes, with 100% of patients meeting the 90-minute window for the year. Six Sigma methodology and immediate provider feedback result in significant reductions in D2B times. The lessons learned may be extrapolated to other primary percutaneous coronary intervention centers. Copyright © 2010 Elsevier Inc. All rights reserved.
Effects of G, a Growth Regulator from Eucalyptus grandis, on Photosynthesis
Sharkey, Thomas D.; Stevenson, Gay F.; Paton, Dugald M.
1982-01-01
A growth regulator (G; 4-ethyl-1-hydroxy-4,8,8,10,10 pentamethyl-7,9-dioxo-2,3 dioxyabicyclo (4.4.0) decene-5) from Eucalyptus grandis (Maiden) reduced stomatal conductance and also photosynthetic capacity when fed through the transpiration stream of detached leaves. The concentration of G required for this effect was high (10^-4 molar), but the amount of G taken up (dose) was below the level which has previously been found in E. grandis leaves. Similar effects were observed in detached leaves of Xanthium strumarium L. though almost 10 times more G was required. G reduced CO2-dependent O2 evolution from isolated cells of X. strumarium. In spinach (Spinacia oleracea L.) chloroplasts, electron transport through photosystem II was reduced by G. It is proposed that G affects stomatal conductance and photosynthesis by reducing photosystem II activity in both the guard cell chloroplasts and mesophyll cell chloroplasts. PMID:16662322
Detomidine reduces isoflurane anesthetic requirement (MAC) in horses.
Steffey, Eugene P; Pascoe, Peter J
2002-10-01
To quantitate the dose- and time-related magnitude of the anesthetic sparing effect of, and selected physiological responses to, detomidine during isoflurane anesthesia in horses. Randomized cross-over study. Three healthy, young adult horses weighing 485 ± 14 kg. Horses were anesthetized on two occasions to determine the minimum alveolar concentration (MAC) of isoflurane in O2 and then to measure the anesthetic sparing effect (time-related MAC reduction) following IV detomidine (0.03 and 0.06 mg kg^-1). Selected common measures of cardiopulmonary function, blood glucose and urinary output were also recorded. Isoflurane MAC was 1.44 ± 0.07% (mean ± SEM). This was reduced by 42.8 ± 5.4% and 44.8 ± 3.0% at 83 ± 23 and 125 ± 36 minutes, respectively, following 0.03 and 0.06 mg kg^-1 detomidine. The MAC reduction was detomidine dose- and time-dependent. There was a tendency for mild cardiovascular and respiratory depression, especially following the higher detomidine dose. Detomidine increased both blood glucose and urine flow; the magnitude of these changes was time- and dose-dependent. CONCLUSIONS: Detomidine reduces anesthetic requirement for isoflurane and increases blood glucose concentration and urine flow in horses. These changes were dose- and time-related. The results imply potent anesthetic sparing actions by detomidine. The detomidine-related increased urine flow should be considered in designing anesthetic protocols for individual horses. Copyright © 2002 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. Published by Elsevier Ltd. All rights reserved.
Optimizing process and equipment efficiency using integrated methods
NASA Astrophysics Data System (ADS)
D'Elia, Michael J.; Alfonso, Ted F.
1996-09-01
The semiconductor manufacturing industry is continually riding the edge of technology as it tries to push toward higher design limits. Mature fabs must cut operating costs while increasing productivity to remain profitable and cannot justify large capital expenditures to improve productivity. Thus, they must push current tool production capabilities to cut manufacturing costs and remain viable. Working to continuously improve mature production methods requires innovation. Furthermore, testing and successful implementation of these ideas into modern production environments require both supporting technical data and commitment from those working with the process daily. At AMD, natural work groups (NWGs) composed of operators, technicians, engineers, and supervisors collaborate to foster innovative thinking and secure commitment. Recently, an AMD NWG improved equipment cycle time on the Genus tungsten silicide (WSi) deposition system. The team used total productive manufacturing (TPM) to identify areas for process improvement. Improved in-line equipment monitoring was achieved by constructing a real-time overall equipment effectiveness (OEE) calculator that tracked equipment down, idle, qualification, and production times. In-line monitoring results indicated that qualification time associated with slow Inspex turn-around time and machine downtime associated with manual cleans contributed greatly to reduced availability. Qualification time was reduced by 75% by implementing a new Inspex monitor pre-staging technique. Downtime associated with manual cleans was reduced by implementing an in-situ plasma etch back to extend the time between manual cleans. A designed experiment was used to optimize the process. The time between 18-hour manual cleans has been extended from every 250 cycles to every 1500 cycles. Moreover, defect density realized a 3X improvement. Overall, the team achieved a 35% increase in tool availability. This paper details the above strategies and accomplishments.
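The OEE bookkeeping the team used lends itself to a few lines of code. A minimal sketch, assuming the availability leg comes from the four tracked time buckets; the performance and quality factors and all numbers are illustrative assumptions.

    def oee(down_h, idle_h, qual_h, prod_h, performance=0.95, quality=0.99):
        total = down_h + idle_h + qual_h + prod_h
        availability = prod_h / total      # fraction of time actually producing
        return availability * performance * quality, availability

    score, avail = oee(down_h=20, idle_h=30, qual_h=25, prod_h=93)
    print(f"availability {avail:.1%}, OEE {score:.1%}")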
Convergence Acceleration and Documentation of CFD Codes for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Marquart, Jed E.
2005-01-01
The development and analysis of turbomachinery components for industrial and aerospace applications has been greatly enhanced in recent years through the advent of computational fluid dynamics (CFD) codes and techniques. Although the use of this technology has greatly reduced the time required to perform analysis and design, there still remains much room for improvement in the process. In particular, there is a steep learning curve associated with most turbomachinery CFD codes, and the computation times need to be reduced in order to facilitate their integration into standard work processes. Two turbomachinery codes have recently been developed by Dr. Daniel Dorney (MSFC) and Dr. Douglas Sondak (Boston University). These codes are entitled Aardvark (for 2-D and quasi 3-D simulations) and Phantom (for 3-D simulations). The codes utilize the General Equation Set (GES), structured grid methodology, and overset O- and H-grids. The codes have been used with success by Drs. Dorney and Sondak, as well as others within the turbomachinery community, to analyze engine components and other geometries. One of the primary objectives of this study was to establish a set of parametric input values which will enhance convergence rates for steady state simulations, as well as reduce the runtime required for unsteady cases. The goal is to reduce the turnaround time for CFD simulations, thus permitting more design parametrics to be run within a given time period. In addition, other code enhancements to reduce runtimes were investigated and implemented. The other primary goal of the study was to develop enhanced user's manuals for Aardvark and Phantom. These manuals are intended to answer most questions for new users, as well as provide valuable detailed information for the experienced user. The existence of detailed user's manuals will enable new users to become proficient with the codes, as well as reducing the dependency of new users on the code authors. In order to achieve the objectives listed, the following tasks were accomplished: 1) Parametric Study of Preconditioning Parameters and Other Code Inputs; 2) Code Modifications to Reduce Runtimes; 3) Investigation of Compiler Options to Reduce Code Runtime; and 4) Development/Enhancement of User's Manuals for Aardvark and Phantom.
Knowledge base rule partitioning design for CLIPS
NASA Technical Reports Server (NTRS)
Mainardi, Joseph D.; Szatkowski, G. P.
1990-01-01
This paper describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance using the CLIPS AI shell when it contains large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The Expert System advanced development project (ADP-2302) main objective is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window, in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impacts during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification & validation task. Partitioning the KB reduces rule interaction complexity overall. Reduced interaction simplifies the V&V testing necessary by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply for that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests for real-time performance for specific machines under R/T operating systems have not been completed but are planned as part of the analysis process to validate the design.
Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Beier, Hope T.
2016-03-01
Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ineffective for capturing rapid events: at most one photon is recorded per laser pulse, which imposes long acquisition times and requires low fluorescence emission efficiency to avoid biasing measurements towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and fluorescence lifetime exponential decay estimation. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low photon count data. Such a technique reduces the number of photons required for accurate component estimation if lifetime values are known, as for commercial fluorescent dyes and FRET experiments, and improves imaging speed 10-fold.
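The binning-plus-fitting step evaluated in the paper looks roughly like the sketch below: native TCSPC histogram bins are summed down to 42 bins and a double-exponential decay is fit by nonlinear least squares. The IRF convolution is omitted and all lifetimes, weights, and count levels are illustrative.

    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(t, amp, a, tau1, tau2):
        return amp * (a * np.exp(-t / tau1) + (1 - a) * np.exp(-t / tau2))

    np.random.seed(0)
    t = np.linspace(0, 10, 256)              # ns; 256 native time bins
    counts = np.random.poisson(biexp(t, 200, 0.6, 0.4, 2.5))  # low-count data

    def rebin(x, n_bins):
        per = len(x) // n_bins
        return x[: per * n_bins].reshape(n_bins, per).sum(axis=1)

    n_bins = 42                              # temporal binning to 42 bins
    tb = rebin(t, n_bins) / (len(t) // n_bins)   # mean time of each new bin
    cb = rebin(counts, n_bins).astype(float)

    popt, _ = curve_fit(biexp, tb, cb, p0=(cb[0], 0.5, 0.5, 2.0))
    print("fit: a=%.2f, tau1=%.2f ns, tau2=%.2f ns" % tuple(popt[1:]))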
Forward collision warning based on kernelized correlation filters
NASA Astrophysics Data System (ADS)
Pu, Jinchuan; Liu, Jun; Zhao, Yong
2017-07-01
A vehicle detection and tracking system is one of the indispensable methods to reduce the occurrence of traffic accidents. The nearest vehicle is the most likely to cause harm, so this paper focuses on the nearest vehicle in the region of interest (ROI). For such a system, high accuracy, real-time operation, and intelligence are the basic requirements. In this paper, we build a system that combines the KCF tracking algorithm with Haar-AdaBoost detection. The KCF algorithm reduces computation time and increases speed through cyclic shifts and diagonalization, satisfying the real-time requirement. Haar features likewise offer simple operation and high detection speed. The combination of these two algorithms contributes to an obvious improvement in the system running rate compared with previous works. The detection result of the Haar-AdaBoost classifier provides the initial value for the KCF algorithm, which removes the KCF algorithm's need for manual vehicle marking in the initialization phase and makes the system more scientific and more intelligent. Haar detection and KCF tracking with Histogram of Oriented Gradients (HOG) features ensure the accuracy of the system. We evaluate the performance of the framework on a self-collected dataset. The experimental results demonstrate that the proposed method is robust and real-time. The algorithm adapts effectively to illumination variation, meeting the detection and tracking requirements even at night, which is an improvement compared with previous work.
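A rough shape of that detect-then-track hand-off in OpenCV's Python API is sketched below. The cascade file name and video path are placeholders (OpenCV does not ship a standard car cascade), the ROI logic is simplified to the lower image half, and newer OpenCV builds expose the tracker as cv2.legacy.TrackerKCF_create.

    import cv2

    detector = cv2.CascadeClassifier("haarcascade_car.xml")  # assumed cascade
    tracker = None

    cap = cv2.VideoCapture("dashcam.mp4")                    # assumed input
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if tracker is None:
            boxes = detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=4)
            # Keep detections in the lower half (the ROI) and take the
            # lowest box in the image: the nearest vehicle ahead.
            boxes = [b for b in boxes if b[1] > frame.shape[0] // 2]
            if boxes:
                x, y, w, h = max(boxes, key=lambda b: b[1])
                tracker = cv2.TrackerKCF_create()            # opencv-contrib
                tracker.init(frame, (int(x), int(y), int(w), int(h)))
        else:
            ok, box = tracker.update(frame)
            if not ok:
                tracker = None          # target lost: fall back to detection
    cap.release()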
USDA-ARS?s Scientific Manuscript database
Time-temperature control of fresh-cut produce at 41 °F (5 ºC) or less can significantly reduce the growth of human pathogens. Since 2009, the FDA Food Code has required that packaged ready-to-eat leafy greens be kept at 41 °F (5 ºC) or lower to minimize the potential of pathogen proliferation in the...
Reducing the content of alloying elements in high-speed steel during heating in salt baths
NASA Astrophysics Data System (ADS)
Kandalovskii, I. P.; Kirillov, F. F.; Dobler, V. I.
1985-07-01
A decrease in molybdenum content occurs in the surface layers during the quench heating of a tool formed from high-speed tungsten-molybdenum steel in a barium chloride salt bath after the required heating time, while a decrease in the tungsten content takes place with more prolonged hold times.
USDA-ARS?s Scientific Manuscript database
Plant disease management decision aids typically require inputs of weather elements such as air temperature. Whereas many disease models are created based on weather elements at the crop canopy, and with relatively fine time resolution, the decision aids commonly are implemented with hourly weather...
Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data
NASA Technical Reports Server (NTRS)
Lalime, Aimee L.; Johnson, Marty E.; Rizzi, Stephen A. (Technical Monitor)
2002-01-01
Binaural or "virtual acoustic" representation has been proposed as a method of analyzing acoustic and vibroacoustic data. Unfortunately, this binaural representation can require extensive computer power to apply the Head Related Transfer Functions (HRTFs) to a large number of sources, as with a vibrating structure. This work focuses on reducing the number of real-time computations required in this binaural analysis through the use of Singular Value Decomposition (SVD) and Equivalent Source Reduction (ESR). The SVD method reduces the complexity of the HRTF computations by breaking the HRTFs into dominant singular values (and vectors). The ESR method reduces the number of sources to be analyzed in real-time computation by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. It is shown that the effectiveness of the SVD and ESR methods improves as the complexity of the source increases. In addition, preliminary auralization tests have shown that the results from both the SVD and ESR methods are indistinguishable from the results found with the exhaustive method.
Simulation and analysis of support hardware for multiple instruction rollback
NASA Technical Reports Server (NTRS)
Alewine, Neil J.
1992-01-01
Recently, a compiler-assisted approach to multiple instruction retry was developed. In this scheme, a read buffer of size 2N, where N represents the maximum instruction rollback distance, is used to resolve one type of data hazard. This hardware support helps to reduce code growth, compilation time, and some of the performance impacts associated with hazard resolution. The 2N read buffer size requirement of the compiler-assisted approach is worst case, assuring data redundancy for all data required but also providing some unnecessary redundancy. By adding extra bits in the operand field for source 1 and source 2 it becomes possible to design the read buffer to save only those values required, thus reducing the read buffer size requirement. This study measures the effect on performance of a DECstation 3100 running 10 application programs using 6 read buffer configurations at varying read buffer sizes.
Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V
2003-12-15
Reducing water to hydrogen gas by zinc or uranium metal for determining D/H ratio is both tedious and time consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "Multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method to that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained from use of the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to those of similar methods. The data demonstrated that the Zn-reduction method could be replaced by the Pt-equilibration method when TEE was estimated using the "Multi-Point" technique. Furthermore, D equilibration time was significantly reduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu
2016-02-15
A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
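A hedged illustration of the BDF approximation of a time derivative mentioned above, using the standard second-order (BDF2) coefficients; the smooth test function stands in for the neutron source history and is not from the paper:

```python
import numpy as np

def bdf2_derivative(f_n, f_nm1, f_nm2, dt):
    """BDF2: f'(t_n) ~ (3 f_n - 4 f_{n-1} + f_{n-2}) / (2 dt), O(dt^2) accurate."""
    return (3.0 * f_n - 4.0 * f_nm1 + f_nm2) / (2.0 * dt)

dt = 1e-3
f = np.exp(2.0 * np.array([0.0, dt, 2 * dt]))   # stand-in "source" history
approx = bdf2_derivative(f[2], f[1], f[0], dt)
exact = 2.0 * f[2]                              # d/dt exp(2t) = 2 exp(2t)
print(approx, exact)                            # agree to ~dt^2
```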
Differences in care burden of patients undergoing dialysis in different centres in the Netherlands.
de Kleijn, Ria; Uyl-de Groot, Carin; Hagen, Chris; Diepenbroek, Adry; Pasker-de Jong, Pieternel; Ter Wee, Piet
2017-06-01
A classification model was developed to simplify planning of personnel at dialysis centres. This model predicted the care burden based on dialysis characteristics. However, patient characteristics and different dialysis centre categories might also influence the amount of care time required. To determine if there is a difference in care burden between different categories of dialysis centres and if specific patient characteristics predict nursing time needed for patient treatment. An observational study. Two hundred and forty-two patients from 12 dialysis centres. In 12 dialysis centres, nurses filled out the classification list per patient and completed a form with patient characteristics. Nephrologists filled out the Charlson Comorbidity Index. Independent observers clocked the time nurses spent on separate steps of the dialysis for each patient. Dialysis centres were categorised into four types. Data were analysed using regression models. In contrast to other dialysis centres, academic centres needed 14 minutes more care time per patient per dialysis treatment than predicted in the classification model. No patient characteristics were found that influenced this difference. The only patient characteristic that predicted the time required was gender, with more time required to treat women. Gender did not affect the difference between measured and predicted care time. Differences in care burden were observed between academic and other centres, with more time required for treatment in academic centres. Contribution of patient characteristics to the time difference was minimal. The only patient characteristics that predicted care time were previous transplantation, which reduced the time required, and gender, with women requiring more care time. © 2017 European Dialysis and Transplant Nurses Association/European Renal Care Association.
Epicenter location by analysis for interictal spikes
NASA Technical Reports Server (NTRS)
Hand, C.
2001-01-01
The MEG recording is a quick and painless process that requires no surgery. This approach has the potential to save time, reduce patient discomfort, and eliminate a painful and potentially dangerous surgical step in the treatment procedure.
Wolfs, Vincent; Villazon, Mauricio Florencio; Willems, Patrick
2013-01-01
Applications such as real-time control, uncertainty analysis and optimization require an extensive number of model iterations. Full hydrodynamic sewer models are not sufficient for these applications due to the excessive computation time. Simplifications are consequently required. A lumped conceptual modelling approach results in a much faster calculation. The process of identifying and calibrating the conceptual model structure could, however, be time-consuming. Moreover, many conceptual models lack accuracy, or do not account for backwater effects. To overcome these problems, a modelling methodology was developed which is suited for semi-automatic calibration. The methodology is tested for the sewer system of the city of Geel in the Grote Nete river basin in Belgium, using both synthetic design storm events and long time series of rainfall input. A MATLAB/Simulink(®) tool was developed to guide the modeller through the step-wise model construction, reducing significantly the time required for the conceptual modelling process.
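As a hedged sketch of the kind of lumped conceptual element such a methodology calibrates, the snippet below routes an inflow pulse through a single linear reservoir with one parameter k; the real tool chains many such elements in MATLAB/Simulink and calibrates them semi-automatically, and the inflow series and k value here are illustrative assumptions:

```python
import numpy as np

def linear_reservoir(inflow, k, dt=1.0):
    """One-parameter conceptual element: dS/dt = I - Q with Q = S / k."""
    storage, out = 0.0, []
    for i in inflow:
        q = storage / k
        storage += dt * (i - q)                  # explicit Euler step
        out.append(q)
    return np.array(out)

inflow = np.concatenate([np.zeros(5), 2.0 * np.ones(10), np.zeros(20)])  # storm pulse
print(linear_reservoir(inflow, k=6.0).round(2))  # attenuated, delayed outflow
```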
Research and application of embedded real-time operating system
NASA Astrophysics Data System (ADS)
Zhang, Bo
2013-03-01
In this paper, based on an analysis of existing embedded real-time operating systems, the architecture of an operating system is designed and implemented. The experimental results show that the design fully complies with the requirements of an embedded real-time operating system and achieves the goals of reducing the complexity of embedded software design and improving maintainability, reliability, and flexibility. This design therefore has high practical value.
Water induced sediment levitation enhances downslope transport on Mars.
Raack, Jan; Conway, Susan J; Herny, Clémence; Balme, Matthew R; Carpy, Sabrina; Patel, Manish R
2017-10-27
On Mars, locally warm surface temperatures (~293 K) occur, leading to the possibility of (transient) liquid water on the surface. However, water exposed to the martian atmosphere will boil, and the sediment transport capacity of such unstable water is not well understood. Here, we present laboratory studies of a newly recognized transport mechanism: "levitation" of saturated sediment bodies on a cushion of vapor released by boiling. Sediment transport where this mechanism is active is about nine times greater than without this effect, reducing the amount of water required to transport comparable sediment volumes by nearly an order of magnitude. Our calculations show that the effect of levitation could persist up to ~48 times longer under reduced martian gravity. Sediment levitation must therefore be considered when evaluating the formation of recent and present-day martian mass wasting features, as much less water may be required to form such features than previously thought.
A functional description of the Buffered Telemetry Demodulator (BTD)
NASA Technical Reports Server (NTRS)
Tsou, H.; Shah, B.; Lee, R.; Hinedi, S.
1993-01-01
This article gives a functional description of the buffered telemetry demodulator (BTD), which operates on recorded digital samples to extract the symbols from the received signal. The key advantages of the BTD are as follows: (1) its ability to reprocess the signal to reduce acquisition time; (2) its ability to use future information about the signal and to perform smoothing on past samples; and (3) its minimum transmission bandwidth requirement, as each subcarrier harmonic is processed individually. The first application of the BTD would be the Galileo S-band contingency mission, where the signal is so weak that reprocessing to reduce the acquisition time is crucial. Moreover, in the event of employing antenna arraying with full spectrum combining, only the subcarrier harmonics need to be transmitted between sites, resulting in a significant reduction in data rate transmission requirements. Software implementation of the BTD is described for various general-purpose computers.
Cleft Lip Repair, Nasoalveolar Molding, and Primary Cleft Rhinoplasty.
Bhuskute, Aditi A; Tollefson, Travis T
2016-11-01
Cleft lip and palate are the fourth most common congenital birth defect. Management requires multidisciplinary care owing to the effects of these clefts on midface growth, dentition, Eustachian tube function, and lip and nasal cosmesis. Repair requires planning, but can be performed systematically to reduce variability of outcomes. The use of primary rhinoplasty at the time of cleft lip repair can improve nose symmetry and reduce nasal deformity. Nasoalveolar molding, ranging from lip taping to preoperative infant orthopedics, has played an important role in improving the functional and cosmetic results of cleft lip repair. Copyright © 2016 Elsevier Inc. All rights reserved.
Hilsabeck, T. J.; Frenje, J. A.; Hares, J. D.; ...
2016-08-02
Here we present a time-resolved detector concept for the magnetic recoil spectrometer for time-resolved measurements of the NIF neutron spectrum. The measurement is challenging due to the time spreading of the recoil protons (or deuterons) as they transit an energy dispersing magnet system. Ions arrive at the focal plane of the magnetic spectrometer over an interval of tens of nanoseconds. We seek to measure the time-resolved neutron spectrum with 20 ps precision by manipulating an electron signal derived from the ions. A stretch-compress scheme is employed to remove transit time skewing while simultaneously reducing the bandwidth requirements for signal recording. Simulation results are presented along with design concepts for structures capable of establishing the required electromagnetic fields.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the INA grantee's plan to add funds or, if required by Congressional action, to reduce the amount of funds available for expenditure. (b) The INA grantee may request approval to modify its plan to add... event that further clarification or modification is required, we may extend the thirty (30) day time...
Code of Federal Regulations, 2011 CFR
2011-04-01
... the INA grantee's plan to add funds or, if required by Congressional action, to reduce the amount of funds available for expenditure. (b) The INA grantee may request approval to modify its plan to add... event that further clarification or modification is required, we may extend the thirty (30) day time...
Morita, Yasuyuki; Yamashita, Takahiro; Toku, Toku; Ju, Yang
2018-01-01
There is a need for efficient stem cell-to-tenocyte differentiation techniques for tendon tissue engineering. More than 1 week is required for tenogenic differentiation with chemical stimuli, including co-culturing. Research has begun to examine the utility of mechanical stimuli, which reduce the differentiation time to several days. However, the precise length of time required to differentiate human bone marrow-derived mesenchymal stem cells (hBMSCs) into tenocytes has not been clarified. Understanding the precise time required is important for future tissue engineering projects. Therefore, in this study, a method was developed to more precisely determine the length of time required to differentiate hBMSCs into tenocytes under a cyclic stretching stimulus. First, it had to be determined how stretching stimulation affected the cells. Microgrooved culture membranes were used to suppress cell orientation behavior. Then, only cells oriented parallel to the microgrooves were selected and evaluated for protein synthesis levels indicative of differentiation. The results revealed that growing cells on the microgrooved membrane and selecting optimally-oriented cells for measurement improved the accuracy of the differentiation evaluation, and that hBMSCs differentiated into tenocytes in approximately 10 h. The differentiation time corresponded to the time required for cellular cytoskeleton reorganization and cellular morphology alterations. This suggests that cells, when subjected to mechanical stimulus, express mRNAs and proteins for both cytoskeleton reorganization and differentiation.
Manufacturing Enhancement through Reduction of Cycle Time using Different Lean Techniques
NASA Astrophysics Data System (ADS)
Suganthini Rekha, R.; Periyasamy, P.; Nallusamy, S.
2017-08-01
In modern manufacturing systems the most important parameters in a production line are work in process, TAKT time, and line balancing. In this article lean tools and techniques were implemented to reduce the cycle time. The aim is to enhance the productivity of the water pump pipe line by identifying the bottleneck stations and non-value-added activities. From the initial time study the bottleneck processes were identified, and the necessary expanding processes were then identified for each bottleneck process. Subsequently, improvement actions were established and implemented using different lean tools such as value stream mapping, 5S, and line balancing. The current-state value stream map was developed to describe the existing status and to identify the various problem areas. 5S was used to implement the steps to reduce the process cycle time and unnecessary movements of man and material. The improvement activities were implemented with the required suggestions, and the future-state value stream map was developed. From the results it was concluded that the total cycle time was reduced by about 290.41 seconds and the customer demand met increased by about 760 units.
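A hedged worked example of the TAKT-time and line-balancing bookkeeping such a study rests on (TAKT time = available working time / customer demand); the shift length, demand, and station cycle times below are illustrative assumptions, not the paper's measured values:

```python
available_time_s = 8 * 3600 - 2 * 15 * 60      # one shift minus two 15-min breaks
daily_demand = 400                             # units/day (assumed)
takt = available_time_s / daily_demand         # pace the line must sustain (s/unit)

station_times = [58, 64, 71, 49]               # observed cycle time per station (s)
bottleneck = max(station_times)                # line runs at its slowest station
balance = sum(station_times) / (len(station_times) * bottleneck)

print(f"takt = {takt:.1f} s/unit, bottleneck = {bottleneck} s")
print(f"line balance efficiency = {balance:.0%}")  # the rest is waiting waste
# bottleneck > takt here, so demand is missed until the slowest station improves
```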
Effect of the conditions of sintering of sodium-reduced tantalum powders on their characteristics
NASA Astrophysics Data System (ADS)
Prokhorova, T. Yu.; Orlov, V. M.; Miroshnichenko, M. N.; Kolosov, V. N.
2014-07-01
The effect of the granulation and heat treatment of sodium-reduced tantalum powders with a specific surface area of 2.5-3.6 m2/g on the bulk density, the powder flow time, and the specific surface area of the powders and the specific capacitance of the anodes made of them is studied. It is shown that heat treatment of a granulated powder in vacuum at 1100°C or in a mixture with magnesium at 800°C makes it possible to achieve the required powder flow time.
He, Wei; Wu, Mengmeng; Huang, Shiqing; Yin, Lifang
2015-01-15
Repaglinide (RG) is an efficient antihyperglycemic drug; however, due to its short half-life, patients are required to take the marketed products several times a day, which compromises the therapeutic effects. The present study was conducted to develop a hydrophilic sustained-release matrix tablet for RG with the aims of prolonging its action time, reducing the required number of administrations and the side effects, and improving patient adherence. The matrix tablets were fabricated by a direct compression method, and the optimized formulation was obtained by screening the factors that affected drug release. Moreover, studies of the pharmacokinetics and hypoglycemic activity, as measured by glucose assay kits, were performed in dogs. A sustained drug release profile over 10 h and a reduced influence of medium pH on release were achieved with the optimized formulation; moreover, the in vivo performance of the extended-release formulation was also examined, and better absorption, a one-fold decrease in Cmax, a two-fold increase in Tmax, and a prolonged hypoglycemic effect compared to the marketed product were observed. In conclusion, sustained RG release and prolonged action were observed with the present matrix tablets, which therefore provide a promising formulation for T2D patients who require long-term treatment. Copyright © 2014 Elsevier B.V. All rights reserved.
Steroids and statins: an old and a new anti-inflammatory strategy compared.
Vukovic, Petar M; Maravic-Stojkovic, Vera R; Peric, Miodrag S; Jovic, Miomir Dj; Cirkovic, Milan V; Gradinac, Sinisa Dj; Djukanovic, Bosko P; Milojevic, Predrag S
2011-01-01
This study compared the anti-inflammatory effects of methylprednisolone (MP) and atorvastatin and analysed their influence on clinical variables in patients undergoing coronary revascularization. Ninety patients with compromised left ventricular ejection fraction (≤30%) undergoing elective coronary surgery were equally randomized to one of three groups: a statin group, treated with atorvastatin (20 mg/day) for 3 weeks before surgery; a methylprednisolone group, given a single shot of methylprednisolone (10 mg/kg); and a control group. Postoperative IL-6 was higher in the control group than in the methylprednisolone and statin groups (p<0.01). IL-6 was higher in the statin-treated patients (p<0.05 versus methylprednisolone). Administration of methylprednisolone as well as statin treatment increased postoperative cardiac index and left ventricular stroke work index, decreased the postoperative atrial fibrillation rate, and reduced ICU stay (p<0.05 versus control). The number of patients requiring inotropic support was lower in the methylprednisolone group than in the other two groups (p<0.01). Tracheal intubation time was reduced in patients who received methylprednisolone (p<0.01 versus control). Preoperative administration of either methylprednisolone or atorvastatin reduced pro-inflammatory cytokine release, improved haemodynamics, decreased the postoperative atrial fibrillation rate, and reduced ICU stay in patients with significantly impaired cardiac function undergoing coronary revascularization. Treatment with methylprednisolone was associated with less need for inotropic support and reduced mechanical ventilation time.
Reduction of intra-hospital transport time using the easy tube arrange device.
Joo, Ki Hyuk; Yoo, In Sool; Lee, Jinwoong; Kim, Seung Whan; Ryu, Seung; You, Yeon Ho; Cho, Yong Chul; Jeong, Woon Jun; Ahn, Byung Jun; Cho, Sung Uk
2016-06-01
Critically ill patients sometimes require transport to another location. Longer intra-hospital transport time increases the risk of hemodynamic instability and associated complications; therefore, reducing intra-hospital transport time is critical. Our objective was to evaluate whether a new device, the easy tube arrange device (ETAD), has the potential to reduce the intra-hospital transport time of critically ill patients. We enrolled volunteers for this prospective randomized controlled study. Each participant arranged four, five, and six fluid tubings, monitoring lines, and therapeutic equipment on a cardiopulmonary resuscitation training mannequin (Resusci Anne). The time required to arrange the fluid tubings for intra-hospital transport using two different methods was evaluated. The median time to arrange four, five, and six fluid tubings was 86.00 (76.50 to 98.50), 96.00 (86.00 to 113.00), and 115.50 (93.00 to 130.75) seconds, respectively, using the conventional method and 60.50 (52.50 to 72.75), 69.00 (57.75 to 80.80), and 72.50 (64.75 to 90.50) seconds using the ETAD (all P<0.001). The total duration (for preparing the basic setting and organizing before and after the transport) was 280.00 (268.75 to 293.00), 315.50 (304.75 to 330.75), and 338.00 (319.50 to 360.25) seconds for four, five, and six fluid tubings, respectively, using the conventional method and 274.50 (261.75 to 289.25), 288.00 (271.75 to 298.25), and 301.00 (284.50 to 310.75) seconds, respectively, using the new method (P=0.024, P<0.001, and P<0.001, respectively). The ETAD was convenient to use, reduced the time to arrange medical tubings, and is expected to assist medical staff during intra-hospital transport.
Climate change and sustainable food production.
Smith, Pete; Gregory, Peter J
2013-02-01
One of the greatest challenges we face in the twenty-first century is to sustainably feed nine to ten billion people by 2050 while at the same time reducing environmental impact (e.g. greenhouse gas (GHG) emissions, biodiversity loss, land use change and loss of ecosystem services). To this end, food security must be delivered. According to the United Nations definition, 'food security exists when all people, at all times, have physical and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life'. At the same time as delivering food security, we must also reduce the environmental impact of food production. Future climate change will make an impact upon food production. On the other hand, agriculture contributes up to about 30% of the anthropogenic GHG emissions that drive climate change. The aim of this review is to outline some of the likely impacts of climate change on agriculture, the mitigation measures available within agriculture to reduce GHG emissions and outlines the very significant challenge of feeding nine to ten billion people sustainably under a future climate, with reduced emissions of GHG. Each challenge is in itself enormous, requiring solutions that co-deliver on all aspects. We conclude that the status quo is not an option, and tinkering with the current production systems is unlikely to deliver the food and ecosystems services we need in the future; radical changes in production and consumption are likely to be required over the coming decades.
Design of Plant Gas Exchange Experiments in a Variable Pressure Growth Chamber
NASA Technical Reports Server (NTRS)
Corey, Kenneth A.
1996-01-01
Sustainable human presence in extreme environments such as lunar and martian bases will require bioregenerative components to human life support systems where plants are used for generation of oxygen, food, and water. Reduced atmospheric pressures will be used to minimize mass and engineering requirements. Few studies have assessed the metabolic and developmental responses of plants to reduced pressure and varied oxygen atmospheres. The first tests of hypobaric pressures on plant gas exchange and biomass production at the Johnson Space Center will be initiated in January 1996 in the Variable Pressure Growth Chamber (VPGC), a large, closed plant growth chamber rated for 10.2 psi. Experiments were designed and protocols detailed for two complete growouts each of lettuce and wheat to generate a general database for human life support requirements and to answer questions about plant growth processes in reduced pressure and varied oxygen environments. The central objective of crop growth studies in the VPGC is to determine the influence of reduced pressure and reduced oxygen on the rates of photosynthesis, dark respiration, evapotranspiration and biomass production of lettuce and wheat. Due to the constraint of one experimental unit, internal controls, called pressure transients, will be used to evaluate rates of CO2 uptake, O2 evolution, and H2O generation. Pressure transients will give interpretive power to the results of repeated growouts at both reduced and ambient pressures. Other experiments involve the generation of response functions to partial pressures of O2 and CO2 and to light intensity. Protocols for determining and calculating rates of gas exchange have been detailed. In order to build these databases and implement the necessary treatment combinations in short time periods, specific requirements for gas injections and removals have been defined. A set of system capability checks will include determination of leakage rates conducted prior to the actual crop growouts. Schedules of experimental events for lettuce and wheat are outlined and include replications in time of diurnal routines, pressure transients, variable pO2, pO2/pCO2 ratio, and light intensity responses.
Computer-Assisted Periodical Routing and Renewal Audit
ERIC Educational Resources Information Center
Yerkey, A. Neil
1973-01-01
A computer-assisted periodical control system was designed to reduce clerical time required to maintain records in three areas: renewal audit, routing, and records-keeping. The renewal audit features are unusual and are described in detail. (3 references) (Author/DH)
Effects of Repair on Structural Integrity.
DOT National Transportation Integrated Search
1993-12-01
Commercial aircraft operators are required by FAA regulations to repair damaged aircraft structures. These repairs must be performed in a timely manner to reduce aircraft downtime and loss of revenue. A guiding principle that has been used for many a...
John-Baptiste, A.; Sowerby, L.J.; Chin, C.J.; Martin, J.; Rotenberg, B.W.
2016-01-01
Background: When prearranged standard surgical trays contain instruments that are repeatedly unused, the redundancy can result in unnecessary health care costs. Our objective was to estimate potential savings by performing an economic evaluation comparing the cost of surgical trays with redundant instruments with surgical trays with reduced instruments ("reduced trays"). Methods: We performed a cost-analysis from the hospital perspective over a 1-year period. Using a mathematical model, we compared the direct costs of trays containing redundant instruments to reduced trays for 5 otolaryngology procedures. We incorporated data from several sources including local hospital data on surgical volume, the number of instruments on redundant and reduced trays, wages of personnel and time required to pack instruments. From the literature, we incorporated instrument depreciation costs and the time required to decontaminate an instrument. We performed 1-way sensitivity analyses on all variables, including surgical volume. Costs were estimated in 2013 Canadian dollars. Results: The cost of redundant trays was $21 806 and the cost of reduced trays was $8803, for a 1-year cost saving of $13 003. In sensitivity analyses, cost savings ranged from $3262 to $21 395, based on the surgical volume at the institution. Variation in surgical volume resulted in a wider range of estimates, with a minimum of $3253 for low-volume to a maximum of $52 012 for high-volume institutions. Interpretation: Our study suggests moderate savings may be achieved by reducing surgical tray redundancy and, if applied to other surgical specialties, may result in savings to Canadian health care systems. PMID:27975045
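A hedged sketch of the per-year cost model the abstract describes, where annual cost scales with surgical volume times per-case packing labour, decontamination labour, and instrument depreciation; all parameter values are assumptions for illustration, not the study's Canadian-dollar inputs:

```python
def annual_tray_cost(n_instruments, surgical_volume, pack_s_per_inst=10.0,
                     decon_s_per_inst=30.0, wage_per_h=25.0,
                     depreciation_per_use=0.05):
    """Yearly cost of one tray type: labour plus depreciation, times volume."""
    labour_h = n_instruments * (pack_s_per_inst + decon_s_per_inst) / 3600.0
    per_case = labour_h * wage_per_h + n_instruments * depreciation_per_use
    return per_case * surgical_volume

volume = 500                                       # procedures/year (assumed)
redundant = annual_tray_cost(n_instruments=60, surgical_volume=volume)
reduced = annual_tray_cost(n_instruments=24, surgical_volume=volume)
print(f"saving ~ ${redundant - reduced:,.0f}/year")
```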
Piezosurgery®-assisted periodontally accelerated osteogenic orthodontics.
Pakhare, Vikas Vilas; Khandait, Chinmay Harishchandra; Shrivastav, Sunita Satish; Dhadse, Prasad Vijayrao; Baliga, Vidya Sudhindhra; Seegavadi, Vasudevan Dwarkanathan
2017-01-01
Periodontally accelerated osteogenic orthodontics has become a useful adjunct for reducing orthodontic treatment time compared with conventional orthodontics. This case demonstrates the use of Piezosurgery® to facilitate rapid tooth movement with a relatively shorter treatment time. A 23-year-old male with Angle's Class I malocclusion, spaced anterior teeth, and protrusion requested orthodontic treatment with a reduced time period. Before surgery, presurgical orthodontic treatment was performed for initial alignment of the teeth. This was followed by piezosurgical corticotomy, and final space closure was achieved by active orthodontic tooth movement. The total time required to complete the orthodontic treatment was 5 months. One-year follow-up revealed no evidence of adverse periodontal effects or relapse. Thus, Piezosurgery®-assisted corticotomy may prove to be a novel and effective treatment approach to decrease orthodontic treatment time.
Chu, Kuan-Yu; Huang, Chunmin
2013-06-13
A smartcard is an integrated circuit card that provides identification, authentication, data storage, and application processing. Among other functions, smartcards can serve as credit and ATM cards and can be used to pay various invoices using a 'reader'. This study looks at the unit cost and activity time of both a traditional cash billing service and a newly introduced smartcard billing service in an outpatient department in a hospital in Taipei, Taiwan. The activity time required in using the cash billing service was determined via a time and motion study. A cost analysis was used to compare the unit costs of the two services. A sensitivity analysis was also performed to determine the effect of smartcard use and number of cashier windows on incremental cost and waiting time. Overall, the smartcard system had a higher unit cost because of the additional service fees and business tax, but it reduced patient waiting time by at least 8 minutes. Thus, it is a convenient service for patients. In addition, if half of all outpatients used smartcards to pay their invoices, along with four cashier windows for cash payments, then the waiting time of cash service users could be reduced by approximately 3 minutes and the incremental cost would be close to breaking even (even though it has a higher overall unit cost that the traditional service). Traditional cash billing services are time consuming and require patients to carry large sums of money. Smartcard services enable patients to pay their bill immediately in the outpatient clinic and offer greater security and convenience. The idle time of nurses could also be reduced as they help to process smartcard payments. A reduction in idle time reduces hospital costs. However, the cost of the smartcard service is higher than the cash service and, as such, hospital administrators must weigh the costs and benefits of introducing a smartcard service. In addition to the obvious benefits of the smartcard service, there is also scope to extend its use in a hospital setting to include the notification of patient arrival and use in other departments.
Experimental Study on Semi-Dry Flue Gas Desulfurization Ash Used in Steel Slag Composite Material
NASA Astrophysics Data System (ADS)
Lu, Lijun; Fang, Honghui
This article presents an experimental study on using desulfurization ash in steel slag composite material, investigating the influence of the desulfurization ash content in formula one and formula two samples on the setting time and strength of mortar. For formula one the following conclusions were reached: (1) a setting time of more than 10 hours is required; (2) a desulfurization ash dosage of 1-2% is optimal, with flexural strength reduced by 10-23% and compressive strength reduced by 5.7-16.4%. The conclusions for formula two were: (1) when the dosage of desulfurization ash is within 5%, the setting time is within 10 hours; (2) when the dosage of desulfurization ash is 1-2%, the flexural strength is increased by 5-7% and the compressive strength is reduced by 1-2%. The results show that formula two is better.
Biotechnology Research Requirements for Aeronautical Systems through the Year 2000. Volume 1
1982-07-30
Reducing the aero vehicle's detectibility by reducing the optical, electro, acoustic, and infrared signature... contain radioactive fallout... Pulsed... arsenal devoted to chemical agents; reported use of mycotoxins and agent doses. At this time, basic biochemical and pharmacological data are minimal and... mass spectroscopy, quartz microbalances, industrial hygiene dosimetry... chemical agents must be developed. This research should quantify human...
Real-Time Simulation of Ares I Launch Vehicle
NASA Technical Reports Server (NTRS)
Tobbe, Patrick; Matras, Alex; Wilson, Heath; Alday, Nathan; Walker, David; Betts, Kevin; Hughes, Ryan; Turbe, Michael
2009-01-01
The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory (SIL) at the Marshall Space Flight Center (MSFC). The primary purpose of the Ares SIL is to test the vehicle avionics hardware and software in a hardware-in-the-loop (HWIL) environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time software backbone to stimulate all required Ares components through high-fidelity simulation. ARTEMIS has been designed to take full advantage of the advances in underlying computational power now available to support HWIL testing. A modular real-time design relying on a fully distributed computing architecture has been achieved. Two fundamental requirements drove ARTEMIS to pursue the use of high-fidelity simulation models in a real-time environment. First, ARTEMIS must be used to test a man-rated integrated avionics hardware and software system, thus requiring a wide variety of nominal and off-nominal simulation capabilities to certify system robustness. The second driving requirement - derived from a nationwide review of current state-of-the-art HWIL facilities - was that preserving digital model fidelity significantly reduced overall vehicle lifecycle cost by reducing testing time for certification runs and increasing flight tempo through an expanded operational envelope. These two driving requirements necessitated the use of high-fidelity models throughout the ARTEMIS simulation. The nature of the Ares mission profile imposed a variety of additional requirements on the ARTEMIS simulation. The Ares I vehicle is composed of multiple elements, including the First Stage Solid Rocket Booster (SRB), the Upper Stage powered by the J- 2X engine, the Orion Crew Exploration Vehicle (CEV) which houses the crew, the Launch Abort System (LAS), and various secondary elements that separate from the vehicle. At launch, the integrated vehicle stack is composed of these stages, and throughout the mission, various elements separate from the integrated stack and tumble back towards the earth. ARTEMIS must be capable of simulating the integrated stack through the flight as well as propagating each individual element after separation. In addition, abort sequences can lead to other unique configurations of the integrated stack as the timing and sequence of the stage separations are altered.
Madara Marasinghe, Keshini
2016-01-01
The world population is rapidly ageing. As populations age, the incidence of functional limitations increases, demanding higher levels of care from caregivers. Assistive technologies improve individuals' functioning, independence, well-being and quality of life. By increasing the independence of older adults, assistive technologies decrease the workload required from informal caregivers. This review investigates, evaluates, and synthesises existing findings to examine whether and how assistive technologies reduce caregiver burden. Databases searched included MEDLINE, EMBASE, Scopus, and Cochrane Library. Three groups of keywords were combined: those relating to assistive technology, caregiver burden, and older adults. Two theories emerged from the analysis of study results. Caregivers reported that assistive technologies decrease caregiver burden. However, caregivers had concerns that assistive technologies could add to caregiver burden, highlighting the limitations of assistive technology. As suggested by a majority of the studies in this review, assistive technologies contribute to reducing caregiver burden among caregivers of older adults. Assistive technologies assisted caregivers by reducing the time, level of assistance and energy put towards caregiving, anxiety and fear, task difficulty, and safety risk, particularly for activities requiring physical assistance, and by increasing the independence of the users. Further research is required to better understand the limitations of assistive technologies. Implications for Rehabilitation: Support for informal caregivers of older adults needs more attention and recognition. Assistive technologies can reduce caregiver burden among informal caregivers of older adults. Further research is required to better understand the effectiveness of assistive technologies in reducing caregiver burden as well as limitations and barriers associated with using assistive technologies.
Practical solutions for reducing container ships' waiting times at ports using simulation model
NASA Astrophysics Data System (ADS)
Sheikholeslami, Abdorreza; Ilati, Gholamreza; Yeganeh, Yones Eftekhari
2013-12-01
The main challenge for container ports is the planning required for berthing container ships while docked in port. The growth of containerization is creating problems for ports and container terminals as they reach the capacity limits of various resources, which increasingly leads to traffic and port congestion. Good planning and management of container terminal operations reduces waiting time for liner ships. Reducing the waiting time improves the terminal's productivity and decreases the port's difficulties. Two important keys to reducing waiting time with berth allocation are determining suitable access channel depths and increasing the number of berths, which in this paper are studied and analyzed as practical solutions. Simulation-based analysis is the only way to understand how various resources interact with each other and how they affect the berthing time of ships. We used the Enterprise Dynamics software to produce simulation models due to the complexity and nature of the problems. We further present a case study of berth allocation simulation for the biggest container terminal in Iran; the optimum access channel depth and number of berths are obtained from the simulation results. The results show a significant reduction in the waiting time for container ships and can be useful for major functions in the operation and development of container ship terminals.
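As a hedged back-of-envelope companion to the simulation, an M/M/c queue (Poisson ship arrivals, exponential berth service, c berths) shows how mean waiting time collapses as berths are added; the arrival and service rates are assumptions, and the paper itself relies on a discrete-event model rather than this closed form:

```python
import math

def mmc_mean_wait(lam, mu, c):
    """Mean queueing delay for an M/M/c system via the Erlang C formula."""
    a, rho = lam / mu, lam / (c * mu)
    assert rho < 1, "offered load exceeds berth capacity"
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    erlang_c = a**c / (math.factorial(c) * (1 - rho)) * p0
    return erlang_c / (c * mu - lam)

lam, mu = 4.0, 1.0                  # 4 ships/day arrive; each berth serves 1/day
for c in (5, 6, 7, 8):              # adding berths collapses the mean wait
    print(c, "berths:", f"{mmc_mean_wait(lam, mu, c) * 24:.1f} h")
```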
Time Resolved FTIR Analysis of Tailpipe Exhaust for Several Automobiles
NASA Astrophysics Data System (ADS)
White, Allen R.; Allen, James; Devasher, Rebecca B.
2011-06-01
The automotive catalytic converter reduces or eliminates the emission of various chemical species (e.g. CO, hydrocarbons, etc.) that are products of combustion in automobile exhaust. However, these units are only effective once they have reached operating temperature. The design and placement of catalytic converters have changed in order to reduce both the quantity of emissions and the time required for the converter to become effective. To compare the effectiveness of catalytic converters, time-resolved measurements were performed on several vehicles: a 2010 Toyota Prius, a 2010 Honda Fit, a 1994 Honda Civic, and a 1967 Oldsmobile 442 (which is not equipped with a catalytic converter and is used as a baseline). The newer vehicles demonstrate both a reduced overall level of CO and hydrocarbon emissions and converters that become effective more quickly than older units. The time-resolved emissions will be discussed along with the impact of catalytic converter design and location on the measured emissions.
NASA Technical Reports Server (NTRS)
Patniak, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
1998-01-01
Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.
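A hedged sketch of the regression branch of that approach: an expensive analyzer is evaluated once on a small training set, and a polynomial least-squares surrogate answers subsequent queries almost instantly. The analyzer below is a stand-in function, not the Flight Optimization System:

```python
import numpy as np

def expensive_analyzer(x):                      # stand-in for the real analysis code
    return np.sin(3.0 * x) + 0.5 * x**2

x_train = np.linspace(-1.0, 1.0, 25)            # training input-output pairs
y_train = expensive_analyzer(x_train)
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=9))  # fit once, reuse cheaply

x_test = np.array([-0.37, 0.12, 0.81])
err = np.abs(surrogate(x_test) - expensive_analyzer(x_test)).max()
print(f"max surrogate error: {err:.2e}")        # small on the training domain
```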
Ramp Technology and Intelligent Processing in Small Manufacturing
NASA Technical Reports Server (NTRS)
Rentz, Richard E.
1992-01-01
To address the issues of excessive inventories and increasing procurement lead times, the Navy is actively pursuing flexible computer integrated manufacturing (FCIM) technologies, integrated by communication networks to respond rapidly to its requirements for parts. The Rapid Acquisition of Manufactured Parts (RAMP) program, initiated in 1986, is an integral part of this effort. The RAMP program's goal is to reduce the current average production lead times experienced by the Navy's inventory control points by a factor of 90 percent. The manufacturing engineering component of the RAMP architecture utilizes an intelligent processing technology built around a knowledge-based shell provided by ICAD, Inc. Rules and data bases in the software simulate an expert manufacturing planner's knowledge of shop processes and equipment. This expert system can use Product Data Exchange using STEP (PDES) data to determine what features the required part has, what material is required to manufacture it, what machines and tools are needed, and how the part should be held (fixtured) for machining, among other factors. The program's rule base then indicates, for example, how to make each feature, in what order to make it, and to which machines on the shop floor the part should be routed for processing. This information becomes part of the shop work order. The process planning function under RAMP greatly reduces the time and effort required to complete a process plan. Since the PDES file that drives the intelligent processing is 100 percent complete and accurate to start with, the potential for costly errors is greatly diminished.
NASA Astrophysics Data System (ADS)
Aviles-Espinosa, Rodrigo; Santos, Susana I. C. O.; Brodschelm, Andreas; Kaenders, Wilhelm G.; Alonso-Ortega, Cesar; Artigas, David; Loza-Alvarez, Pablo
2011-03-01
In-vivo microscopic long-term time-lapse studies require controlled imaging conditions to preserve sample viability, so it is crucial to meet specific exposure conditions, as these may limit the applicability of established techniques. In this work we demonstrate the use of third harmonic generation (THG) microscopy for long-term time-lapse three-dimensional (4D) studies in living Caenorhabditis elegans embryos employing a 1550 nm femtosecond fiber laser. We take advantage of the fact that THG requires only the existence of interfaces, a change in the refractive index, or a change in the χ3 nonlinear coefficient to generate signal; therefore no markers are required. In addition, at this excitation wavelength the emitted THG signal falls at visible wavelengths (516 nm), enabling the use of standard collection optics and detectors operating near their maximum efficiency. This allows the incident light intensity at the sample plane to be reduced, so the sample can be imaged for several hours. THG signal is obtained through all embryo development stages, providing different tissue/structure information. By means of control samples, we demonstrate that the expected water absorption at this wavelength does not severely compromise sample viability. Notably, this technique reduces the complexity of sample preparation (i.e. genetic modification) required by established linear and nonlinear fluorescence-based techniques. We demonstrate the non-invasiveness, reduced specimen interference, and strong potential of this particular wavelength for long-term 4D recordings.
USDA-ARS?s Scientific Manuscript database
Due the biennial generation time of onion, classical crossing takes at least four years to classify cytoplasms as normal (N) male-fertile or male-sterile (S). Molecular markers in the organellar DNAs that distinguish N and S cytoplasms are useful to reduce the time required to classify onion cytopla...
NASA Astrophysics Data System (ADS)
Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing
2014-07-01
Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), widely used for imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by sampling below the rate dictated by the Shannon sampling theorem, but it still requires considerable time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional distinctive benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and providing real-time display in SICM imaging.
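A hedged toy of block compressive sensing: a sparse signal is split into blocks, each sampled with its own small random matrix and reconstructed independently by orthogonal matching pursuit (OMP), which is what makes early, per-block display possible. Block size, measurement count, and sparsity are illustrative; the paper's dividing-block scheme and matrix arithmetic are its own:

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A @ x by orthogonal matching pursuit."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))     # best-matching column
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                         # update residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(2)
n_blocks, n, m, k = 4, 64, 24, 3       # 64-sample blocks, 24 measurements, 3-sparse
signal = np.zeros(n_blocks * n)
for b in range(n_blocks):
    signal[b * n + rng.choice(n, k, replace=False)] = rng.standard_normal(k)

recon = np.zeros_like(signal)
for b in range(n_blocks):                          # each block reconstructed alone,
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # so partial images show up early
    block = signal[b * n:(b + 1) * n]
    recon[b * n:(b + 1) * n] = omp(A, A @ block, k)
print(f"max reconstruction error: {np.abs(recon - signal).max():.1e}")
```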
NASA Astrophysics Data System (ADS)
Salmon, B. P.; Kleynhans, W.; Olivier, J. C.; van den Bergh, F.; Wessels, K. J.
2018-05-01
Humans are transforming land cover at an ever-increasing rate. Accurate geographical maps of land cover, especially rural and urban settlements, are essential to planning sustainable development. Time series extracted from MODerate resolution Imaging Spectroradiometer (MODIS) land surface reflectance products have been used to differentiate land cover classes by analyzing the seasonal patterns in reflectance values. The proper fitting of a parametric model to these time series usually requires several adjustments to the regression method. To reduce the workload, the regression method's parameters are usually set globally for an entire geographical area. In this work we have modified a meta-optimization approach so that the regression method is set on a per-time-series basis. The standard deviation of the model parameters and the magnitude of the residuals are used as the scoring function. We successfully fitted a triply modulated model to the seasonal patterns of our study area using a non-linear extended Kalman filter (EKF). The approach uses temporal information, which significantly reduces the processing time and storage requirements for processing each time series. It also derives reliability metrics for each time series individually. The features extracted using the proposed method are classified with a support vector machine, and the performance of the method is compared to the original approach on our ground truth data.
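A hedged sketch of fitting a triply modulated seasonal model with an EKF: the state [mu, alpha, phi] of y_t = mu + alpha*cos(w*t + phi) follows a random walk and is updated against a noisy NDVI-like series. The noise levels and 46-sample annual cycle are illustrative assumptions, not the study's tuned settings:

```python
import numpy as np

w = 2.0 * np.pi / 46.0                    # one seasonal cycle per 46 samples (assumed)
rng = np.random.default_rng(3)
t = np.arange(5 * 46)
y = 0.4 + 0.2 * np.cos(w * t + 0.7) + 0.02 * rng.standard_normal(t.size)

x = np.array([0.3, 0.1, 0.0])             # initial guess for [mu, alpha, phi]
P, Q, R = np.eye(3) * 0.1, np.eye(3) * 1e-6, 0.02**2
for ti, yi in zip(t, y):
    P = P + Q                             # predict: random-walk parameters
    c = np.cos(w * ti + x[2])
    H = np.array([1.0, c, -x[1] * np.sin(w * ti + x[2])])   # Jacobian of h(x)
    K = P @ H / (H @ P @ H + R)           # Kalman gain
    x = x + K * (yi - (x[0] + x[1] * c))  # update with the innovation
    P = P - np.outer(K, H) @ P
print(x.round(3))                         # should approach [0.4, 0.2, 0.7]
```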
POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ştefănescu, R., E-mail: rstefane@vt.edu; Sandu, A., E-mail: sandu@cs.vt.edu; Navon, I.M., E-mail: inavon@fsu.edu
2015-08-15
This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that reduced order Karush–Kuhn–Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced-order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov–Galerkin projections. For the first time, POD, tensorial POD, and the discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov–Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
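For readers unfamiliar with POD, the core construction is a truncated SVD of a snapshot matrix followed by Galerkin projection. The sketch below shows that construction for a generic linear operator; the 99.9% energy criterion and the linear model are illustrative assumptions, and the paper's tensorial POD and DEIM treatments of nonlinear terms go beyond this.

```python
import numpy as np

def pod_basis(X, energy=0.999):
    # X: snapshot matrix, one state vector per column
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)      # fraction of energy captured
    k = int(np.searchsorted(cum, energy)) + 1
    return U[:, :k]                           # reduced-order basis

def galerkin_project(A, U):
    # reduced operator for a linear model x' = A x
    return U.T @ A @ U
```

In a 4D-Var setting the same basis is used to project both the forward and the adjoint models, which is exactly where the Karush–Kuhn–Tucker consistency requirement mentioned above enters.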
NASA Technical Reports Server (NTRS)
Kapoor, Manju M.; Mehta, Manju
2010-01-01
The goal of this paper is to emphasize the importance of developing complete and unambiguous requirements early in the project cycle (prior to Preliminary Design Phase). Having a complete set of requirements early in the project cycle allows sufficient time to generate a traceability matrix. Requirements traceability and analysis are the key elements in improving verification and validation process, and thus overall software quality. Traceability can be most beneficial when the system changes. If changes are made to high-level requirements it implies that low-level requirements need to be modified. Traceability ensures that requirements are appropriately and efficiently verified at various levels whereas analysis ensures that a rightly interpreted set of requirements is produced.
Control Aspects of Highly Constrained Guidance Techniques
1978-02-01
cycle. The advantages of this approach are (1) it requires only one time-consuming computation of the platform-to-body transformation matrix from ... of steering gain corresponding to the three autopilot configurations ... 2.7 Terminal Steering ... a time-consuming process, so it is desirable to consider ways of reducing the computation time by approximating the elements of B and/or updating
Cheng, Kung-Shan; Dewhirst, Mark W; Stauffer, Paul R; Das, Shiva
2010-03-01
This paper investigates overall theoretical requirements for reducing the times required for the iterative learning of a real-time image-guided adaptive control routine for multiple-source heat applicators, as used in hyperthermia and thermal ablative therapy for cancer. Methods for partial reconstruction of the physical system, with and without model reduction, to find solutions within a clinically practical timeframe were analyzed. A mathematical analysis based on the Fredholm alternative theorem (FAT) was used to compactly analyze the existence and uniqueness of the optimal heating vector under two fundamental situations: (1) noiseless partial reconstruction and (2) noisy partial reconstruction. These results were coupled with a method for further acceleration of the solution using virtual source (VS) model reduction. The matrix approximation theorem (MAT) was used to choose the optimal vectors spanning the reduced-order subspace, to reduce the time for system reconstruction, and to determine the associated approximation error. Numerical simulations of the adaptive control of hyperthermia using VS were also performed to test the predictions derived from the theoretical analysis. A thigh sarcoma patient model surrounded by a ten-antenna phased-array applicator was retained for this purpose. The impacts of convective cooling from blood flow and of a sudden increase of perfusion in muscle and tumor were also simulated. By FAT, partial system reconstruction conducted directly in the full space of the physical variables, such as phases and magnitudes of the heat sources, cannot guarantee reconstructing the optimal system to determine the global optimal setting of the heat sources. A remedy for this limitation is to conduct the partial reconstruction within a reduced-order subspace spanned by the first few maximum eigenvectors of the true system matrix. By MAT, this VS subspace is the optimal one when the goal is to maximize the average tumor temperature. When more than six sources are present, the number of learning steps required by a nonlinear scheme is theoretically smaller than that of a linear one; however, a finite number of iterative corrections is necessary within each learning step of a nonlinear algorithm. Thus, the actual computational workload for a nonlinear algorithm is not necessarily less than that required by a linear algorithm. Based on the analysis presented herein, obtaining a unique global optimal heating vector for a multiple-source applicator within the constraints of real-time clinical hyperthermia treatments and thermal ablative therapies appears attainable using partial reconstruction with a minimum-norm least-squares method supplemented by additional equations. One way to supplement equations is the inclusion of a method of model reduction.
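The virtual-source idea above amounts to restricting the search to the span of the dominant eigenvectors of the system matrix. Below is a minimal sketch of that reduction, assuming a Hermitian system matrix M relating the source vector to average tumor heating; the names and the use of a plain eigendecomposition are illustrative.

```python
import numpy as np

def virtual_sources(M, k):
    # M: Hermitian system matrix (e.g., average tumor heating quadratic form)
    w, V = np.linalg.eigh(M)                  # eigenvalues in ascending order
    return V[:, np.argsort(w)[::-1][:k]]      # k dominant eigenvectors

# optimize k "virtual" amplitudes c instead of the full source vector u:
# u = B @ c, where B = virtual_sources(M, k) and k << number of antennas
```

Reconstructing the reduced k-by-k system needs far fewer measurements than the full antenna-space system, which is where the claimed acceleration of the iterative learning comes from.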
The role of series ankle elasticity in bipedal walking
Zelik, Karl E.; Huang, Tzu-Wei P.; Adamczyk, Peter G.; Kuo, Arthur D.
2014-01-01
The elastic stretch-shortening cycle of the Achilles tendon during walking can reduce the active work demands on the plantarflexor muscles in series. However, this does not explain why or when this ankle work, whether by muscle or tendon, needs to be performed during gait. We therefore employ a simple bipedal walking model to investigate how ankle work and series elasticity impact economical locomotion. Our model shows that ankle elasticity can use passive dynamics to aid push-off late in single support, redirecting the body's center-of-mass (COM) motion upward. An appropriately timed, elastic push-off helps to reduce dissipative collision losses at contralateral heelstrike, and therefore the positive work needed to offset those losses and power steady walking. Thus, the model demonstrates how elastic ankle work can reduce the total energetic demands of walking, including work required from more proximal knee and hip muscles. We found that the key requirement for using ankle elasticity to achieve economical gait is the proper ratio of ankle stiffness to foot length. Optimal combination of these parameters ensures proper timing of elastic energy release prior to contralateral heelstrike, and sufficient energy storage to redirect the COM velocity. In fact, there exist parameter combinations that theoretically yield collision-free walking, thus requiring zero active work, albeit with relatively high ankle torques. Ankle elasticity also allows the hip to power economical walking by contributing indirectly to push-off. Whether walking is powered by the ankle or hip, ankle elasticity may aid walking economy by reducing collision losses. PMID:24365635
A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.
Röhl, Annika; Bockmayr, Alexander
2017-01-03
Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousand reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks in case several exist. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but all minimum subnetworks satisfying the required properties.
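The basic Burgard-style formulation couples a binary on/off indicator to each reaction flux with big-M constraints and minimizes the number of active reactions subject to steady state and a growth requirement. Below is a hedged sketch of that formulation using the PuLP library; the variable names, the big-M bound, and the single biomass constraint are illustrative simplifications of the paper's richer set of biological requirements.

```python
import pulp

def minimum_subnetwork(S, biomass_idx, v_min_growth, v_max=1000.0):
    # S: stoichiometric matrix as a NumPy array (metabolites x reactions)
    m, n = S.shape
    prob = pulp.LpProblem("min_subnetwork", pulp.LpMinimize)
    v = [pulp.LpVariable(f"v{j}", -v_max, v_max) for j in range(n)]
    y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n)]
    prob += pulp.lpSum(y)                          # minimize active reactions
    for i in range(m):                             # steady state: S v = 0
        prob += pulp.lpSum(S[i, j] * v[j] for j in range(n)) == 0
    for j in range(n):                             # big-M: v_j = 0 unless y_j = 1
        prob += v[j] <= v_max * y[j]
        prob += v[j] >= -v_max * y[j]
    prob += v[biomass_idx] >= v_min_growth         # growth requirement
    prob.solve()
    return [j for j in range(n) if y[j].value() > 0.5]
```

Enumerating all minimum subnetworks, as the paper does, can be achieved by re-solving with an added integer cut that excludes each solution already found.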
Janssens, Sarah; Beckmann, Michael; Bonney, Donna
2015-08-01
Simulation training in laparoscopic surgery has been shown to improve surgical performance. This study describes the implementation of a laparoscopic simulation training and credentialing program for gynaecology registrars. A pilot program, consisting of protected, supervised laparoscopic simulation time, a tailored curriculum and a credentialing process, was developed and implemented. Quantitative measures of simulated surgical performance were recorded over the simulation training period. Laparoscopic procedures requiring credentialing were assessed for both the frequency of a registrar being the primary operator and the duration of surgery, and compared to a presimulation cohort. Qualitative measures regarding quality of surgical training were assessed pre- and postsimulation. Improvements were seen in simulated surgical performance in efficiency domains. Operative time for procedures requiring credentialing was reduced by 12%. Primary operator status in the operating theatre for registrars was unchanged. Registrar assessment of training quality improved. The introduction of a laparoscopic simulation training and credentialing program resulted in improvements in simulated performance, reduced operative time and improved registrar assessment of the quality of training.
Numerical Analysis of Heat Transfer During Quenching Process
NASA Astrophysics Data System (ADS)
Madireddi, Sowjanya; Krishnan, Krishnan Nambudiripad; Reddy, Ammana Satyanarayana
2018-04-01
A numerical model is developed to simulate the immersion quenching process of metals. The time of quench plays an important role if the process involves a defined step quenching schedule to obtain the desired characteristics. The lumped heat capacity analysis used for this purpose requires the value of the heat transfer coefficient, whose evaluation requires extensive experimental data. Experimentation on a sample work piece may not represent the actual component, which may differ in dimensions. A fluid-structure interaction technique with a coupled interface between the solid (metal) and liquid (quenchant) is used for the simulations. The initial stage of quenching shows boiling heat transfer with high values of the heat transfer coefficient (5000-2.5 × 10^5 W/m^2 K). For work pieces of equal dimensions, shape has little influence on the cooling rate. Non-uniformity in hardness at the sharp corners can be reduced by rounding off the edges. For a square piece of 20 mm thickness with a 3 mm fillet radius, this difference is reduced by 73%. The model can be used for any metal-quenchant combination to obtain time-temperature data without the necessity of experimentation.
Time-Shifted Boundary Conditions Used for Navier-Stokes Aeroelastic Solver
NASA Technical Reports Server (NTRS)
Srivastava, Rakesh
1999-01-01
Under the Advanced Subsonic Technology (AST) Program, an aeroelastic analysis code (TURBO-AE) based on the Navier-Stokes equations is currently under development at NASA Lewis Research Center's Machine Dynamics Branch. For a blade row, aeroelastic instability can occur at any of the possible interblade phase angles (IBPAs). Analyzing small IBPAs is very computationally expensive because a large number of blade passages must be simulated. To reduce the computational cost of these analyses, we used time-shifted, or phase-lagged, boundary conditions in the TURBO-AE code. These conditions reduce the computational domain to a single blade passage by requiring the boundary conditions across the passage to be lagged according to the IBPA being analyzed. The time-shifted boundary conditions currently implemented are based on the direct-store method. This method requires large amounts of data to be stored over a period of the oscillation cycle. On CRAY computers this is not a major problem because solid-state devices can be used for fast input and output to read and write the data onto a disk instead of storing it in core memory.
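The direct-store bookkeeping is simple to sketch: keep one full oscillation period of boundary history and read it back with an offset proportional to the interblade phase angle. The class below is a minimal illustration under assumed names; the actual TURBO-AE implementation manages this storage per boundary face and flow variable.

```python
import numpy as np

class DirectStoreBC:
    def __init__(self, n_boundary, n_period, ibpa):
        # one oscillation period of boundary history (n_period time steps)
        self.buf = np.zeros((n_period, n_boundary))
        self.n_period = n_period
        # interblade phase angle (radians) converted to whole time steps
        self.lag = int(round(ibpa / (2 * np.pi) * n_period))

    def store(self, step, boundary_values):
        self.buf[step % self.n_period] = boundary_values

    def lagged(self, step):
        # what the neighboring passage held `lag` steps earlier
        return self.buf[(step - self.lag) % self.n_period]
```

The storage cost is one period of history per periodic boundary, which is exactly the large direct-store footprint the abstract notes; on machines with fast solid-state I/O the buffer can live on disk instead of in core memory.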
Impact of 50% Synthesized Iso-Paraffins (SIP) on Middle Distillate Fuel Filtration and Coalescence
2014-10-30
Paraffins. DEFINITIONS: Coalescence - the ability to shed water. Conventional Material Source - crude oil, natural gas liquid condensates ... heavy oil, shale oil, and oil sands. Effluent - stream leaving a system. Influent - stream entering a system. Turnover - time required to flow the ... separators are used onboard naval vessels (required onboard gas turbine ships and some diesel engine ships) and at shore stations to reduce solid and free
Digital Control Technologies for Modular DC-DC Converters
NASA Technical Reports Server (NTRS)
Button, Robert M.; Kascak, Peter E.; Lebron-Velilla, Ramon
2002-01-01
Recent trends in aerospace Power Management and Distribution (PMAD) systems focus on using commercial off-the-shelf (COTS) components as standard building blocks. This move to more modular designs has been driven by a desire to reduce costs and development times, but is also due to the impressive power density and efficiency numbers achieved by today's commercial DC-DC converters. However, the PMAD designer quickly learns of the hidden "costs" of using COTS converters. The most significant cost is the required addition of external input filters to meet strict electromagnetic interference (EMI) requirements for space systems. In fact, the high power density numbers achieved by the commercial manufacturers are largely due to the lack of necessary input filters in the COTS module. The NASA Glenn Research Center is currently pursuing a digital control technology that addresses this problem with modular DC-DC converters. This paper presents the digital control technologies that have been developed to greatly reduce the input filter requirements for paralleled, modular DC-DC converters. Initial test results show that the input filter's inductor size was reduced by 75 percent, and the capacitor size was reduced by 94 percent, while maintaining the same power quality specifications.
Bayesian networks in overlay recipe optimization
NASA Astrophysics Data System (ADS)
Binns, Lewis A.; Reynolds, Greg; Rigden, Timothy C.; Watkins, Stephen; Soroka, Andrew
2005-05-01
Currently, overlay measurements are characterized by "recipe", which defines both physical parameters such as focus, illumination et cetera, and also the software parameters such as algorithm to be used and regions of interest. Setting up these recipes requires both engineering time and wafer availability on an overlay tool, so reducing these requirements will result in higher tool productivity. One of the significant challenges to automating this process is that the parameters are highly and complexly correlated. At the same time, a high level of traceability and transparency is required in the recipe creation process, so a technique that maintains its decisions in terms of well defined physical parameters is desirable. Running time should be short, given the system (automatic recipe creation) is being implemented to reduce overheads. Finally, a failure of the system to determine acceptable parameters should be obvious, so a certainty metric is also desirable. The complex, nonlinear interactions make solution by an expert system difficult at best, especially in the verification of the resulting decision network. The transparency requirements tend to preclude classical neural networks and similar techniques. Genetic algorithms and other "global minimization" techniques require too much computational power (given system footprint and cost requirements). A Bayesian network, however, provides a solution to these requirements. Such a network, with appropriate priors, can be used during recipe creation / optimization not just to select a good set of parameters, but also to guide the direction of search, by evaluating the network state while only incomplete information is available. As a Bayesian network maintains an estimate of the probability distribution of nodal values, a maximum-entropy approach can be utilized to obtain a working recipe in a minimum or near-minimum number of steps. In this paper we discuss the potential use of a Bayesian network in such a capacity, reducing the amount of engineering intervention. We discuss the benefits of this approach, especially improved repeatability and traceability of the learning process, and quantification of uncertainty in decisions made. We also consider the problems associated with this approach, especially in detailed construction of network topology, validation of the Bayesian network and the recipes it generates, and issues arising from the integration of a Bayesian network with a complex multithreaded application; these primarily relate to maintaining Bayesian network and system architecture integrity.
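One concrete way to read the maximum-entropy search mentioned above: at each step, probe the recipe parameter whose posterior distribution is most uncertain, so each measurement is maximally informative. The sketch below is an illustrative fragment, not the authors' system; the parameter names and distributions are invented for the example.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))            # Shannon entropy in bits

def next_parameter(posteriors):
    # posteriors: parameter name -> current discrete belief over its settings
    return max(posteriors, key=lambda k: entropy(posteriors[k]))

beliefs = {"focus": np.array([0.5, 0.5]),          # most uncertain
           "illumination": np.array([0.9, 0.1])}   # nearly decided
print(next_parameter(beliefs))                      # -> "focus"
```

After each measurement the Bayesian network updates every correlated belief, so the number of probing steps needed to converge on a working recipe stays near the minimum, which is the source of the claimed reduction in engineering time.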
Quality of Service for Real-Time Applications Over Next Generation Data Networks
NASA Technical Reports Server (NTRS)
Atiquzzaman, Mohammed; Jain, Raj
2001-01-01
This project, which started on January 1, 2000, was funded by the NASA Glenn Research Center for a duration of one year. The deliverables of the project included the following tasks: (1) study of QoS mapping between the edge and core networks envisioned in next generation networks, to establish the QoS guarantees that can be obtained from such networks; (2) buffer management techniques to provide strict guarantees to real-time end-to-end applications through preferential treatment of packets belonging to real-time applications, in particular the use of ECN to help reduce loss on high bandwidth-delay product satellite networks; (3) the effect of prioritized packet discard on increasing the goodput of the network and reducing the buffering requirements in the ATM switches; (4) provision of new IP circuit emulation services over satellite IP backbones using MPLS; and (5) determination of the architecture and requirements for internetworking ATN and the Next Generation Internet for real-time applications. The project has been completed on time. All the objectives and deliverables of the project have been completed. Research results obtained from this project have been published in a number of papers in journals, conferences, and technical reports, included in this document.
Posture and activity recognition and energy expenditure prediction in a wearable platform.
Sazonova, Nadezhda; Browning, Raymond; Melanson, Edward; Sazonov, Edward
2014-01-01
The use of wearable sensors coupled with the processing power of mobile phones may be an attractive way to provide real-time feedback about physical activity and energy expenditure (EE). Here we describe the use of a shoe-based wearable sensor system (SmartShoe) with a mobile phone for real-time prediction and display of time spent in various postures/physical activities and the resulting EE. To deal with the processing power and memory limitations of the phone, we introduce new algorithms that require substantially less computational power. The algorithms were validated using data from 15 subjects who performed up to 15 different activities of daily living during a four-hour stay in a room calorimeter. Use of Multinomial Logistic Discrimination (MLD) for posture and activity classification resulted in an accuracy comparable to that of Support Vector Machines (SVM) (90% vs. 95%-98%) while reducing the running time by a factor of 190 and reducing the memory requirement by a factor of 104. Per-minute EE estimation using activity-specific models resulted in accurate EE prediction (RMSE of 0.53 METs vs. RMSE of 0.69 METs using previously reported SVM-branched models). These results demonstrate successful implementation of a real-time physical activity monitoring and EE prediction system on a wearable platform.
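Multinomial logistic discrimination is the multiclass generalization of logistic regression: one linear model per class followed by a softmax, which is why inference costs only a single matrix-vector product per epoch of sensor data. A minimal sketch with synthetic data follows; the feature count, class count, and scikit-learn solver settings are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 12))   # e.g., pressure/acceleration features
y = rng.integers(0, 4, size=600)     # posture/activity labels

clf = LogisticRegression(multi_class="multinomial", max_iter=500)
clf.fit(X, y)

# inference is one dot product plus a softmax per sample, cheap enough
# for continuous on-phone classification
probs = clf.predict_proba(X[:1])
```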
Joint Contracture Orthosis (JCO)
NASA Technical Reports Server (NTRS)
Lunsford, Thomas R.; Parsons, Ken; Krouskop, Thomas; McGee, Kevin
1997-01-01
The purpose of this project was to develop an advanced orthosis which is effective in reducing upper and lower limb contractures in significantly less time than currently required with conventional methods. The team that developed the JCO consisted of an engineer, orthotist, therapist, and physician.
ERIC Educational Resources Information Center
Jesberg, Robert O.; Dowden, Edward
1986-01-01
Explains how computer and game port interfacing reduces the time required for data collection and organization and also stimulates student interest in science laboratory exercises. Illustrates this approach through a description of a population-variation lab. Includes diagrams for the construction of the interface box. (ML)
ERIC Educational Resources Information Center
Cowan, William M.
1984-01-01
Complying with regulations that require tactile signs to assist disabled persons is not as onerous as it seems. An intelligently developed signage system will reduce the amount of staff time needed to assist disabled people, most of whom prefer to find their own way. (TE)
Acton, W. Joe; Lanza, Matteo; Agarwal, Bishu; Jürschik, Simone; Sulzer, Philipp; Breiev, Kostiantyn; Jordan, Alfons; Hartungen, Eugen; Hanel, Gernot; Märk, Lukas; Mayhew, Chris A.; Märk, Tilmann D.
2014-01-01
The rapid expansion in the number and use of new psychoactive substances presents a significant analytical challenge, because highly sensitive instrumentation capable of detecting a broad range of chemical compounds in real time with a low rate of false positives is required. A Selective Reagent Ionisation-Time of Flight-Mass Spectrometry (SRI-ToF-MS) instrument is capable of meeting all of these requirements. With its high mass resolution (up to m/Δm of 8000), the application of variations in reduced electric field strength (E/N) and the use of different reagent ions, the ambiguity of a nominal (monoisotopic) m/z is reduced and hence the identification of chemicals in a complex chemical environment with a high level of confidence is enabled. In this study we report the use of an SRI-ToF-MS instrument to investigate the reactions of H3O+, O2+, NO+ and Kr+ with 10 readily available (at the time of purchase) new psychoactive substances, namely 4-fluoroamphetamine, methiopropamine, ethcathinone, 4-methylethcathinone, N-ethylbuphedrone, ethylphenidate, 5-MeO-DALT, dimethocaine, 5-(2-aminopropyl)benzofuran and nitracaine. In particular, the dependence of product ion branching ratios on the reduced electric field strength for all reagent ions was investigated and is reported here. The results represent a significant amount of new data that will be of use for the development of drug detection techniques suitable for real-world scenarios. PMID:25844048
A Deadline-Aware Scheduling and Forwarding Scheme in Wireless Sensor Networks.
Dao, Thi-Nga; Yoon, Seokhoon; Kim, Jangyoung
2016-01-05
Many applications in wireless sensor networks (WSNs) require energy consumption to be minimized and the data delivered to the sink within a specific delay. A usual solution for reducing energy consumption is duty cycling, in which nodes periodically switch between sleep and active states. By increasing the duty cycle interval, consumed energy can be reduced more. However, a large duty cycle interval causes a long end-to-end (E2E) packet delay. As a result, the requirement of a specific delay bound for packet delivery may not be satisfied. In this paper, we aim at maximizing the duty cycle while still guaranteeing that the packets arrive at the sink with the required probability, i.e., the required delay-constrained success ratio (DCSR) is achieved. In order to meet this objective, we propose a novel scheduling and forwarding scheme, namely the deadline-aware scheduling and forwarding (DASF) algorithm. In DASF, the E2E delay distribution with the given network model and parameters is estimated in order to determine the maximum duty cycle interval, with which the required DCSR is satisfied. Each node independently selects a wake-up time using the selected interval, and packets are forwarded to a node in the potential forwarding set, which is determined based on the distance between nodes and the sink. DASF does not require time synchronization between nodes, and a node does not need to maintain neighboring node information in advance. Simulation results show that the proposed scheme can satisfy a required delay-constrained success ratio and outperforms existing algorithms in terms of E2E delay and DCSR.
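The core computation in DASF, estimating the end-to-end delay distribution and then picking the largest duty-cycle interval whose delay-constrained success ratio still meets the requirement, can be sketched with a simple Monte Carlo model. The uniform per-hop waiting time (plausible for unsynchronized wake-ups) and all parameter values below are illustrative assumptions, not the paper's analytical derivation.

```python
import numpy as np

rng = np.random.default_rng(1)

def e2e_delay(interval, hops, trials=10_000):
    # per-hop wait ~ Uniform(0, interval) when wake-ups are unsynchronized
    return rng.uniform(0, interval, size=(trials, hops)).sum(axis=1)

def max_interval(deadline, hops, dcsr, candidates):
    best = None
    for T in sorted(candidates):
        ratio = np.mean(e2e_delay(T, hops) <= deadline)
        if ratio >= dcsr:
            best = T          # largest interval still meeting the DCSR
    return best

# e.g., 6 hops, 5 s deadline, 95% required success ratio
print(max_interval(deadline=5.0, hops=6, dcsr=0.95,
                   candidates=[0.5, 1.0, 1.5, 2.0]))
```

Choosing the largest feasible interval maximizes sleep time, and hence minimizes energy consumption, while the delay guarantee is kept probabilistic rather than worst-case.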
Parallel algorithms for mapping pipelined and parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
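A standard instance of this mapping problem is assigning m modules, in order, to n processors so that the maximum per-processor load (the pipeline bottleneck) is minimized. The sketch below solves that contiguous-partition version by binary search over the bottleneck value with a greedy feasibility probe; it is an illustrative textbook formulation with integer weights, not the specific algorithms improved in the paper.

```python
def probe(w, n, cap):
    # can w be split into at most n contiguous groups, each summing <= cap?
    groups, load = 1, 0
    for x in w:
        if x > cap:
            return False
        if load + x > cap:
            groups, load = groups + 1, x
        else:
            load += x
    return groups <= n

def best_bottleneck(w, n):
    lo, hi = max(w), sum(w)
    while lo < hi:                  # binary search on the bottleneck value
        mid = (lo + hi) // 2
        if probe(w, n, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(best_bottleneck([4, 7, 2, 9, 5, 3], 3))   # minimal max processor load
```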
Darmanis, Spyridon; Toms, Andrew; Durman, Robert; Moore, Donna; Eyres, Keith
2007-07-01
The aim was to reduce the operating time in computer-assisted navigated total knee replacement (TKR) by improving communication between the infrared camera and the trackers placed on the patient. The innovation involves placing a routinely used laser pointer on top of the camera, so that the infrared cameras focus precisely on the trackers located on the knee to be operated on. A prospective randomized study was performed involving 40 patients divided into two groups, A and B. Both groups underwent navigated TKR, but for group B patients a laser pointer was used to improve the targeting capabilities of the cameras. Without the laser pointer, the camera had to move a mean of 9.2 times in order to identify the trackers. With the introduction of the laser pointer, this was reduced to 0.9 times. Accordingly, the additional mean time required without the laser pointer was 11.6 minutes. Time delays are a major problem in computer-assisted surgery, and our technical suggestion can contribute towards reducing the delays associated with this particular application.
Advanced Guidance and Control Project for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Hanson, John M.
2000-01-01
The goals of this project are to significantly reduce the time and cost associated with guidance and control design for reusable launch vehicles, and to increase their safety and reliability. Success will lead to reduced cycle times during vehicle design and to reduced costs associated with flying to new orbits, with new payloads, and with modified vehicles. Success will also lead to more robustness to unforeseen circumstances in flight, thereby enhancing safety and reducing risk. There are many guidance and control methods available that hold some promise for improvement in the desired areas. Investigators are developing a representative set of independent guidance and control methods for this project. These methods are being incorporated into a high-fidelity simulation, and evaluation is being conducted across a broad range of flight requirements. The guidance and control methods that perform best will have demonstrated the desired qualities.
Milne, Tony G E; Vather, Ryash; O'Grady, Gregory; Miquel, Jordi; Biondo, Sebastiano; Bissett, Ian
2018-03-06
Gastrografin has been suggested as a rescue therapy for prolonged post-operative ileus (PPOI), but trial data have been inconclusive. This study aimed to determine the benefit of gastrografin use in patients with PPOI by pooling the results of two recent randomized controlled trials assessing the efficacy of gastrografin compared to placebo given at the time of PPOI diagnosis. Anonymized, individual patient data from patients undergoing elective bowel resection for any indication were included; stoma closure was excluded. The primary outcome was duration of PPOI. Secondary outcomes were time to tolerate oral diet, passage of flatus/stool, requirement and duration of nasogastric tube, length of post-operative stay and rate of post-operative complications. Individual patient data were pooled for analysis (53 gastrografin, 55 placebo). Gastrografin trended towards a reduction in PPOI duration compared to placebo (median 96 h, interquartile range (IQR) 78 h, versus median 120 h, IQR 84 h); however, this result was non-significant (P = 0.11). In addition, no significant difference was detected between the two groups for time to passage of flatus/stool (P = 0.36) and overall length of stay (P = 0.35). Gastrografin conferred a significantly faster time to tolerate an oral diet compared to placebo (median 84 h versus median 107 h, P = 0.04). There was no difference in post-operative complications between the two interventions (P > 0.05). Gastrografin did not significantly reduce PPOI duration or length of stay after abdominal surgery, but did reduce time to tolerate a solid diet. Further studies are required to clarify the role of gastrografin in PPOI.
Vidot, Helen; Teevan, Kate; Carey, Sharon; Strasser, Simone; Shackel, Nicholas
2016-03-01
To investigate the prevalence and duration of preprocedural medically ordered fasting during a period of hospitalisation in an Australian population of patients with hepatic cirrhosis or following liver transplantation, and to identify potential solutions to reduce fasting times. Protein-energy malnutrition is a common finding in patients with hepatic cirrhosis and can impact significantly on survival and quality of life. Protein and energy requirements in patients with cirrhosis are higher than those of healthy individuals. A significant feature of cirrhosis is the induction of starvation metabolism following seven to eight hours of food deprivation. Many investigative and interventional procedures for patients with cirrhosis necessitate a period of fasting to comply with anaesthesia guidelines. An observational study of the fasting episodes of 34 hospitalised patients with hepatic cirrhosis or following liver transplantation was conducted. Nutritional status was estimated using subjective global assessment and handgrip strength. The prevalence and duration of fasting for diagnostic or investigational procedures were estimated using electronic records and patient notes. Thirty-three patients (97%) were malnourished. Twenty-two patients (65%) were fasted during the observation period. There were 43 occasions of fasting, with a median fasting time of 13.5 hours. On 40 occasions fasting times exceeded the maximum six-hour limit recommended prior to the administration of anaesthesia by the majority of anaesthesiology societies. The majority of procedures (77%) requiring fasting occurred after midday. Eating breakfast on the day of the procedure reduced fasting time by 45%. Medically ordered preprocedural fasting times almost always exceed existing guidelines in this nutritionally compromised group. Adherence to fasting guidelines and eating breakfast before the procedure can reduce fasting times significantly and avoid the potential induction of starvation metabolism in this nutritionally at-risk group.
Accelerating scientific publication in biology
Vale, Ronald D.
2015-01-01
Scientific publications enable results and ideas to be transmitted throughout the scientific community. The number and type of journal publications also have become the primary criteria used in evaluating career advancement. Our analysis suggests that publication practices have changed considerably in the life sciences over the past 30 years. More experimental data are now required for publication, and the average time required for graduate students to publish their first paper has increased and is approaching the desirable duration of PhD training. Because publication is generally a requirement for career progression, schemes to reduce the time of graduate student and postdoctoral training may be difficult to implement without also considering new mechanisms for accelerating communication of their work. The increasing time to publication also delays potential catalytic effects that ensue when many scientists have access to new information. The time has come for life scientists, funding agencies, and publishers to discuss how to communicate new findings in a way that best serves the interests of the public and the scientific community. PMID:26508643
NASA Astrophysics Data System (ADS)
Tanaka, Kiyoshi; Takano, Shuichi; Sugimura, Tatsuo
2000-10-01
In this work we focus on indexed triangle strips, an extended representation of triangle strips that improves the efficiency of geometrical transformation of vertices, and present a method to construct optimal indexed triangle strips using a Genetic Algorithm (GA) for real-time visualization. The main objective of this work is to optimally construct indexed triangle strips by improving the reuse ratio of data stored in the cache memory while simultaneously reducing the total number of indices with the GA. Simulation results verify that the average number of indices and the cache miss ratio per polygon could be kept small, and consequently the total visualization time required for the optimal solution obtained by this scheme could be markedly reduced.
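To illustrate the optimization target, the sketch below scores a triangle ordering by simulated post-transform vertex-cache misses (the quantity whose reduction shortens visualization time) and evolves orderings with a toy GA. The FIFO cache model, cache size, and swap mutation are illustrative assumptions; the paper's encoding and genetic operators may differ.

```python
import random

def cache_misses(indices, cache_size=16):
    # count misses in a FIFO post-transform vertex cache
    cache, misses = [], 0
    for v in indices:
        if v not in cache:
            misses += 1
            cache.append(v)
            if len(cache) > cache_size:
                cache.pop(0)
    return misses

def mutate(order):
    a, b = random.sample(range(len(order)), 2)
    order = order[:]
    order[a], order[b] = order[b], order[a]   # swap two triangles
    return order

def optimize(triangles, generations=200, pop=30):
    fitness = lambda t: cache_misses([v for tri in t for v in tri])
    population = [random.sample(triangles, len(triangles)) for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[:pop // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop - len(survivors))]
    population.sort(key=fitness)
    return population[0]
```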
Microwash or macrowash technique to maintain a clear cornea during cataract surgery.
Amjadi, Shahriar; Roufas, Athena; Figueira, Edwin C; Bhardwaj, Gaurav; Francis, Katherine E; Masselos, Katherine; Francis, Ian C
2010-09-01
We describe a technique of irrigating and thereby rapidly and effectively clearing the cornea of relatively large amounts of surface contaminants that reduce surgical visibility and may contribute to endophthalmitis. This technique is referred to as "macrowash." If the technique is required, it is usually at the commencement of cataract surgery, immediately after placement of the surgical drape. The technique not only saves time, but also reduces the volume of irrigating solution required by the "microwash" technique, which is traditionally carried out by the scrub nurse/surgical assistant using a Rycroft cannula attached to a 15 mL container of irrigating solution.
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2009-01-01
Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Today's modern aircraft design at transonic speeds is a challenging task due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error. Because of the time required for unsteady CFD computations in the time domain, the whole design process is considerably slowed down. These analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and unsteady aerodynamic approximation. The method requires the unsteady transonic aerodynamics to be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed only for the important columns of an AIC matrix, which correspond to the primary flutter modes. Order reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem, and transonic flutter can then be found by classic methods such as rational function approximation, p-k, p, and root-locus. Such a methodology could be incorporated into the MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2, actually designed, built, and tested at NASA Dryden Flight Research Center. The results from the full order model and the approximate reduced order model are analyzed and compared.
Intelligent routing protocol for ad hoc wireless network
NASA Astrophysics Data System (ADS)
Peng, Chaorong; Chen, Chang Wen
2006-05-01
A novel routing scheme for mobile ad hoc networks (MANETs), which combines hybrid multipath routing with a distributed topology discovery mechanism using control agents, is proposed in this paper. In recent years, a variety of hybrid routing protocols for MANETs have been developed. These protocols proactively maintain routing information for a local neighborhood, while reactively acquiring routes to destinations beyond it. The hybrid approach reduces route discovery latency and end-to-end delay by providing high connectivity without requiring much of the scarce network capacity. On the other hand, hybrid routing protocols such as the Zone Routing Protocol still need route re-discovery time when an inter-zone link breaks, since topology update information must be broadcast as routing requests within the local zone. Due to this delay, such protocols may not be applicable for real-time data and multimedia communication. We utilize the advantages of a clustering organization and multipath routing to achieve several goals at the same time. Firstly, IRP efficiently saves network bandwidth and reduces route reconstruction time when a routing path fails. The IRP protocol does not require global periodic routing advertisements; local control agents automatically monitor and repair broken links. Secondly, it efficiently reduces congestion and traffic "bottlenecks" for cluster heads in the clustered network. Thirdly, it reduces the significant overheads associated with maintaining clusters. Fourthly, it improves cluster stability under frequently changing dynamic topologies. In this paper, we present the Intelligent Routing Protocol. First, we discuss the problem of routing in ad hoc networks and the motivation for IRP. We describe the hierarchical architecture of IRP. We describe the routing process and illustrate it with an example. Further, we describe the control and management mechanisms used to maintain active routes and reduce the traffic generated by the route discovery procedure. Finally, numerical experiments are given to show the effectiveness of the IRP routing protocol.
Pietsch, M; Djahani, O; Zweiger, Ch; Plattner, F; Radl, R; Tschauner, Ch; Hofmann, S
2013-10-01
Recently, new custom-fit pin guides for total knee arthroplasty (TKA) have been introduced. Use of these guides may reduce operating time. Use of the guides, combined with the absence of intramedullary alignment jigs, may lead to reduced blood loss and improved early outcomes. Our aim was to evaluate blood loss and early clinical outcomes in patients undergoing minimally invasive TKA using custom-fit magnetic resonance imaging (MRI)-based pin guides. A prospective study in 80 patients was carried out. Patients were divided randomly into two equal groups. In one group, intramedullary alignment jigs were used. In the second group, custom-fit MRI-based pin guides were used. All patients received the same cemented posterior-stabilized implant through a mini-midvastus approach. The volume in the drain bottles was recorded after 48 h. Hb loss was estimated by subtracting the postoperative from the preoperative Hb level. Transfusion requirements and surgical time were recorded. Outcome measures were Knee Society Scores (KSS), knee flexion, knee swelling and pain. There was lower mean drainage of blood in the custom-fit group (391 ml vs. 603 ml; p < 0.0001). There was no difference in estimated loss of Hb (3.6 g/dl vs. 4.1 g/dl; n.s.) or in transfusion requirements (7.5% vs. 10%; n.s.). Surgical time was reduced in the custom-fit group (12 min less; p = 0.001). KSS measured at weeks 2, 6 and 12 showed no significant difference between groups. Knee flexion measured on days 7 and 10 and at weeks 6 and 12, and knee swelling and pain measured on days 1, 3 and 10 and at weeks 6 and 12, showed no significant differences between groups. Using custom-fit pin guides reduces blood drainage, but not the estimated Hb loss, in minimally invasive TKA and does not affect transfusion rate. Surgical time is reduced. There is no effect on the early clinical outcomes. Therapeutic study, Level I.
Sigoillot, Frederic D; Huckins, Jeremy F; Li, Fuhai; Zhou, Xiaobo; Wong, Stephen T C; King, Randall W
2011-01-01
Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only two features: nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence the timing of cell division.
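Extracting the two features per frame is straightforward with standard image tools. The sketch below uses scikit-image to segment fluorescent nuclei and report area and mean intensity; the fixed global threshold is an illustrative assumption, as the original pipeline's segmentation details are not given here.

```python
from skimage.measure import label, regionprops

def nuclear_features(frame, threshold):
    # frame: 2-D fluorescence image of histone-2B-labeled nuclei
    mask = frame > threshold                 # crude global segmentation
    for region in regionprops(label(mask), intensity_image=frame):
        yield region.area, region.mean_intensity

# heuristic: at mitotic entry, chromatin condensation shrinks the nuclear
# area while the average intensity rises, giving a two-feature signature
# that a simple time-series rule can detect without any training data
```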
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor)
2003-01-01
A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.
Reducing statistical uncertainties in simulated organ doses of phantoms immersed in water
Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.; ...
2016-08-13
In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.
Mass Reduction: The Weighty Challenge for Exploration Space Flight
NASA Technical Reports Server (NTRS)
Kloeris, Vickie L.
2014-01-01
Meeting nutritional and acceptability requirements is critical for the food system of an exploration class space mission. However, this must be achieved within the constraints of available resources such as water, crew time, stowage volume, launch mass and power availability. Due to resource constraints, exploration class missions are not expected to have refrigerators or freezers for food storage, and current per-person food mass must be reduced to improve mission feasibility. The Packaged Food Mass Reduction Trade Study (Stoklosa, 2009) concluded that the mass of the current space food system can be effectively reduced by decreasing the water content of certain foods and offering nutrient-dense substitutes, such as meal replacement bars and beverages. Target nutrient ranges were established based on the nutritional content of the current breakfast and lunch meals in the ISS standard menu. A market survey of available commercial products produced no viable options for meal replacement bar or beverage products. New prototypes for both categories were formulated to meet target nutrient ranges. Samples of prototype products were packaged in high-barrier packaging currently used for ISS and underwent an accelerated shelf life study at 31 °C and 41 °C (50% RH) for 24 weeks. Samples were assessed at the following time points: initial, 6 weeks, 12 weeks, and 24 weeks. Testing at each time point included color, texture, water activity, acceptability, and hexanal analysis (for food bars only). Proof-of-concept prototypes demonstrated that meal replacement food bars and beverages can deliver a comparable macronutrient profile while reducing the overall mass when compared to the ISS standard menu. Future work suggestions for meal replacement bars include reformulation with ingredients that reduce hardness and browning to increase shelf life; micronutrient analysis and potential fortification; and sensory evaluation studies including satiety tests and menu fatigue. Water intake analysis: the water in thermostabilized foods is considered part of a crewmember's daily water intake. Extensive meal replacement would require further analyses to determine whether additional water provisioning would be required per crewmember, negating some of the mass savings.
Requirements and Usage of NVM in Advanced Onboard Data Processing Systems
NASA Technical Reports Server (NTRS)
Some, R.
2001-01-01
This viewgraph presentation gives an overview of the requirements and uses of non-volatile memory (NVM) in advanced onboard data processing systems. Supercomputing in space presents the only viable approach to the bandwidth problem (can't get data down to Earth), controlling constellations of cooperating satellites, reducing mission operating costs, and real-time intelligent decision making and science data gathering. Details are given on the REE vision and impact on NASA and Department of Defense missions, objectives of REE, baseline architecture, and issues. NVM uses and requirements are listed.
Integrated NTP Vehicle Radiation Design
NASA Technical Reports Server (NTRS)
Caffrey, Jarvis A.; Rodriquez, Mitchell A.
2018-01-01
The development of a nuclear thermal propulsion stage requires consideration for radiation emitted from the nuclear reactor core. Applying shielding mass is an effective mitigating solution, but a better alternative is to incorporate some mitigation strategies into the propulsion stage and crew habitat. In this way, the required additional mass is minimized and the mass that must be applied may in some cases be able to serve multiple purposes. Strategies for crew compartment shielding are discussed that reduce dose from both engine and cosmic sources, and in some cases may also serve to reduce life support risks by permitting abundant water reserves. Early consideration for integrated mitigation solutions in a crewed nuclear thermal propulsion (NTP) vehicle will enable reduced radiation burden from both cosmic and nuclear sources, improved thrust-to-weight ratio or payload capacity by reducing 'dead mass' of shielding, and generally support a more robust risk posture for a NTP-powered Mars mission by permitting shorter trip times and increased water reserves.
Wavefront correction with Kalman filtering for the WFIRST-AFTA coronagraph instrument
NASA Astrophysics Data System (ADS)
Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler D.
2015-09-01
The only way to characterize most exoplanets spectrally is via direct imaging. For example, the Coronagraph Instrument (CGI) on the proposed Wide-Field Infrared Survey Telescope-Astrophysics Focused Telescope Assets (WFIRST-AFTA) mission plans to image and characterize several cool gas giants around nearby stars. The integration time on these faint exoplanets will be many hours to days. A crucial assumption for mission planning is that the time required to dig a dark hole (a region of high star-to-planet contrast) with deformable mirrors is small compared to science integration time. The science camera must be used as the wavefront sensor to avoid non-common path aberrations, but this approach can be quite time intensive. Several estimation images are required to build an estimate of the starlight electric field before it can be partially corrected, and this process is repeated iteratively until high contrast is reached. Here we present simulated results of batch process and recursive wavefront estimation schemes. In particular, we test a Kalman filter and an iterative extended Kalman filter (IEKF) to reduce the total exposure time and improve the robustness of wavefront correction for the WFIRST-AFTA CGI. An IEKF or other nonlinear filter also allows recursive, real-time estimation of sources incoherent with the star, such as exoplanets and disks, and may therefore reduce detection uncertainty.
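For intuition, the recursion at the heart of the Kalman approach is the standard predict/update pair applied to the focal-plane electric field. In pairwise DM probing the difference images are, to good approximation, linear in the field, so a linear update applies at each pixel; the IEKF variant relinearizes to handle raw intensities and incoherent sources. The sketch below is a generic single-pixel update under assumed names and noise models, not the WFIRST-AFTA flight algorithm.

```python
import numpy as np

def kalman_update(x, P, y, H, R, Q):
    # x: [Re(E), Im(E)] field estimate at one focal-plane pixel
    # y: probe-image measurements; H: linearized probe response matrix
    P = P + Q                                  # predict: DM probes add noise
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (y - H @ x)                    # correct with new exposures
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Because the filter carries the covariance P forward between correction iterations, fewer probe exposures are needed per iteration than in a batch least-squares estimate, which is the source of the reduced total exposure time.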
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high-performance Linux cluster typically lasting several days. To remove this computational bottleneck we converted our optimization algorithm to run on a graphics processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi GPU from NVIDIA increased the speed ∼180 times over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models of ion channel kinetics on a single, inexpensive desktop "supercomputer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (ordinary differential equations) on the GPU so as to massively reduce memory transfers to and from it. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
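A sketch of the parallelization point the authors emphasize, with NumPy standing in for the GPU: the gating-variable ODEs are integrated for the whole GA population in one batched loop, and only fitness values come back, rather than transferring state every time step. The kinetics and parameters are invented for illustration.

```python
# Batched ODE integration for a whole GA population at once; only the final
# fitness values leave the "device". Kinetics are illustrative, not a real
# channel model.
import numpy as np

def simulate_population(rates, v_trace, dt=0.025):
    """rates: (pop, 2) array of per-individual (alpha, beta) scale factors."""
    pop = rates.shape[0]
    m = np.zeros(pop)                       # one gating variable per individual
    traces = np.empty((len(v_trace), pop))
    for i, v in enumerate(v_trace):         # time loop stays on the "device"
        alpha = rates[:, 0] * np.exp(v / 20.0)
        beta = rates[:, 1] * np.exp(-v / 20.0)
        m = m + dt * (alpha * (1.0 - m) - beta * m)   # forward-Euler, batched
        traces[i] = m
    return traces

v_trace = np.linspace(-80, 40, 400)
target = simulate_population(np.array([[1.0, 0.5]]), v_trace)
pop_rates = np.abs(np.random.default_rng(1).normal(1.0, 0.3, size=(64, 2)))
fitness = np.mean((simulate_population(pop_rates, v_trace) - target) ** 2, axis=0)
print("best individual:", np.argmin(fitness), "mse:", fitness.min())
```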
Studying relaxation phenomena via effective master equations
NASA Astrophysics Data System (ADS)
Chan, David; Wan, Jones T. K.; Chu, L. L.; Yu, K. W.
2000-04-01
The real-time dynamics of various relaxation phenomena can be conveniently formulated by a master equation enumerating the transition rates between given classes of conformations. To study the relaxation time towards equilibrium, it suffices to solve for the second-largest eigenvalue of the resulting eigenvalue equation. Generally speaking, there is no analytic solution of the dynamic equation. Mean-field approaches generally yield misleading results, while the presumably exact Monte Carlo methods require prohibitively many time steps in most real systems. In this work, we propose an exact decimation procedure that reduces the number of conformations significantly with no loss of information, i.e., the reduced (or effective) equation is an exact transformed version of the original one. However, there is a price to pay: the initial Markovianity of the evolution equation is lost and the reduced equation contains memory terms in the transition rates. Since the transformed equation has a significantly reduced number of degrees of freedom, the system can readily be diagonalized by iterative means to obtain the exact second-largest eigenvalue and hence the relaxation time. The decimation method has been applied to various relaxation equations with generally desirable results. The advantages and limitations of the method are discussed.
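A minimal sketch of the eigenvalue step (not the decimation itself): build the master-equation generator for a small illustrative system and read the relaxation time off its second-largest eigenvalue.

```python
# Relaxation time from the second-largest eigenvalue of a master-equation
# generator; the 4-state rate matrix is illustrative.
import numpy as np

W = np.array([[0.0, 0.2, 0.1, 0.0],
              [0.3, 0.0, 0.4, 0.1],
              [0.1, 0.2, 0.0, 0.3],
              [0.0, 0.1, 0.2, 0.0]])   # W[i, j]: transition rate j -> i
M = W - np.diag(W.sum(axis=0))         # generator of dp/dt = M p
eigvals = np.sort(np.linalg.eigvals(M).real)[::-1]
# eigvals[0] ~ 0 (equilibrium); the next one sets the slowest decay mode
tau = -1.0 / eigvals[1]
print("relaxation time:", tau)
```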
MOLECULAR THERMODYNAMICS IN THE DESIGN OF SUBSTITUTE SOLVENTS
The use of physical properties and fluid behavior from molecular thermodynamics can lead to better decision making in the design of substitute solvents and can greatly reduce the expense and time required to find substitutes compared to designing solvents by experiment. This pape...
Development of microsatellite markers in Parthenium ssp.
USDA-ARS?s Scientific Manuscript database
Molecular markers provide the most efficient means to study genetic diversity within and among species of a particular genus. In addition, molecular markers can facilitate breeding efforts by providing tools necessary to reduce the time required to obtain recombinant genotypes with improved agricu...
DOT National Transportation Integrated Search
1994-02-01
The airway facilities (AF) maintenance community is concerned with identifying ways of reducing both the incidence of equipment failure and the amount of time required to restore equipment to operational status following a failure. It is vitally impo...
Research notes: asphalt cement chip seals - how have they done?
DOT National Transportation Integrated Search
2002-05-01
In 1999, the Oregon Department of Transportation (ODOT) joined with Lane, Clackamas, Deschutes and Lincoln Counties to find out if asphalt cement (hot oil) chip seals would perform better and reduce the time required for traffic control after the tre...
High-power transmitter automation, part 2
NASA Technical Reports Server (NTRS)
Gregg, M. A.
1981-01-01
The current status of the transmitter automation development is reported. The work described is applicable to all transmitters in the Deep Space Network. New interface and software designs are described which improve reliability and reduce the time required for subsystem turn on and klystron saturation.
A Report on Army Science Planning and Strategy 2016
2017-06-01
Army Research Laboratory (ARL) hosted a series of meetings in fall 2016 to develop a strategic vision for Army Science. Meeting topics were vetted...reduce maturation time. • Support internal Army research efforts to enhance Army investments in multiscale modeling to accelerate the rate of...requirement are research needs including cross-modal approaches to enabling real-time human comprehension under constraints of bandwidth, information
Analysis of the glow curve of SrB4O7:Dy compounds employing the GOT model
NASA Astrophysics Data System (ADS)
Ortega, F.; Molina, P.; Santiago, M.; Spano, F.; Lester, M.; Caselli, E.
2006-02-01
The glow curve of SrB4O7:Dy phosphors has been analysed with the general one-trap (GOT) model. To solve the differential equation describing the GOT model, a novel algorithm has been employed which significantly reduces the deconvolution time with respect to the time required by usual integration algorithms, such as the Runge-Kutta method.
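An illustrative stand-in for this kind of glow-curve computation, using SciPy's adaptive integrator on first-order Randall-Wilkins kinetics in place of the full GOT equation; the trap depth, frequency factor, and heating rate are made up.

```python
# Solving a glow-curve rate equation with an adaptive integrator.
# Randall-Wilkins first-order kinetics stands in for the GOT equation.
import numpy as np
from scipy.integrate import solve_ivp

k_B = 8.617e-5                  # Boltzmann constant, eV/K
E, s, beta = 1.0, 1e12, 2.0     # trap depth (eV), frequency factor (1/s), heating rate (K/s)

def dndT(T, n):
    # trapped-carrier population vs. temperature at constant heating rate
    return -(s / beta) * n * np.exp(-E / (k_B * T))

sol = solve_ivp(dndT, (300.0, 600.0), [1.0], dense_output=True, rtol=1e-8)
T = np.linspace(300.0, 600.0, 500)
n = sol.sol(T)[0]
I = s * n * np.exp(-E / (k_B * T))   # glow intensity is proportional to -dn/dt
print("peak temperature:", T[np.argmax(I)])
```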
2015-08-01
Defense AT&L: July–August 2015. Removing Bureaucracy. Katharina G. McFarland. McFarland is Assistant Secretary of Defense for Acquisition. I once...involvement from all of the Service warfighting areas came together to scrub the program requirements due to concern over the "bureaucracy" and... Bureaucracy" that focuses on reducing cycle time, staffing time and all forms of inefficiencies. This includes review of those burdens that Congress
An Obstacle Alerting System for Agricultural Application
NASA Technical Reports Server (NTRS)
DeMaio, Joe
2003-01-01
Wire strikes are a significant cause of helicopter accidents. The aircraft most at risk are aerial applicators. The present study examines the effectiveness of a wire alert delivered by way of the lightbar, a GPS-based guidance system for aerial application. The alert lead-time needed to avoid an invisible wire is compared with that needed to avoid a visible wire. A flight simulator was configured to simulate an agricultural application helicopter. Two pilots flew simulated spray runs in fields with visible wires, invisible wires, and no wires. The wire alert was effective in reducing wire strikes. A lead-time of 3.5 sec was required for the alert to be effective. The lead-time required was the same whether or not the pilot could see the wire.
A High Power Solar Electric Propulsion - Chemical Mission for Human Exploration of Mars
NASA Technical Reports Server (NTRS)
Burke, Laura M.; Martini, Michael C.; Oleson, Steven R.
2014-01-01
Recently Solar Electric Propulsion (SEP) as a main propulsion system has been investigated as an option to support manned space missions to near-Earth destinations for the NASA Gateway spacecraft. High-efficiency SEP systems are able to reduce the amount of propellant long-duration chemical missions require, ultimately reducing the required mass delivered to Low Earth Orbit (LEO) by a launch vehicle. However, for long-duration interplanetary Mars missions, using SEP as the sole propulsion source may not be feasible due to the long trip times required to reach and insert into the destination orbit. By combining an SEP propulsion system with a chemical propulsion system, the mission is able to utilize the high-efficiency SEP for sustained vehicle acceleration and deceleration in heliocentric space and the chemical system for orbit insertion maneuvers and trans-Earth injection, eliminating the need for long-duration spirals. By capturing chemically instead of with low-thrust SEP, Mars stay time increases by nearly 200 days. Additionally, the size of the chemical propulsion system can be significantly reduced from that of a standard Mars mission because the SEP system greatly decreases the Mars arrival and departure hyperbolic excess velocities (V(sub infinity)).
Additive Manufacturing of Tooling for Refrigeration Cabinet Foaming Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Post, Brian K; Nuttall, David; Cukier, Michael
The primary objective of this project was to leverage the Big Area Additive Manufacturing (BAAM) process and materials into a long-term, quick-change tooling concept to drastically reduce product lead and development timelines and costs. Current refrigeration foam molds are complicated to manufacture, involving casting several aluminum parts in an approximate shape, machining components of the molds, and post-fitting and shimming of the parts in an articulated fixture. The total process timeline can take over 6 months. The foaming process is slower than required for production; therefore, multiple fixtures, 10 to 27, are required per refrigerator model. Molds are particular to a specific product configuration, making mixed-model assembly challenging for sequencing, mold changes, or auto-changeover features. The initial goal was to create a tool leveraging the ORNL materials and additive process to build a tool in 4 to 6 weeks or less. A secondary goal was to create common fixture cores and provide lightweight fixture sections that could be revised in a very short time to increase equipment flexibility, reduce lead times, lower the barriers to first production trials, and reduce tooling costs.
Determination of Time Required for Materials Exposed to Oxygen to Return to Reduced Flammability
NASA Technical Reports Server (NTRS)
Harper, Susana; Hirsch, David; Smith, Sarah
2009-01-01
Increased material flammability due to exposure to high oxygen concentrations is a concern from both a safety and operational perspective. Localized, high oxygen concentrations can occur when exiting a higher oxygen concentration environment due to material saturation, as well as oxygen entrapment between barrier materials. Understanding of oxygen diffusion and permeation and its correlation to flammability risks can reduce the likelihood of fires while improving procedures as NASA moves to longer missions with increased extravehicular activities in both spacecraft and off-Earth habitats. This paper examines the time required for common spacecraft materials exposed to oxygen to return to reduced flammability after removal from the increased oxygen concentration environment. Specifically, NASA-STD-6001A maximum oxygen concentration testing and ASTM F-1927 permeability testing were performed on Nomex HT90-40, Tiburon Surgical Drape, Cotton, Extravehicular Mobility Unit (EMU) Liquid-Cooled Ventilation Garment, EMU Thermal Comfort Undergarment, EMU Mosite Foam with Spandex Covering, Advanced Crew Escape Suit (ACES) Outer Cross-section, ACES Liquid Cooled Garment (LCG), ACES O2 Hose Material, Minicel Polyethylene Foam, Minicel Polyethylene Foam with Nomex Covering, Pyrell Polyurethane Foam, and Zotek F-30 Foam.
Demiroglu-Zergeroglu, Asuman; Candemir, Gulsife; Turhanlar, Ebru; Sagir, Fatma; Ayvali, Nurettin
2016-12-01
Unrestrained EGFR signalling contributes to the malignant phenotype in a number of cancers, including Malignant Mesotheliomas. The present study was designed to evaluate EGFR-dependent anti-proliferative and apoptotic effects of Gallic acid in transformed mesothelial (MeT-5A) and Malignant Mesothelioma (SPC212) cells. Gallic acid reduced the viability of Malignant Mesothelioma cells in a concentration- and time-dependent manner. However, the viability of mesothelial cells was reduced only at high concentrations and longer time periods. Gallic acid restrained the activation of EGFR, ERK1/2 and AKT proteins and downregulated expression of the Cyclin D and Bcl-2 genes, but upregulated expression of the p21 gene in EGF-induced SPC212 cells. GA induced a transitory G1 arrest and triggered mitochondrial and death-receptor mediated apoptosis, which requires p38MAPK activation. The data provided here indicate that GA is able to inhibit EGFR-dependent proliferation and survival signals and induces p38 pathway-dependent apoptosis in Malignant Mesothelioma cells. On the basis of these experimental findings it is worthwhile to investigate further the biological activity of Gallic acid on other Mesothelioma cell lines harbouring aberrant EGFR signals. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Kesuma, Hendra; Niederkleine, Kris; Schmale, Sebastian; Ahobala, Tejas; Paul, Steffen; Sebald, Johannes
2016-08-01
In this work we design and implement an efficient time synchronization/stamping method for a Wireless Sensor Network inside the Vehicle Equipment Bay (VEB) of the ARIANE 5. The sensor nodes in the network do not require real-time clock (RTC) hardware to store and stamp each measurement performed by the sensors. Only the measurement sequence information, the previous time (clock) information, the measurement data, and the related protocol information are sent back to the Access Point (AP). This leads to less data transmission and less energy and time required by the sensor nodes to operate, and hence to longer battery lifetime. Visible Light Communication (VLC) is used to provide energy, to synchronize time, and to deliver commands to the sensor nodes in the network. By employing a star network topology and using part of a solar cell as the receiver, a conventional RF/infrared receiver can be omitted, reducing the amount of hardware and the energy consumption. An infrared transmitter on the sensor node is deployed to minimize electromagnetic interference in the launcher, and it does not require a complicated circuit in comparison to an RF transmitter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
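A sketch of the variable-time-step idea under simple assumptions: the expensive power-flow solve is skipped whenever the PV and load inputs have barely changed since the last solved step. solve_powerflow() is a hypothetical placeholder, not the paper's solver.

```python
# Variable-time-step QSTS loop: solve only when inputs change enough,
# otherwise carry the last solution forward. All models are illustrative.
import numpy as np

def solve_powerflow(pv, load):
    # placeholder: pretend feeder voltage rises with PV, sags with load
    return 1.0 + 0.05 * pv - 0.04 * load

def run_qsts(pv_profile, load_profile, tol=0.01):
    results, last_inputs, last_solution, solves = [], None, None, 0
    for pv, load in zip(pv_profile, load_profile):
        inputs = np.array([pv, load])
        if last_inputs is None or np.max(np.abs(inputs - last_inputs)) > tol:
            last_solution = solve_powerflow(pv, load)   # expensive call
            last_inputs = inputs
            solves += 1
        results.append(last_solution)
    return results, solves

t = np.linspace(0, 24, 24 * 60)                       # one day, 1-min resolution
pv = np.clip(np.sin((t - 6) / 12 * np.pi), 0, None)   # daylight PV shape
load = 0.6 + 0.2 * np.sin((t - 18) / 24 * 2 * np.pi)
res, solves = run_qsts(pv, load)
print(f"solved {solves} of {len(t)} steps ({1 - solves / len(t):.0%} avoided)")
```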
Sobolev, Boris; Levy, Adrian; Hayden, Robert; Kuramoto, Lisa
2006-01-01
Objective To determine whether the probability of undergoing coronary bypass surgery within a certain time was related to the number of patients on the wait list at registration for the operation in a publicly funded health system. Methods A prospective cohort study comparing waiting times among patients registered on wait lists at the hospitals delivering adult cardiac surgery. For each calendar week, the list size, the number of new registrations, and the number of direct admissions immediately after angiography characterized the demand for surgery. Results The length of delay in undergoing treatment was associated with list size at registration, with shorter times for shorter lists (log-rank test 1,198.3, p<.0001). When the list size at registration required a clearance time of over 1 week, patients had 42 percent lower odds of undergoing surgery compared with lists with a clearance time of less than 1 week (odds ratio [OR] 0.58, 95 percent confidence interval [CI] 0.53–0.63), after adjustment for age, sex, comorbidity, period, and hospital. The weekly number of new registrations exceeding weekly service capacity had an independent effect toward longer service delays when the list size at registration required a clearance time of less than 1 week (OR 0.56, 95 percent CI 0.45–0.71), but not for longer lists. Every time the operation was performed for a patient requiring surgery without registration on wait lists, the odds of surgery for listed patients were reduced by 6 percent (OR 0.94, CI 0.93–0.95). Conclusion For wait-listed patients, time to surgery depends on the list size at registration, the number of new registrations, as well as on the weekly number of patients who move immediately from angiography to coronary bypass surgery without being registered on a wait list. Hospital managers may use these findings to improve resource planning and to reduce uncertainty when providing advice on expected treatment delays. PMID:16430599
Alternative nuclear technologies
NASA Astrophysics Data System (ADS)
Schubert, E.
1981-10-01
The lead times required to develop a select group of nuclear fission reactor types and fuel cycles to the point of readiness for full commercialization are compared. Along with lead times, fuel material requirements and comparative costs of producing electric power were estimated. A conservative approach and consistent criteria for all systems were used in estimating the steps required and the times involved in developing each technology. The impact of the inevitable exhaustion of the low- or reasonable-cost uranium reserves in the United States on the desirability of completing the breeder reactor program, with its favorable long-term effect on fission fuel supplies, is discussed. The long times projected to bring the most advanced alternative converter reactor technologies (the heavy water reactor and the high-temperature gas-cooled reactor) into commercial deployment, when compared to the time projected to bring the breeder reactor into equivalent status, suggest that the country's best choice is to develop the breeder. The perceived diversion-proliferation problems with the uranium-plutonium fuel cycle have workable solutions that can be developed to enable the use of those materials at substantially reduced levels of diversion risk.
Elimination of water pathogens with solar radiation using an automated sequential batch CPC reactor.
Polo-López, M I; Fernández-Ibáñez, P; Ubomba-Jaswa, E; Navntoft, C; García-Fernández, I; Dunlop, P S M; Schmid, M; Byrne, J A; McGuigan, K G
2011-11-30
Solar disinfection (SODIS) of water is a well-known, effective treatment process which is practiced at household level in many developing countries. However, this process is limited by the small volume treated and there is no indication of treatment efficacy for the user. Low-cost glass tube reactors, together with compound parabolic collector (CPC) technology, have been shown to significantly increase the efficiency of solar disinfection. However, these reactors still require user input to control each batch SODIS process and there is no feedback that the process is complete. Automatic operation of the batch SODIS process, controlled by UVA-radiation sensors, can provide information on the status of the process, can ensure that the UVA dose required to achieve complete disinfection is received, and reduces user workload through automatic sequential batch processing. In this work, an enhanced CPC photo-reactor with a concentration factor of 1.89 was developed. The apparatus was automated to achieve exposure to a pre-determined UVA dose. Treated water was automatically dispensed into a reservoir tank. The reactor was tested using Escherichia coli as a model pathogen in natural well water. A 6-log inactivation of E. coli was achieved following exposure to the minimum uninterrupted lethal UVA dose. The enhanced reactor decreased the exposure time required to achieve the lethal UVA dose, in comparison to a CPC system with a concentration factor of 1.0. Doubling the lethal UVA dose prevented the need for a period of post-exposure dark inactivation and reduced the overall treatment time. Using this reactor, SODIS can be carried out automatically at an affordable cost, with reduced exposure time and minimal user input. Copyright © 2011 Elsevier B.V. All rights reserved.
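A sketch of the dose-controlled batch logic under stated assumptions: integrate the UVA sensor signal over time, restart on interruption, and dispense once an uninterrupted lethal dose is reached. The dose threshold and sensor values are illustrative, not the paper's calibration.

```python
# Dose-controlled batch logic: accumulate UVA dose, reset on interruption,
# signal completion at the lethal-dose threshold. All numbers illustrative.
def run_batch(uva_readings_w_m2, dt_s=60.0, lethal_dose_j_m2=140000.0,
              min_irradiance_w_m2=10.0):
    dose = 0.0
    for irradiance in uva_readings_w_m2:
        if irradiance < min_irradiance_w_m2:
            dose = 0.0            # cloud cover: restart the uninterrupted dose
            continue
        dose += irradiance * dt_s
        if dose >= lethal_dose_j_m2:
            return True           # dispense treated water, start next batch
    return False                  # dose not reached; keep exposing

readings = [30.0] * 90            # 90 minutes at 30 W/m^2 -> 162 kJ/m^2
print("batch complete:", run_batch(readings))
```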
Turtle: identifying frequent k-mers with cache-efficient algorithms.
Roy, Rajat Shuvro; Bhattacharya, Debashish; Schliep, Alexander
2014-07-15
Counting the frequencies of k-mers in read libraries is often a first step in the analysis of high-throughput sequencing data. Infrequent k-mers are assumed to be a result of sequencing errors. The frequent k-mers constitute a reduced but error-free representation of the experiment, which can inform read error correction or serve as the input to de novo assembly methods. Ideally, the memory requirement for counting should be linear in the number of frequent k-mers and not in the, typically much larger, total number of k-mers in the read library. We present a novel method that balances time, space and accuracy requirements to efficiently extract frequent k-mers even for high-coverage libraries and large genomes such as human. Our method minimizes cache misses by using a pattern-blocked Bloom filter to remove infrequent k-mers from consideration, in combination with a novel sort-and-compact scheme, instead of a hash, for the actual counting. Although this increases theoretical complexity, the savings in cache misses reduce the empirical running times. A variant of the method can resort to a counting Bloom filter for even larger savings in memory, at the expense of false-negative rates in addition to the false-positive rates common to all Bloom filter-based approaches. A comparison with the state-of-the-art shows reduced memory requirements and running times. The tools are freely available for download at http://bioinformatics.rutgers.edu/Software/Turtle and http://figshare.com/articles/Turtle/791582. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
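A simplified sketch of the two-pass principle: a Bloom filter marks first sightings so the exact counter only tracks k-mers seen at least twice. Turtle's actual pattern-blocked Bloom filter and sort-and-compact scheme are replaced here, as an assumption for brevity, by a plain Bloom filter and a Counter.

```python
# Bloom-filter prefilter + exact counting of repeated k-mers. Sizes are
# illustrative; counts come out as roughly (true frequency - 1).
from collections import Counter

class BloomFilter:
    def __init__(self, n_bits=1 << 20, n_hashes=3):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)
    def _positions(self, item):
        for seed in range(self.n_hashes):
            yield hash((seed, item)) % self.n_bits
    def add(self, item):
        seen = True
        for p in self._positions(item):
            if not (self.bits[p // 8] >> (p % 8)) & 1:
                seen = False
                self.bits[p // 8] |= 1 << (p % 8)
        return seen        # True only if every bit was already set

def count_frequent_kmers(reads, k=5):
    bloom, counts = BloomFilter(), Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if bloom.add(kmer):        # second (or later) sighting
                counts[kmer] += 1
    return counts

reads = ["ACGTACGTACGT", "TTACGTACGAAA"]
print(count_frequent_kmers(reads).most_common(3))
```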
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindley, Benjamin A.; Parks, Geoffrey T.; Franceschini, Fausto
Multiple recycle of long-lived actinides has the potential to greatly reduce the required storage time for spent nuclear fuel or high level nuclear waste. This is generally thought to require fast reactors as most transuranic (TRU) isotopes have low fission probabilities in thermal reactors. Reduced-moderation LWRs are a potential alternative to fast reactors with reduced time to deployment as they are based on commercially mature LWR technology. Thorium (Th) fuel is neutronically advantageous for TRU multiple recycle in LWRs due to a large improvement in the void coefficient. If Th fuel is used in reduced-moderation LWRs, it appears neutronically feasible to achieve full actinide recycle while burning an external supply of TRU, with related potential improvements in waste management and fuel utilization. In this paper, the fuel cycle of TRU-bearing Th fuel is analysed for reduced-moderation PWRs and BWRs (RMPWRs and RBWRs). RMPWRs have the advantage of relatively rapid implementation and intrinsically low conversion ratios. However, it is challenging to simultaneously satisfy operational and fuel cycle constraints. An RBWR may potentially take longer to implement than an RMPWR due to more extensive changes from current BWR technology. However, the harder neutron spectrum can lead to favourable fuel cycle performance. A two-stage fuel cycle, where the first pass is Th-Pu MOX, is a technically reasonable implementation of either concept. The first stage of the fuel cycle can therefore be implemented at relatively low cost as a Pu disposal option, with a further policy option of full recycle in the medium term. (authors)
Ultrafast detection in particle physics and positron emission tomography using SiPMs
NASA Astrophysics Data System (ADS)
Dolenec, R.; Korpar, S.; Križan, P.; Pestotnik, R.
2017-12-01
Silicon photomultiplier (SiPM) photodetectors perform well in many particle and medical physics applications, especially where good efficiency, insensitivity to magnetic field and precise timing are required. In Cherenkov time-of-flight positron emission tomography the requirements for photodetector performance are especially high. On average only a couple of photons are available for detection and the best possible timing resolution is needed. Using SiPMs as photodetectors enables good detection efficiency, but the large sensitive-area devices needed have somewhat limited time resolution for single photons. We have observed an additional degradation of the timing at very low light intensities due to delayed events in the distribution of signals resulting from multiple fired microcells. In this work we present the timing properties of the AdvanSiD ASD-NUV3S-P-40 SiPM under single-photon-level picosecond laser illumination, and a simple modification of the time-walk correction algorithm that reduced the degradation of timing resolution due to the delayed events.
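A sketch of an amplitude-dependent time-walk correction with a simple cut on delayed events; the correction function, constants, and synthetic data are illustrative assumptions, not the modification used in the paper.

```python
# Time-walk correction: shift each threshold-crossing time as a function of
# the number of fired microcells, then reject delayed outliers.
import numpy as np

def time_walk_correct(t_raw_ns, n_cells, a=0.35, b=1.0, delayed_cut_ns=0.5):
    # classic amplitude-dependent walk: larger pulses cross threshold earlier
    t_corr = t_raw_ns - a / np.sqrt(n_cells) - b / n_cells
    # crude handling of delayed multi-cell events: cut outliers rather than
    # letting them broaden the coincidence timing distribution
    median = np.median(t_corr)
    keep = np.abs(t_corr - median) < delayed_cut_ns
    return t_corr[keep]

rng = np.random.default_rng(2)
n_cells = rng.integers(1, 5, size=1000)
t_raw = 0.35 / np.sqrt(n_cells) + 1.0 / n_cells + rng.normal(0, 0.08, 1000)
t_raw[::50] += 1.5                      # inject delayed events
t = time_walk_correct(t_raw, n_cells)
print(f"sigma: {t.std():.3f} ns on {len(t)} of 1000 events")
```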
Chernew, Michael E
2013-05-01
Policy makers have considerable interest in reducing Medicare spending growth. Clarity in the debate on reducing Medicare spending growth requires recognition of three important distinctions: the difference between public and total spending on health, the difference between the level of health spending and rate of health spending growth, and the difference between growth per beneficiary and growth in the number of beneficiaries in Medicare. The primary policy issue facing the US health care system is the rate of spending growth in public programs, and solving that problem will probably require reforms to the entire health care sector. The Affordable Care Act created a projected trajectory for Medicare spending per beneficiary that is lower than historical growth rates. Although opportunities for one-time savings exist, any long-term savings from Medicare, beyond those already forecast, will probably require a shift in spending from taxpayers to beneficiaries via higher beneficiary premium contributions (overall or via means testing), changes in eligibility, or greater cost sharing at the point of service.
Fevre, Marie-Cécile; Vincent, Caroline; Picard, Julien; Vighetti, Arnaud; Chapuis, Claire; Detavernier, Maxime; Allenet, Benoît; Payen, Jean-François; Bosson, Jean-Luc; Albaladejo, Pierre
2018-02-01
Ultrasound (US) guided needle positioning is safer than anatomical landmark techniques for central venous access. Hand-eye coordination and execution time depend on the professional's ability, previous training and personal skills. Needle guidance positioning systems (GPS) may theoretically reduce execution time and facilitate needle positioning in specific targets, thus improving patient comfort and safety. Three groups of healthcare professionals (41 anaesthesiologists and intensivists, 41 residents in anaesthesiology and intensive care, 39 nurse anaesthetists) were included and asked to perform three tasks (positioning the tip of a needle in three different targets in a silicon phantom), using first conventional US-guided needle positioning and then a needle GPS. We measured execution times, hand-eye coordination, and the number of repositioning occurrences or errors in handling the needle or the probe. Without the GPS system, we observed a significant inter-individual difference in execution time (P<0.05), hand-eye coordination, and the number of errors/needle repositionings between physicians, residents and nurse anaesthetists. US training and video gaming were found to be independent factors associated with a shorter execution time. Use of the GPS attenuated the inter-individual and group variability. We observed a reduced execution time and improved hand-eye coordination in all groups as compared to US without GPS. Neither US training, video gaming, nor demographic personal or professional factors were significantly associated with reduced execution time when the GPS was used. US associated with GPS systems may improve safety and decrease execution time by reducing inter-individual variability between professionals for needle-handling procedures. Copyright © 2016 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.
NASA Technical Reports Server (NTRS)
Jaap, John; Muery, Kim
2000-01-01
Scheduling engines are found at the core of software systems that plan and schedule activities and resources. A Request-Oriented Scheduling Engine (ROSE) is one that processes a single request (adding a task to a timeline) and then waits for another request. For the International Space Station, a robust ROSE-based system would support multiple, simultaneous users, each formulating requests (defining scheduling requirements), submitting these requests via the internet to a single scheduling engine operating on a single timeline, and immediately viewing the resulting timeline. ROSE is significantly different from the engine currently used to schedule Space Station operations. The current engine supports essentially one person at a time, with a pre-defined set of requirements from many payloads, working in either a "batch" scheduling mode or an interactive/manual scheduling mode. A planning and scheduling process that takes advantage of the features of ROSE could produce greater customer satisfaction at reduced cost and reduced flow time. This paper describes a possible ROSE-based scheduling process and identifies the additional software component required to support it. Resulting changes to the management and control of the process are also discussed.
Ravazzani, Giovanni; Ghilardi, Matteo; Mendlik, Thomas; Gobiet, Andreas; Corbari, Chiara; Mancini, Marco
2014-01-01
Assessing the future effects of climate change on water availability requires an understanding of how precipitation and evapotranspiration rates will respond to changes in atmospheric forcing. Use of simplified hydrological models is required because of the lack of meteorological forcings with the high space and time resolutions required to model hydrological processes in mountain river basins, and the necessity of reducing computational costs. The main objective of this study was to quantify the differences between a simplified hydrological model, which uses only precipitation and temperature to compute the hydrological balance when simulating the impact of climate change, and an enhanced version of the model, which solves the energy balance to compute the actual evapotranspiration. For the meteorological forcing of the future scenario, at-site bias-corrected time series based on two regional climate models were used. A quantile-based error-correction approach was used to downscale the regional climate model simulations to a point scale and to reduce their error characteristics. The study shows that a simple temperature-based approach for computing the evapotranspiration is sufficiently accurate for performing hydrological impact investigations of climate change for the Alpine river basin which was studied. PMID:25285917
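A minimal sketch of quantile-based error correction (quantile mapping) on synthetic data: each modelled value is mapped through the observed distribution at its empirical quantile. The paper's exact variant may differ.

```python
# Empirical quantile mapping: correct biased model output by matching the
# observed distribution at the same empirical quantile. Data are synthetic.
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    # empirical CDF value of each future datum under the historical model
    quantiles = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # read the same quantile off the observed distribution
    return np.quantile(obs_hist, quantiles)

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 2.0, 5000)            # "observed" daily precipitation
model = rng.gamma(2.0, 2.5, 5000) + 1.0    # biased RCM output, historical
future = rng.gamma(2.2, 2.5, 1000) + 1.0   # biased RCM output, scenario
corrected = quantile_map(model, obs, future)
print(f"bias before: {future.mean() - obs.mean():+.2f}, "
      f"after: {corrected.mean() - obs.mean():+.2f}")
```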
Designing a Multi-Petabyte Database for LSST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becla, Jacek; Hanushevsky, Andrew; Nikolaev, Sergei
2007-01-10
The 3.2 giga-pixel LSST camera will produce approximately half a petabyte of archive images every month. These data need to be reduced in under a minute to produce real-time transient alerts, and then added to the cumulative catalog for further analysis. The catalog is expected to grow about three hundred terabytes per year. The data volume, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require innovative techniques to build an efficient data access system at reasonable cost. As currently envisioned, the system will rely on a database for catalogs and metadata. Several database systems are being evaluated to understand how they perform at these data rates, data volumes, and access patterns. This paper describes the LSST requirements, the challenges they impose, the data access philosophy, results to date from evaluating available database technologies against LSST requirements, and the proposed database architecture to meet the data challenges.
Menzies, Kevin
2014-08-13
The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power coupled to improvements in numerical methods and physical modelling in simulation codes have enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Cationic peptide exposure enhances pulsed-electric-field-mediated membrane disruption.
Kennedy, Stephen M; Aiken, Erik J; Beres, Kaytlyn A; Hahn, Adam R; Kamin, Samantha J; Hagness, Susan C; Booske, John H; Murphy, William L
2014-01-01
The use of pulsed electric fields (PEFs) to irreversibly electroporate cells is a promising approach for destroying undesirable cells. This approach may gain enhanced applicability if the intensity of the PEF required to electrically disrupt cell membranes can be reduced via exposure to a molecular deliverable. This will be particularly impactful if that reduced PEF minimally influences cells that are not exposed to the deliverable. We hypothesized that the introduction of charged molecules to the cell surfaces would create regions of enhanced transmembrane electric potential in the vicinity of each charged molecule, thereby lowering the PEF intensity required to disrupt the plasma membranes. This study will therefore examine if exposure to cationic peptides can enhance a PEF's ability to disrupt plasma membranes. We exposed leukemia cells to 40 μs PEFs in media containing varying concentrations of a cationic peptide, polyarginine. We observed the internalization of a membrane integrity indicator, propidium iodide (PI), in real time. Based on an individual cell's PI fluorescence versus time signature, we were able to determine the relative degree of membrane disruption. When using 1-2 kV/cm, exposure to >50 μg/ml of polyarginine resulted in immediate and high levels of PI uptake, indicating severe membrane disruption, whereas in the absence of peptide, cells predominantly exhibited signatures indicative of no membrane disruption. Additionally, PI entered cells through the anode-facing membrane when exposed to cationic peptide, which was theoretically expected. Exposure to cationic peptides reduced the PEF intensity required to induce rapid and irreversible membrane disruption. Critically, peptide exposure reduced the PEF intensities required to elicit irreversible membrane disruption at normally sub-electroporation intensities. We believe that these cationic peptides, when coupled with current advancements in cell targeting techniques, will be useful tools in applications where targeted destruction of unwanted cell populations is desired.
Characteristic analysis and simulation for polysilicon comb micro-accelerometer
NASA Astrophysics Data System (ADS)
Liu, Fengli; Hao, Yongping
2008-10-01
A high force update rate is a key factor in achieving high-performance haptic rendering, which imposes a stringent real-time requirement on the execution environment of the haptic system. This requirement confines the haptic system to simplified environments in order to reduce the computation cost of haptic rendering algorithms. In this paper, we present a novel "hyper-threading" architecture consisting of several threads for haptic rendering. The high force update rate is achieved with a relatively large computation time interval for each haptic loop. The proposed method was tested and proved effective in experiments on a virtual-wall prototype haptic system using the Delta Haptic Device.
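A toy sketch of the multi-threaded idea: a fast force-update loop runs decoupled from a slower simulation loop, always reading the latest shared state. Rates and the contact-force law are illustrative; a real haptic loop would run at ~1 kHz in a real-time thread, not in Python.

```python
# Decoupled fast force loop and slow simulation loop sharing state.
import threading
import time

state = {"wall_x": 0.0, "stiffness": 500.0}
lock = threading.Lock()

def simulation_loop(stop):
    while not stop.is_set():            # slow thread: update the environment
        with lock:
            state["wall_x"] += 0.001
        time.sleep(0.05)                # ~20 Hz

def force_loop(stop, probe_x=0.01, steps=200):
    force = 0.0
    for _ in range(steps):              # fast thread: compute contact force
        with lock:
            pen = probe_x - state["wall_x"]
            force = -state["stiffness"] * pen if pen > 0 else 0.0
        time.sleep(0.001)               # ~1 kHz target
    stop.set()
    print("last force:", force)

stop = threading.Event()
threading.Thread(target=simulation_loop, args=(stop,)).start()
force_loop(stop)
```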
Amor, Rumelo; McDonald, Alison; Trägårdh, Johanna; Robb, Gillian; Wilson, Louise; Abdul Rahman, Nor Zaihana; Dempster, John; Amos, William Bradshaw; Bushell, Trevor J.; McConnell, Gail
2016-01-01
We demonstrate fluorescence imaging by two-photon excitation without scanning in biological specimens as previously described by Hwang and co-workers, but with an increased field size and with framing rates of up to 100 Hz. During recordings of synaptically-driven Ca2+ events in primary rat hippocampal neurone cultures loaded with the fluorescent Ca2+ indicator Fluo-4 AM, we have observed greatly reduced photo-bleaching in comparison with single-photon excitation. This method, which requires no costly additions to the microscope, promises to be useful for work where high time-resolution is required. PMID:26824845
Rapid prototyping and AI programming environments applied to payload modeling
NASA Technical Reports Server (NTRS)
Carnahan, Richard S., Jr.; Mendler, Andrew P.
1987-01-01
This effort focused on using artificial intelligence (AI) programming environments and rapid prototyping to aid in both manned and unmanned space flight payload simulation and training. Significant problems addressed are the large amount of development time required to design and implement just one of these payload simulations and the relative inflexibility of the resulting model in accepting future modifications. Results of this effort suggest that both rapid prototyping and AI programming environments can significantly reduce development time and cost when applied to the domain of payload modeling for crew training. The techniques employed are applicable to a variety of domains where models or simulations are required.
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
NASA Technical Reports Server (NTRS)
Tuccillo, J. J.
1984-01-01
Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, is discussed; it is fully vectorized and requires substantially less memory than other techniques such as the leapfrog or Adams-Bashforth schemes. The technique presented uses the Euler-backward time-marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms that use only hardware vector instructions.
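A minimal sketch of the Euler-backward (Matsuno) step on an illustrative oscillation equation: a forward-Euler predictor followed by a corrector evaluated at the predicted state. Unlike leapfrog or Adams-Bashforth, it needs no stored history levels, which is the memory saving exploited here.

```python
# Euler-backward (Matsuno) predictor-corrector step; test ODE is illustrative.
import numpy as np

def matsuno_step(u, f, dt):
    u_star = u + dt * f(u)        # predictor (forward Euler)
    return u + dt * f(u_star)     # corrector (tendency at predicted state)

omega = 1.0
f = lambda u: np.array([u[1], -omega**2 * u[0]])   # harmonic oscillator

u, dt = np.array([1.0, 0.0]), 0.01
for _ in range(int(2 * np.pi / dt)):               # integrate one period
    u = matsuno_step(u, f, dt)
print("after one period:", u)                      # ~ [1, 0], slightly damped
```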
The effect of atmospheric drag on the design of solar-cell power systems for low Earth orbit
NASA Technical Reports Server (NTRS)
Kyser, A. C.
1983-01-01
The feasibility of reducing the atmospheric drag of low-orbit solar-powered satellites by operating the solar-cell array in a minimum-drag attitude, rather than in the conventional Sun-pointing attitude, was assessed. The weights of the solar array, the energy storage batteries, and the fuel required to overcome the drag of the solar array were considered for a range of design lifetimes in orbit. The drag of the array was estimated by free-molecule flow theory, and the system weights were calculated from unit weight estimates for 1990 technology. The trailing, minimum-drag system was found to require 80% more solar array area and 30% more battery capacity; the system weights for reasonable lifetimes were dominated by the thruster fuel requirements.
Technology requirements for communication satellites in the 1980's
NASA Technical Reports Server (NTRS)
Burtt, J. E.; Moe, C. R.; Elms, R. V.; Delateur, L. A.; Sedlacek, W. C.; Younger, G. G.
1973-01-01
The key technology requirements are defined for meeting the forecasted demands for communication satellite services in the 1985 to 1995 time frame. Evaluation is made of needs for services and technical and functional requirements for providing services. The future growth capabilities of the terrestrial telephone network, cable television, and satellite networks are forecasted. The impact of spacecraft technology and booster performance and costs upon communication satellite costs are analyzed. Systems analysis techniques are used to determine functional requirements and the sensitivities of technology improvements for reducing the costs of meeting requirements. Recommended development plans and funding levels are presented, as well as the possible cost saving for communications satellites in the post 1985 era.
Reduced-Order Aerothermoelastic Analysis of Hypersonic Vehicle Structures
NASA Astrophysics Data System (ADS)
Falkiewicz, Nathan J.
Design and simulation of hypersonic vehicles require consideration of a variety of disciplines due to the highly coupled nature of the flight regime. In order to capture all of the potential effects on vehicle dynamics, one must consider the aerodynamics, aerodynamic heating, heat transfer, and structural dynamics as well as the interactions between these disciplines. The problem is further complicated by the large computational expense involved in capturing all of these effects and their interactions in a full-order sense. While high-fidelity modeling techniques exist for each of these disciplines, the use of such techniques is computationally infeasible in a vehicle design and control system simulation setting for such a highly coupled problem. Early in the design stage, many iterations of analyses may need to be carried out as the vehicle design matures, thus requiring quick analysis turnaround time. Additionally, the number of states used in the analyses must be small enough to allow for efficient control simulation and design. As a result, alternatives to full-order models must be considered. This dissertation presents a fully coupled, reduced-order aerothermoelastic framework for the modeling and analysis of hypersonic vehicle structures. The reduced-order transient thermal solution is a modal solution based on the proper orthogonal decomposition. The reduced-order structural dynamic model is based on projection of the equations of motion onto a Ritz modal subspace that is identified a priori. The reduced-order models are assembled into a time-domain aerothermoelastic simulation framework which uses a partitioned time-marching scheme to account for the disparate time scales of the associated physics. The aerothermoelastic modeling framework is outlined and the formulations associated with the unsteady aerodynamics, aerodynamic heating, transient thermal, and structural dynamics are outlined. Results demonstrate the accuracy of the reduced-order transient thermal and structural dynamic models under variation in boundary conditions and flight conditions. The framework is applied to representative hypersonic vehicle control surface structures and a variety of studies are conducted to assess the impact of aerothermoelastic effects on hypersonic vehicle dynamics. The results presented in this dissertation demonstrate the ability of the proposed framework to perform efficient aerothermoelastic analysis.
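A minimal sketch of the proper-orthogonal-decomposition step on a toy 1-D conduction problem: snapshots from a full-order simulation are compressed by the SVD into a small basis, and the system is projected onto it. The model and sizes are illustrative, not the panel structures in the dissertation.

```python
# POD-based reduced-order model of a linear thermal system via snapshot SVD.
import numpy as np

n, r = 200, 8
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # dT/dt = A @ T (scaled)
T0 = np.exp(-((np.arange(n) - 100) / 15.0) ** 2)          # initial hot spot

# collect full-order snapshots with explicit Euler
dt, snaps, T = 0.2, [], T0.copy()
for _ in range(300):
    T = T + dt * (A @ T)
    snaps.append(T.copy())
U, _, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
Phi = U[:, :r]                       # POD basis: first r left singular vectors

# reduced-order model: a = Phi^T T,  da/dt = (Phi^T A Phi) a
Ar = Phi.T @ A @ Phi
a = Phi.T @ T0
for _ in range(300):
    a = a + dt * (Ar @ a)
err = np.linalg.norm(Phi @ a - T) / np.linalg.norm(T)
print(f"relative ROM error with {r} modes: {err:.2e}")
```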
NASA Technical Reports Server (NTRS)
Jaap, John; Meyer, Patrick; Davis, Elizabeth
1997-01-01
The experiments planned for the International Space Station promise to be complex, lengthy and diverse. The scarcity of the space station resources will cause significant competition for resources between experiments. The scheduling job facing the Space Station mission planning software requires a concise and comprehensive description of the experiments' requirements (to ensure a valid schedule) and a good description of the experiments' flexibility (to effectively utilize available resources). In addition, the continuous operation of the station, the wide geographic dispersion of station users, and the budgetary pressure to reduce operations manpower make a low-cost solution mandatory. A graphical representation of the scheduling requirements for station payloads implemented via an Internet-based application promises to be an elegant solution that addresses all of these issues. The graphical representation of experiment requirements permits a station user to describe his experiment by defining "activities" and "sequences of activities". Activities define the resource requirements (with alternatives) and other quantitative constraints of tasks to be performed. Activities definitions use an "outline" graphics paradigm. Sequences define the time relationships between activities. Sequences may also define time relationships with activities of other payloads or space station systems. Sequences of activities are described by a "network" graphics paradigm. The bulk of this paper will describe the graphical approach to representing requirements and provide examples that show the ease and clarity with which complex requirements can be represented. A Java applet, to run in a web browser, is being developed to support the graphical representation of payload scheduling requirements. Implementing the entry and editing of requirements via the web solves the problems introduced by the geographic dispersion of users. Reducing manpower is accomplished by developing a concise representation which eliminates the misunderstanding possible with verbose representations and which captures the complete requirements and flexibility of the experiments.
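A hypothetical sketch of the activities-and-sequences representation described above; every field name here is invented for illustration, not taken from the paper's schema.

```python
# Activities carry resource requirements (with alternatives); sequences carry
# time relationships between activities.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    duration_min: int
    resources: dict[str, float]                 # resource -> amount required
    alternatives: list[dict[str, float]] = field(default_factory=list)

@dataclass
class Sequence:
    before: str                                 # activity names
    after: str
    min_gap_min: int = 0                        # time-relationship constraints
    max_gap_min: int | None = None

experiment = {
    "activities": [
        Activity("warmup", 30, {"power_kw": 0.5}),
        Activity("sample_run", 120, {"power_kw": 1.2, "crew": 1},
                 alternatives=[{"power_kw": 0.8, "crew": 2}]),
    ],
    "sequences": [Sequence("warmup", "sample_run", min_gap_min=0, max_gap_min=15)],
}
print(experiment["sequences"][0])
```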
Oxidation kinetics of conducting polymers: poly-3,4-ethylenedioxythiophene
NASA Astrophysics Data System (ADS)
Caballero Romero, Maria
Films of poly-3,4-ethylenedioxythiophene (PEDOT) perchlorate used as electrodes in liquid electrolytes incorporate anions and solvent during oxidation for charge and osmotic balance: the film swells. During reduction the film shrinks and closes its structure, trapping counterions and reaching increasingly packed conformational states by expulsion of counterions and solvent. Here, by potential step from the same reduced initial state to the same oxidized final state, the rate coefficient, the activation energy, and the reaction orders related to the counterion concentration in solution and to the concentration of active centers in the polymer film were obtained following the usual methodology of chemical and electrochemical kinetics. The full methodology was then repeated using a different reduced-shrunk or reduced-conformationally-compacted initial state each time. Those initial states were attained by reduction of the oxidized film at increasingly cathodic potentials for the same reduction time. Increasingly reduced and conformationally compacted initial states give slower subsequent oxidation rates on potential steps to the same anodic potential. The activation energy, the rate coefficient, and the reaction orders change for increasingly compacted initial states: decreasing rate constants and increasing activation energies are obtained for PEDOT oxidation from increasingly compacted initial states. The experimental activation energy presents two linear ranges as a function of the initial reduced-compacted state. Using as initial states for the oxidation open structures attained by reduction at low cathodic potentials, the activation energies obtained were constant: namely, the chemical activation energy. Using as initial states deeper reduced, closed, and packed conformational structures, the activation energy includes two components: the constant chemical energy plus the conformational energy required to relax the conformational structure, generating the free volume which allows the entrance of the balancing counterions required for the reaction. The conformational energy increases linearly as a function of the reduction-compaction potential. The kinetic magnitudes thus include conformational and structural information: Chemical Kinetics becomes Structural (or conformational) Chemical Kinetics.
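A minimal sketch of how an activation energy is extracted from rate coefficients via an Arrhenius fit, ln k = ln A - Ea/(RT), on synthetic data; the rates and temperatures are illustrative.

```python
# Arrhenius fit: activation energy from the slope of ln k vs. 1/T.
import numpy as np

R = 8.314                              # gas constant, J/(mol K)
T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])
k = 1e6 * np.exp(-45000.0 / (R * T))   # synthetic rate coefficients, Ea = 45 kJ/mol
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
print(f"Ea = {-slope * R / 1000:.1f} kJ/mol, ln A = {intercept:.2f}")
```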
High-power transmitter automation. [deep space network
NASA Technical Reports Server (NTRS)
Gosline, R.
1980-01-01
The current status of the transmitter automation development applicable to all transmitters in the deep space network is described. Interface and software designs are described that improve reliability and reduce the time required for subsystem turn-on and klystron saturation to less than 10 minutes.
Verdant: automated annotation, alignment and phylogenetic analysis of whole chloroplast genomes.
McKain, Michael R; Hartsock, Ryan H; Wohl, Molly M; Kellogg, Elizabeth A
2017-01-01
Chloroplast genomes are now produced in the hundreds for angiosperm phylogenetics projects, but current methods for annotation, alignment and tree estimation still require some manual intervention, reducing throughput and increasing analysis time for large chloroplast systematics projects. Verdant is a web-based software suite and database built to take advantage of a novel annotation program, annoBTD. Using annoBTD, Verdant provides accurate annotation of chloroplast genomes without manual intervention. Subsequent alignment and tree estimation can incorporate newly annotated and publicly available plastomes and can accommodate a large number of taxa. Verdant sharply reduces the time required for analysis of assembled chloroplast genomes and removes the need for pipelines and software on personal hardware. Verdant is available at http://verdant.iplantcollaborative.org/plastidDB/. It is implemented in PHP, Perl, MySQL, Javascript, HTML and CSS, with all major browsers supported. Contact: mrmckain@gmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
High-performance etching of multilevel phase-type Fresnel zone plates with large apertures
NASA Astrophysics Data System (ADS)
Guo, Chengli; Zhang, Zhiyu; Xue, Donglin; Li, Longxiang; Wang, Ruoqiu; Zhou, Xiaoguang; Zhang, Feng; Zhang, Xuejun
2018-01-01
To ensure the etching depth uniformity of large-aperture Fresnel zone plates (FZPs) with controllable depths, a combination of a point source ion beam with a dwell-time algorithm has been proposed. According to the obtained distribution of the removal function, the latter can be used to optimize the etching time matrix by minimizing the root-mean-square error between the simulation results and the design value. Owing to the convolution operation in the utilized algorithm, the etching depth error is insensitive to the etching rate fluctuations of the ion beam, thereby reducing the requirement for the etching stability of the ion system. As a result, a 4-level FZP with a circular aperture of 300 mm was fabricated. The obtained results showed that the etching depth uniformity of the full aperture could be reduced to below 1%, which was sufficiently accurate for meeting the use requirements of FZPs. The proposed etching method may serve as an alternative way of etching high-precision diffractive optical elements with large apertures.
von Renteln, Daniel; Schmidt, Arthur; Riecken, Bettina; Caca, Karel
2010-05-01
Endoscopic full-thickness plication allows transmural suturing at the gastroesophageal junction to recreate the antireflux barrier. Multichannel intraluminal impedance monitoring (MII) can be used to detect nonacid or weakly acidic reflux, acidic swallows, and esophageal clearance time. This study used MII to evaluate the outcome of endoscopic full-thickness plication. In this study, 12 consecutive patients requiring maintenance proton pump inhibitor therapy underwent endoscopic full-thickness plication for treatment of gastroesophageal reflux disease. With patients off medication, MII was performed before and 6 months after endoscopic full-thickness plication. The total median number of reflux episodes was significantly reduced from 105 to 64 (p = 0.016). The median number of acid reflux episodes decreased from 73 to 43 (p = 0.016). Nonacid reflux episodes decreased from 23 to 21 (p = 0.306). The median bolus clearance time was 12 s before treatment and 11 s at 6 months (p = 0.798). The median acid exposure time was reduced from 6.8% to 3.4% (p = 0.008), and the DeMeester scores were reduced from 19 to 12 (p = 0.008). Endoscopic full-thickness plication significantly reduced total reflux episodes, acid reflux episodes, and total reflux exposure time. The DeMeester scores and total acid exposure time for the distal esophagus were significantly improved. No significant changes in nonacid reflux episodes and median bolus clearance time were encountered.
A variational eigenvalue solver on a photonic quantum processor
Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L.
2014-01-01
Quantum computers promise to efficiently solve important problems that are intractable on a conventional computer. For quantum systems, where the physical dimension grows exponentially, finding the eigenvalues of certain operators is one such intractable problem and remains a fundamental challenge. The quantum phase estimation algorithm efficiently finds the eigenvalue of a given eigenvector but requires fully coherent evolution. Here we present an alternative approach that greatly reduces the requirements for coherent evolution and combine this method with a new approach to state preparation based on ansätze and classical optimization. We implement the algorithm by combining a highly reconfigurable photonic quantum processor with a conventional computer. We experimentally demonstrate the feasibility of this approach with an example from quantum chemistry—calculating the ground-state molecular energy for He–H+. The proposed approach drastically reduces the coherence time requirements, enhancing the potential of quantum resources available today and in the near future. PMID:25055053
Adaptive management of rangeland systems
Allen, Craig R.; Angeler, David G.; Fontaine, Joseph J.; Garmestani, Ahjond S.; Hart, Noelle M.; Pope, Kevin L.; Twidwell, Dirac
2017-01-01
Adaptive management is an approach to natural resource management that uses structured learning to reduce uncertainties for the improvement of management over time. The origins of adaptive management are linked to ideas of resilience theory and complex systems. Rangeland management is particularly well suited for the application of adaptive management, having sufficient controllability and reducible uncertainties. Adaptive management applies the tools of structured decision making and requires monitoring, evaluation, and adjustment of management. Adaptive governance, involving sharing of power and knowledge among relevant stakeholders, is often required to address conflict situations. Natural resource laws and regulations can present a barrier to adaptive management when requirements for legal certainty are met with environmental uncertainty. However, adaptive management is possible, as illustrated by two cases presented in this chapter. Despite challenges and limitations, when applied appropriately adaptive management leads to improved management through structured learning, and rangeland management is an area in which adaptive management shows promise and should be further explored.
Nassoiy, Sean P; Babu, Favin S; LaPorte, Heather M; Byron, Kenneth L; Majetschak, Matthias
2018-04-27
Recently, we demonstrated that Kv7 voltage-activated potassium channel inhibitors reduce fluid resuscitation requirements in short-term rat models of haemorrhagic shock. The aim of the present study was to further delineate the therapeutic potential and side effect profile of the Kv7 channel blocker linopirdine in various rat models of severe haemorrhagic shock over clinically relevant time periods. Intravenous administration of linopirdine, either before (1 or 3 mg/kg) or after (3 mg/kg) a 40% blood volume haemorrhage, did not affect blood pressure and survival in lethal haemorrhage models without fluid resuscitation. A single bolus of linopirdine (3 mg/kg) at the beginning of fluid resuscitation after haemorrhagic shock transiently reduced early fluid requirements in spontaneously breathing animals that were resuscitated for 3.5 hours. When mechanically ventilated rats were resuscitated after haemorrhagic shock with normal saline (NS) or with linopirdine-supplemented (10, 25 or 50 μg/mL) NS for 4.5 hours, linopirdine significantly and dose-dependently reduced fluid requirements by 14%, 45% and 55%, respectively. Lung and colon wet/dry weight ratios were reduced with linopirdine (25/50 μg/mL). There was no evidence for toxicity or adverse effects based on measurements of routine laboratory parameters and inflammation markers in plasma and tissue homogenates. Our findings support the concept that linopirdine-supplementation of resuscitation fluids is a safe and effective approach to reduce fluid requirements and tissue oedema formation during resuscitation from haemorrhagic shock. © 2018 John Wiley & Sons Australia, Ltd.
Design Change Model for Effective Scheduling Change Propagation Paths
NASA Astrophysics Data System (ADS)
Zhang, Hai-Zhu; Ding, Guo-Fu; Li, Rong; Qin, Sheng-Feng; Yan, Kai-Yin
2017-09-01
Changes in requirements may increase product development project cost and lead time; it is therefore important to understand how requirement changes propagate in the design of complex product systems and to be able to select the best options to guide design. Most current approaches to design change fail to take the multi-disciplinary coupling relationships and the number of parameters into account in an integrated way. A new design change model is presented to systematically analyze and search change propagation paths. Firstly, a PDS-Behavior-Structure-based design change model is established to describe how requirement changes cause design change propagation in the behavior and structure domains. Secondly, a multi-disciplinary oriented behavior matrix is utilized to support change propagation analysis of complex product systems, and the interaction relationships of the matrix elements are used to obtain an initial set of change paths. Finally, a rough set-based propagation space reducing tool is developed to assist in narrowing change propagation paths by computing the importance of the design change parameters. The proposed new design change model and its associated tools have been demonstrated on the scheduling of change propagation paths for a high speed train's bogie to show their feasibility and effectiveness. The model not only supports rapid response to diversified market requirements, but also helps satisfy customer requirements and reduce product development lead time. It can be applied to a wide range of engineering systems design with improved efficiency.
2018-03-22
generators by not running them as often and reducing wet-stacking. Force Projection: If the IPDs of the microgrid replace, but don’t add to, the number...decrease generator run time, reduce fuel consumption, enable silent operation, and provide power redundancy for military applications. Important...it requires some failsafe features – run out of water, drive out of the sun. o Integration was a challenge; series of valves to run this experiment
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
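To make the bias mechanism concrete, here is a minimal Python sketch of the plain maximum-likelihood fit of a cumulative Gaussian psychometric function, the baseline estimator whose spread parameter the paper shows to be biased on adaptively sampled data (the abstract itself mentions Nelder-Mead as one numeric option). The stimuli, responses, and parameter values are synthetic illustrations, not from the study, and this is not the authors' bias-reduced variant.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, stimuli, responses):
    mu, sigma = params                     # threshold (mean) and spread
    p = norm.cdf(stimuli, loc=mu, scale=abs(sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)         # guard against log(0)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# synthetic data standing in for an adaptive staircase (assumed values)
rng = np.random.default_rng(0)
stimuli = rng.uniform(-3, 3, 200)
responses = (rng.uniform(size=200) < norm.cdf(stimuli, 0.5, 1.2)).astype(float)

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0],
               args=(stimuli, responses), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
print(f"threshold ~ {mu_hat:.2f}, spread ~ {abs(sigma_hat):.2f}")
```

On truly adaptive (serially dependent) data, repeated fits of this kind exhibit the spread bias the paper corrects.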
Reduced kernel recursive least squares algorithm for aero-engine degradation prediction
NASA Astrophysics Data System (ADS)
Zhou, Haowen; Huang, Jinquan; Lu, Feng
2017-10-01
Kernel adaptive filters (KAFs) generate a radial basis function (RBF) network that grows linearly with the number of training samples, and therefore lack sparseness. To deal with this drawback, traditional sparsification techniques select a subset of the original training data based on a certain criterion to train the network and discard the redundant data directly. Although these methods curb the growth of the network effectively, the information conveyed by the redundant samples is omitted, which may lead to accuracy degradation. In this paper, we present a novel online sparsification method which requires much less training time without sacrificing accuracy. Specifically, a reduced kernel recursive least squares (RKRLS) algorithm is developed based on the reduction technique and linear independence. Unlike conventional methods, our methodology employs the redundant data to update the coefficients of the existing network. Due to the effective utilization of the redundant data, the novel algorithm achieves better accuracy although the network size is significantly reduced. Experiments on time series prediction and online regression demonstrate that the RKRLS algorithm requires much less computation while maintaining satisfactory accuracy. Finally, we propose an enhanced multi-sensor prognostic model based on RKRLS and a Hidden Markov Model (HMM) for remaining useful life (RUL) estimation. A case study on a turbofan degradation dataset is performed to evaluate the performance of the novel prognostic approach.
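A hypothetical sketch of the core idea follows: keep the dictionary small, but let redundant samples update the existing coefficients rather than being discarded. The novelty test below is a simple kernel-coherence surrogate for the paper's linear-independence criterion, and all parameter values (nu, lam, gamma) are assumptions, not the published algorithm.

```python
import numpy as np

def rbf(x, y, gamma=2.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

class ReducedKRLS:
    """Sketch: grow the dictionary only when a sample is novel; redundant
    samples still update the coefficients via an RLS step."""
    def __init__(self, nu=0.1, lam=1e-2, gamma=2.0):
        self.nu, self.lam, self.gamma = nu, lam, gamma
        self.dict_, self.w, self.P = [], None, None

    def _features(self, x):
        return np.array([rbf(x, d, self.gamma) for d in self.dict_])

    def predict(self, x):
        return 0.0 if self.w is None else float(self._features(x) @ self.w)

    def update(self, x, y):
        if not self.dict_:
            self.dict_.append(x)
            self.w, self.P = np.zeros(1), np.eye(1) / self.lam
        phi = self._features(x)
        if np.max(phi) < 1 - self.nu:        # crude coherence-based novelty test
            self.dict_.append(x)
            n = len(self.dict_)
            self.w = np.append(self.w, 0.0)
            P = np.eye(n) / self.lam; P[:n-1, :n-1] = self.P; self.P = P
            phi = self._features(x)
        Pphi = self.P @ phi                  # RLS update, redundant or not
        k = Pphi / (1.0 + phi @ Pphi)
        self.w += k * (y - phi @ self.w)
        self.P -= np.outer(k, Pphi)

model = ReducedKRLS()
xs = np.linspace(0.0, 6.0, 300)
for xv, yv in zip(xs, np.sin(xs)):
    model.update(np.array([xv]), yv)
print(len(model.dict_), model.predict(np.array([3.0])))  # small dictionary, ~sin(3)
```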
NASA Astrophysics Data System (ADS)
Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.
2012-10-01
We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.
Reducing the computational footprint for real-time BCPNN learning
Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian
2015-01-01
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More important, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
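The event-driven formulation the authors derive analytically can be illustrated for a single low-pass trace: instead of decaying the state at every Euler step, the closed-form exponential is applied only when a spike arrives. The time constant, increment, and spike times below are illustrative choices, not values from the paper.

```python
import math

# Event-driven update of one exponential trace: decay analytically over
# the inter-spike interval, then add the spike increment.
def update_trace(z, t_last, t_spike, tau=20.0, increment=1.0):
    z = z * math.exp(-(t_spike - t_last) / tau)   # closed-form decay
    return z + increment, t_spike

z, t_last = 0.0, 0.0
for t_spike in [5.0, 12.0, 40.0, 41.0]:           # spike times in ms (invented)
    z, t_last = update_trace(z, t_last, t_spike)
    print(f"t = {t_spike:5.1f} ms, trace = {z:.3f}")
```

In a fixed-point hardware setting, the exp() call would be replaced by the look-up table the paper describes.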
Speeding up nuclear magnetic resonance spectroscopy by the use of SMAll Recovery Times - SMART NMR
NASA Astrophysics Data System (ADS)
Vitorge, Bruno; Bodenhausen, Geoffrey; Pelupessy, Philippe
2010-11-01
A drastic reduction of the time required for two-dimensional NMR experiments can be achieved by reducing or skipping the recovery delay between successive experiments. Novel SMAll Recovery Times (SMART) methods use orthogonal pulsed field gradients in three spatial directions to select the desired pathways and suppress interference effects. Two-dimensional spectra of dilute amino acids with concentrations as low as 2 mM can be recorded in about 0.1 s per increment in the indirect domain.
Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.
Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene
2016-11-01
Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns, however heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.
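As a rough illustration of the underlying computation (not the authors' parallel implementation), a space-time kernel density estimate at a single evaluation point might look like the sketch below; the Epanechnikov kernels are a common choice, and the bandwidths, units, and event data are assumptions. The parallelization in the paper comes from decomposing the space-time domain and distributing evaluation points like this one across processors.

```python
import numpy as np

def stkde(x, y, t, events, hs=500.0, ht=7.0):
    """Space-time KDE at one point; events is an (N, 3) array of (x, y, t)."""
    ex, ey, et = events[:, 0], events[:, 1], events[:, 2]
    ds2 = ((x - ex) ** 2 + (y - ey) ** 2) / hs ** 2
    dt2 = ((t - et) / ht) ** 2
    ks = np.where(ds2 < 1, 2 / np.pi * (1 - ds2), 0.0)   # 2-D spatial kernel
    kt = np.where(dt2 < 1, 0.75 * (1 - dt2), 0.0)        # 1-D temporal kernel
    return np.sum(ks * kt) / (len(events) * hs ** 2 * ht)

rng = np.random.default_rng(1)
events = np.column_stack([rng.uniform(0, 5000, 300),     # synthetic cases
                          rng.uniform(0, 5000, 300),
                          rng.uniform(0, 365, 300)])
print(stkde(2500.0, 2500.0, 180.0, events))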
Flaxman, Abraham D; Stewart, Andrea; Joseph, Jonathan C; Alam, Nurul; Alam, Sayed Saidul; Chowdhury, Hafizur; Mooney, Meghan D; Rampatige, Rasika; Remolador, Hazel; Sanvictores, Diozele; Serina, Peter T; Streatfield, Peter Kim; Tallo, Veronica; Murray, Christopher J L; Hernandez, Bernardo; Lopez, Alan D; Riley, Ian Douglas
2018-02-01
There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce time and cost of going from collected data to actionable information. We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated costs associated with conducting large-scale surveillance with verbal autopsy. We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months. For surveys collected on electronic tablets, this was less than 2 days. For small-scale surveys, we found that the upfront costs of purchasing electronic tablets was the primary cost and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection. As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For long-term, large-scale surveillance required by national vital statistical systems, electronic data capture reduces costs and allows data to be available sooner.
Keena, M A
2016-02-01
Mode of inheritance of hatch traits in Lymantria dispar L. was determined by crossing populations nearly fixed for the phenotypic extremes. The nondiapausing phenotype was inherited via a single recessive gene and the phenotype with reduced low temperature exposure requirements before hatch was inherited via a single dominant gene. There was no evidence for sex-linkage or cytoplasmic effects with either gene. Eggs from 43 geographic populations were evaluated for hatch characteristics after being held for 60 d at 5°C followed by incubation at 25°C. There was considerable variation both within and among the populations in the proportion able to hatch, time to first hatch, and average time to hatch. Egg masses with reduced requirement for low temperatures before the eggs were ready to hatch were present in all subspecies of L. dispar and the phenotype was not fixed in most populations. The populations clustered into three distinct groups, and climatic variables were found to be rough predictors of those groups. Variation in hatch phenotypes between populations is likely an adaptation to local climate and within a population provides a bet-hedging strategy to ensure that at least some hatch synchronizes with host leaf-out. Continued vigilance to prevent movement of populations both within and between countries is warranted, because some of the alleles that confer nondiapause or reduced low temperature requirements before egg hatch are not present in all populations and their introduction would increase variation in egg hatch within a population. Published by Oxford University Press on behalf of Entomological Society of America 2015. This work is written by a US Government employee and is in the public domain in the US.
High-resolution low-dose scanning transmission electron microscopy.
Buban, James P; Ramasse, Quentin; Gipson, Bryant; Browning, Nigel D; Stahlberg, Henning
2010-01-01
During the past two decades instrumentation in scanning transmission electron microscopy (STEM) has pushed toward higher intensity electron probes to increase the signal-to-noise ratio of recorded images. While this is suitable for robust specimens, biological specimens require a much reduced electron dose for high-resolution imaging. We describe here protocols for low-dose STEM image recording with a conventional field-emission gun STEM, while maintaining the high-resolution capability of the instrument. Our findings show that a combination of reduced pixel dwell time and reduced gun current can achieve radiation doses comparable to low-dose TEM.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
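For reference, the per-realization kernel that the paper replicates across GPU threads is Gillespie's direct-method SSA. A minimal CPU sketch for a toy reversible dimerisation system follows; the reactions, rates, and counts are invented for illustration.

```python
import numpy as np

# Gillespie direct method for A + A -> B (rate k1), B -> A + A (rate k2).
def ssa(a0, b0, k1, k2, t_end, rng):
    t, a, b = 0.0, a0, b0
    times, states = [t], [(a, b)]
    while t < t_end:
        r1 = k1 * a * (a - 1) / 2.0      # dimerisation propensity
        r2 = k2 * b                      # dissociation propensity
        rtot = r1 + r2
        if rtot == 0:
            break
        t += rng.exponential(1.0 / rtot)             # time to next reaction
        if rng.uniform() < r1 / rtot:                # choose which reaction fires
            a, b = a - 2, b + 1
        else:
            a, b = a + 2, b - 1
        times.append(t); states.append((a, b))
    return np.array(times), np.array(states)

rng = np.random.default_rng(0)
times, states = ssa(100, 0, 0.01, 0.1, 10.0, rng)
print(len(times), states[-1])
```

In the GPU setting, many independent copies of this loop run simultaneously with different random streams, and the per-step time-course recording is what the paper's asynchronous transfer scheme accelerates.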
Winiecki, A.L.; Kroop, D.C.; McGee, M.K.; Lenkszus, F.R.
1984-01-01
An analytical instrument and particularly a time-of-flight-mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.
Todd E. Ristau; Susan L. Stout
2014-01-01
Assessment of regeneration can be time-consuming and costly. Often, foresters look for ways to minimize the cost of doing inventories. One potential method to reduce time required on a plot is use of percent cover data rather than seedling count data to determine stocking. Robust linear regression analysis was used in this report to predict seedling count data from...
Optimal control of singularly perturbed nonlinear systems with state-variable inequality constraints
NASA Technical Reports Server (NTRS)
Calise, A. J.; Corban, J. E.
1990-01-01
The established necessary conditions for optimality in nonlinear control problems that involve state-variable inequality constraints are applied to a class of singularly perturbed systems. The distinguishing feature of this class of two-time-scale systems is a transformation of the state-variable inequality constraint, present in the full order problem, to a constraint involving states and controls in the reduced problem. It is shown that, when a state constraint is active in the reduced problem, the boundary layer problem can be of finite time in the stretched time variable. Thus, the usual requirement for asymptotic stability of the boundary layer system is not applicable, and cannot be used to construct approximate boundary layer solutions. Several alternative solution methods are explored and illustrated with simple examples.
Maglev guideway route alignment and right-of-way requirements
NASA Astrophysics Data System (ADS)
Carlton, S.; Andriola, T.
1992-12-01
The use of existing rights-of-way (ROW) is assessed for maglev systems by estimating trip times and land acquisition requirements for potential maglev corridors while meeting passenger comfort limits. Right-of-way excursions improve trip time but incur a cost for purchasing land. The final report documents findings of the eight tasks in establishing right-of-way feasibility by examining three city-pair corridors in detail and developing an approximation method for estimating route length and travel times in 20 additional city-pair corridor portions and 21 new corridors. The use of routes independent of existing railroad or highway right-of-way has trip time advantages and significantly reduces the need for aggressive guideway geometries on intercity corridors. Selection of the appropriate alignment is determined by many corridor specific issues. Use of existing intercity rights-of-way may be appropriate for parts of routes on a corridor-specific basis and for urban penetration where vehicle speeds are likely to be reduced by policy due to noise and safety considerations, and where land acquisition costs are high. Detailed aspects of available rights-of-way, land acquisition costs, geotechnical issues, land use, and population centers must be examined in more detail on a specific corridor basis before the proper or best maglev alignment can be chosen.
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
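The family of iterative reconstruction algorithms the authors optimize can be caricatured with a SIRT-style update on a toy linear system standing in for the projection geometry. The system matrix, object, and iteration count below are synthetic assumptions, not a real CT model; the point is only that the iteration tolerates fewer measurements (projections) than a direct inversion.

```python
import numpy as np

# Toy SIRT-like reconstruction: recover x from y = A x with fewer
# measurement rows than unknowns, using normalised back-projection.
rng = np.random.default_rng(0)
n, m = 64, 40                            # 64 unknowns, only 40 "projections"
A = rng.uniform(size=(m, n))             # stand-in system matrix
x_true = np.zeros(n); x_true[10:20] = 1.0   # sparse object structure
y = A @ x_true

x = np.zeros(n)
row_sums, col_sums = A.sum(axis=1), A.sum(axis=0)
for _ in range(200):
    x += (A.T @ ((y - A @ x) / row_sums)) / col_sums   # SIRT update
    x = np.clip(x, 0, None)                            # non-negativity
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small residual
```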
Papachristos, Alexander; Edwards, Elton; Dowrick, Adam; Gosling, Cameron
2014-09-01
Despite a number of injury prevention campaigns and interventions, horse riding continues to be a dangerous activity, resulting in more accidents per hour than motorcycling, skiing and football. Injuries are often serious, with one in four patients requiring admission to hospital. This study aims to describe the severity of equestrian-related injuries (ERIs) using both clinical parameters and patient-reported outcomes. A retrospective study of all patients aged ≥18 years admitted to The Alfred Hospital between January 2003 and January 2008 with an ERI was performed. Specific clinical data were extracted from the medical record. In addition, a questionnaire was conducted identifying the details of the accident, the required recovery time and levels of ongoing pain and physical disability. During the study period 172 patients met the inclusion criteria. There were three deaths (2%). Eighty-two patients (48%) suffered head injuries. Forty-one patients (24%) were admitted to the ICU and 31 patients (18%) required mechanical ventilation. On discharge, 41 patients (24%) required transfer to a sub-acute rehabilitation facility. One-hundred-and-twenty-four patients (72%) completed the questionnaire. Thirty-nine respondents (31%) were not wearing a helmet. Among patients injured for more than 6 months, 38 (35%) still experienced moderate or severe pain or disability. Ninety-five patients had returned to work at the time of review, among which 47 (50%) required longer than 6 months to recover, and 40 (42%) returned at a reduced capacity. The clinical and patient-reported outcomes of ERIs requiring hospital admission are poor. Persistent pain and disability are common, even up to 5 years post-injury. A large proportion of patients required longer than 6 months to return to work and many return at a reduced capacity. Copyright © 2014 Elsevier Ltd. All rights reserved.
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
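The elementary operation underlying any float-to-fixed conversion, scaling to a Qm.n format with saturation, can be sketched in a few lines. The 16-bit word with 8 fractional bits below is an illustrative choice; the paper's methodology is precisely about selecting such formats per variable to minimise execution time under an accuracy constraint.

```python
# Sketch of Qm.n fixed-point conversion with rounding and saturation.
def to_fixed(value, n_frac=8, n_bits=16):
    scaled = round(value * (1 << n_frac))            # scale by 2^n_frac
    lo, hi = -(1 << (n_bits - 1)), (1 << (n_bits - 1)) - 1
    return max(lo, min(hi, scaled))                  # saturate on overflow

def to_float(fixed, n_frac=8):
    return fixed / (1 << n_frac)

x = 3.14159
q = to_fixed(x)                                      # Q7.8 in a 16-bit word
print(q, to_float(q), abs(x - to_float(q)))          # error bounded by 2^-9
```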
Does the use of automated fetal biometry improve clinical work flow efficiency?
Espinoza, Jimmy; Good, Sara; Russell, Evie; Lee, Wesley
2013-05-01
This study was designed to compare the work flow efficiency of manual measurements of 5 fetal parameters with a novel technique that automatically measures these parameters from 2-dimensional sonograms. This prospective study included 200 singleton pregnancies between 15 and 40 weeks' gestation. Patients were randomly allocated to either manual (n = 100) or automatic (n = 100) fetal biometry. The automatic measurement was performed using a commercially available software application. A digital video recorder captured all on-screen activity associated with the sonographic examination. The examination time and number of steps required to obtain fetal measurements were compared between manual and automatic methods. The mean time required to obtain the biometric measurements was significantly shorter using the automated technique than the manual approach (P < .001 for all comparisons). Similarly, the mean number of steps required to perform these measurements was significantly fewer with automatic measurements compared to the manual technique (P < .001). In summary, automated biometry reduced the examination time required for standard fetal measurements. This approach may improve work flow efficiency in busy obstetric sonography practices.
2014-03-10
This document contains final regulations providing guidance to employers that are subject to the information reporting requirements under section 6056 of the Internal Revenue Code (Code), enacted by the Affordable Care Act (generally employers with at least 50 full-time employees, including full-time equivalent employees). Section 6056 requires those employers to report to the IRS information about the health care coverage, if any, they offered to full-time employees, in order to administer the employer shared responsibility provisions of section 4980H of the Code. Section 6056 also requires those employers to furnish related statements to employees that employees may use to determine whether, for each month of the calendar year, they may claim on their individual tax returns a premium tax credit under section 36B (premium tax credit). The regulations provide for a general reporting method and alternative reporting methods designed to simplify and reduce the cost of reporting for employers subject to the information reporting requirements under section 6056. The regulations affect those employers, employees and other individuals.
Fitbit Activity Trackers Interrupt Workplace Sedentary Behavior: A New Application.
Guitar, N A; MacDougall, A; Connelly, D M; Knight, E
2018-05-01
This study investigated whether Fitbit devices can reduce sedentary behavior among employees in the workplace. Participants were asked to wear Fitbits during 8-hour work shifts, 5 days per week, for 8 weeks. They were instructed to stand at least once every 30 minutes throughout the workday. The goal of the study was to determine whether standing once every 30 minutes was a feasible strategy for reducing sedentary workplace behavior. On average, participants completed 36 of 40 workdays using Fitbits. The number of times participants stood during an 8-hour workday averaged 12 stands per day (maximum 16 stands per day). These results indicate that Fitbit technology is effective for recording and tracking interruptions in sitting time; however, to reduce sitting behavior, alternate approaches are required to motivate larger numbers of workers to participate.
Army Staff Automated Administrative Support System (ARSTADS) Report. Phase I. Volume II.
1980-07-01
requirements to transmit data with short fuse. This requirement varies from 1-6 times daily throughout the agency. Media used for transmission varies from...material automatically onto magnetic media. (1) Advantages. (a) Eliminates need for second or more typings of material. (b) Can be extremely cost...reduced and other methods of storage media will be possible. Offices are overcrowded with record storage containers
Li, Zhongjie; Xia, Yingfeng; Chen, Kai; Zhao, Hanchi; Liu, Yang
Prosthodontic oral rehabilitation procedures are time consuming and require efforts to maintain the confirmed maxillomandibular relationship. Several occlusal registrations and impressions are needed, and cross-mounting is performed to transfer the diagnostic wax-up to master working casts. The introduction of a digital workflow protocol reduces steps in the required process, and occlusal registrations with less deformation are used. The outcome is a maintained maxillomandibular position that is accurately and conveniently transferred.
78 FR 40823 - Reports, Forms, and Record Keeping Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-08
... at time of approval. Title: National Survey of Principal Drivers of Vehicles with a Rear Seat Belt... from both groups and information on their passengers seat belt usage habits, as well as the... use computer-assisted telephone interviewing to reduce interview length and minimize recording errors...
Satellite Systems for Instructional Radio.
ERIC Educational Resources Information Center
Jamison, Dean; And Others
Recent studies suggest that Educational Television (ETV) broadcast from geostationary satellites can markedly reduce the cost and time required to provide educational opportunity for the citizens of large, less-developed countries. The sheer volume of educational needs precludes, however, the possibility of satisfying very many of them with only a…
NASA Technical Reports Server (NTRS)
Mcgowan, David M.; Bostic, Susan W.; Camarda, Charles J.
1993-01-01
The development of two advanced reduced-basis methods, the force derivative method and the Lanczos method, and two widely used modal methods, the mode displacement method and the mode acceleration method, for transient structural analysis of unconstrained structures is presented. Two example structural problems are studied: an undamped, unconstrained beam subject to a uniformly distributed load which varies as a sinusoidal function of time and an undamped high-speed civil transport aircraft subject to a normal wing tip load which varies as a sinusoidal function of time. These example problems are used to verify the methods and to compare the relative effectiveness of each of the four reduced-basis methods for performing transient structural analyses on unconstrained structures. The methods are verified with a solution obtained by integrating directly the full system of equations of motion, and they are compared using the number of basis vectors required to obtain a desired level of accuracy and the associated computational times as comparison criteria.
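A minimal sketch of the simplest of the four methods, mode displacement, on a free-free spring-mass chain (the discrete analog of the unconstrained-beam example): keep the r lowest modes, including the rigid-body mode that makes the unconstrained case special, and superpose their analytic zero-initial-condition responses to a sinusoidal tip load. All sizes, stiffnesses, and the forcing frequency are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

n, r, omega = 20, 6, 1.0
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[0, 0] = K[-1, -1] = 1.0                 # free-free chain: K is singular
w2, Phi = eigh(K, M)                       # modal frequencies^2, ascending
f = np.zeros(n); f[-1] = 1.0               # unit tip-load amplitude

t = np.linspace(0.0, 50.0, 501)
x = np.zeros((n, t.size))
for i in range(r):
    qi = Phi[:, i] @ f                     # modal force
    if w2[i] < 1e-8:                       # rigid-body mode: pure double integration
        eta = qi * (t - np.sin(omega * t) / omega) / omega
    else:                                  # elastic mode, zero initial conditions
        wi = np.sqrt(w2[i])
        eta = qi * (np.sin(omega * t) - (omega / wi) * np.sin(wi * t)) / (w2[i] - omega**2)
    x += np.outer(Phi[:, i], eta)
print(x[-1, -1])                           # tip displacement at t = 50
```

Accuracy versus r, compared against direct integration of the full system, is exactly the comparison criterion the paper uses.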
NASA Technical Reports Server (NTRS)
Brown, Nelson
2013-01-01
A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. This presentation also focuses on the design of the flight experiment and the practical challenges of conducting the experiment.
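The flavor of the algorithm can be sketched with a toy quadratic fuel-flow map and a batch least-squares gradient estimate standing in for the paper's time-varying Kalman filter; every number below is invented for illustration.

```python
import numpy as np

# Toy peak-seeking trim: probe one "control surface", estimate the local
# gradient of fuel flow by least squares, step downhill (steepest descent).
def fuel_flow(u, rng):
    return 1.0 + 0.5 * (u - 2.0) ** 2 + 0.01 * rng.normal()   # optimum at u = 2

rng = np.random.default_rng(0)
u, step, probe = 0.0, 0.5, 0.05
for _ in range(50):
    du = probe * np.array([-1.0, 0.0, 1.0])          # small excitation
    f = np.array([fuel_flow(u + d, rng) for d in du])
    grad = np.polyfit(du, f, 1)[0]                   # LS slope estimate
    u -= step * grad                                 # descend the gradient
print(f"converged trim ~ {u:.2f} (true optimum 2.0)")
```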
Video-guided calibration of an augmented reality mobile C-arm.
Chen, Xin; Naik, Hemal; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal
2014-11-01
The augmented reality (AR) fluoroscope augments an X-ray image by video and provides the surgeon with a real-time in situ overlay of the anatomy. The overlay alignment is crucial for diagnostic and intra-operative guidance, so precise calibration of the AR fluoroscope is required. The first and most complex step of the calibration procedure is the determination of the X-ray source position. Currently, this is achieved using a biplane phantom with movable metallic rings on its top layer and fixed X-ray opaque markers on its bottom layer. The metallic rings must be moved to positions where at least two pairs of rings and markers are isocentric in the X-ray image. This "trial and error" calibration process requires acquisition of many X-ray images, a task that is both time consuming and radiation intensive. An improved process was developed and tested for C-arm calibration. Video guidance was used to drive the calibration procedure to minimize both X-ray exposure and the time involved. For this, a homography between X-ray and video images is estimated. This homography is valid for the plane at which the metallic rings are positioned and is employed to guide the calibration procedure. Eight users having varying calibration experience (i.e., 2 experts, 2 semi-experts, 4 novices) were asked to participate in the evaluation. The video-guided technique reduced the number of intra-operative X-ray calibration images by 89% and decreased the total time required by 59%. A video-based C-arm calibration method has been developed that improves the usability of the AR fluoroscope with a friendlier interface, reduced calibration time and clinically acceptable radiation doses.
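The planar homography at the heart of the guidance step can be estimated from four or more corresponding points with the basic (unnormalised) direct linear transform; the point pairs below are invented for illustration, not measured ring-marker positions.

```python
import numpy as np

# DLT: each correspondence (x, y) -> (u, v) gives two linear equations
# in the 9 entries of H; the solution is the right singular vector of
# the stacked system with the smallest singular value.
def find_homography(src, dst):
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

src = [(10, 10), (200, 15), (190, 180), (20, 170)]   # "X-ray" pixels (invented)
dst = [(32, 40), (410, 55), (395, 400), (48, 380)]   # "video" pixels (invented)
H = find_homography(src, dst)
p = H @ np.array([10, 10, 1.0])
print((p / p[2])[:2])                                # maps back to ~(32, 40)
```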
The High Stability Engine Control (HISTEC) Program: Flight Demonstration Phase
NASA Technical Reports Server (NTRS)
DeLaat, John C.; Southwick, Robert D.; Gallops, George W.; Orme, John S.
1998-01-01
Future aircraft turbine engines, both commercial and military, must be able to accommodate expected increased levels of steady-state and dynamic engine-face distortion. The current approach of incorporating sufficient design stall margin to tolerate these increased levels of distortion would significantly reduce performance. The objective of the High Stability Engine Control (HISTEC) program is to design, develop, and flight-demonstrate an advanced, integrated engine control system that uses measurement-based estimates of distortion to enhance engine stability. The resulting distortion tolerant control reduces the required design stall margin, with a corresponding increase in performance and decrease in fuel burn. The HISTEC concept has been developed and was successfully flight demonstrated on the F-15 ACTIVE aircraft during the summer of 1997. The flight demonstration was planned and carried out in two phases, the first to show distortion estimation, and the second to show distortion accommodation. Post-flight analysis shows that the HISTEC technologies are able to successfully estimate and accommodate distortion, transiently setting the stall margin requirement on-line and in real-time. This allows the design stall margin requirement to be reduced, which in turn can be traded for significantly increased performance and/or decreased weight. Flight demonstration of the HISTEC technologies has significantly reduced the risk of transitioning the technology to tactical and commercial engines.
The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models
NASA Technical Reports Server (NTRS)
Hill, Melissa A.; Jackson, E. Bruce
2007-01-01
It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamic model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML and a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class is described and comparisons with existing, conventional, C++ aerodynamic models of the same vehicle are given.
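In the 1-D case, the run-time table lookups such a translator performs reduce to breakpoint interpolation. A sketch with invented breakpoints follows; the coefficient values are not from any DAVE-ML model, and the real translator handles multi-dimensional tables and the surrounding build-up equations.

```python
import numpy as np

# 1-D linear interpolation of an aerodynamic coefficient vs. angle of attack.
alpha_brk = np.array([-4.0, 0.0, 4.0, 8.0, 12.0])   # breakpoints, deg (invented)
cl_table  = np.array([-0.2, 0.2, 0.6, 0.95, 1.15])  # CL values (invented)

def lookup_cl(alpha):
    return float(np.interp(alpha, alpha_brk, cl_table))  # clamps at the ends

print(lookup_cl(6.0))   # 0.775, halfway between the 4 and 8 deg entries
```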
Resource utilization model for the algorithm to architecture mapping model
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Patel, Rakesh R.
1993-01-01
The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.
Reduced-rank technique for joint channel estimation in TD-SCDMA systems
NASA Astrophysics Data System (ADS)
Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira
2013-02-01
In time division-synchronous code division multiple access systems, increasing the system capacity by inserting the largest possible number of users in one time slot (TS) requires additional estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due to the increase in the number of users in one TS directly affects the precision of the estimator's performance. This article presents a novel channel estimation method with low complexity, which relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on the truncated singular value decomposition (SVD) algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: least squares (LS or Steiner) and enhanced (LS or MMSE) algorithms. Simulation results for the normalised mean square error showed the superiority of the reduced-rank estimators. In addition, the channel impulse responses found by the reduced-rank estimator for all active users offer considerable performance improvement over the conventional estimator along the channel window length.
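A NumPy sketch of the reduced-rank idea: when the channel matrix is effectively rank-deficient, truncating the SVD before inverting avoids amplifying noise along the weak directions, whereas the full-rank least-squares solution divides by near-zero singular values. The matrix, rank, and noise level are synthetic stand-ins, not a TD-SCDMA channel model.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 32, 8
U0, _, Vt0 = np.linalg.svd(rng.normal(size=(m, n)), full_matrices=False)
s0 = np.concatenate([np.linspace(5.0, 1.0, r), 1e-4 * np.ones(n - r)])
A = (U0 * s0) @ Vt0                      # system with 8 strong directions
h_true = rng.normal(size=n)
y = A @ h_true + 0.01 * rng.normal(size=m)

h_ls = Vt0.T @ ((U0.T @ y) / s0)         # full-rank LS: noise blow-up
h_rr = Vt0.T[:, :r] @ ((U0.T @ y)[:r] / s0[:r])   # truncated-SVD estimate
print(np.linalg.norm(h_ls - h_true), np.linalg.norm(h_rr - h_true))
```

The first error is dominated by noise divided by the tiny trailing singular values; the reduced-rank estimate sacrifices the unrecoverable weak components and ends up far more accurate overall.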
A Single Center Study of 1,179 Heart Transplant Patients-Factors Affecting Pacemaker Implantation.
Wellmann, Petra; Herrmann, Florian Ernst Martin; Hagl, Christian; Juchem, Gerd
2017-03-01
Around 10% of heart transplant patients require pacemaker implantation after transplantation. The bradyarrhythmias causing pacemaker requirement include sinus node dysfunction (SND) and atrioventricular block (AVB). This study sought to define clinical predictors for pacemaker requirement as well as identify differences in the patient groups developing SND and AVB. Our operative database was used to collect retrospective recipient, donor, and operative data of all patients receiving orthotopic heart transplants between 1981 and 2016. In the 35-year period 1,179 transplants were performed (mean recipient age 45.5 ± 0.5 years, 20.4% female, 90.6% biatrial technique), with bradyarrhythmias requiring pacemaker implantation developing in 135 patients (11.5%). Independent risk factors were prolonged operative time (340 minutes versus 313 minutes, P = 0.027) and a biatrial anastomosis (P = 0.036). Ischemia time, cardiopulmonary bypass time, aortic cross clamp time, and reperfusion time all had no significant effect on pacemaker implantation rates. Similarly, whether the transplant was a reoperation, a retransplant, or performed after primary assist implantation had no effects on pacemaker implantation rates. There was no survival difference between the paced and nonpaced groups. The donor age was higher in the patients who developed AVB as the indication for pacemaker implantation (43 vs 34 years, P = 0.031). Patients with AVB had longer aortic cross clamp times and developed the arrhythmia later than those who developed SND. Use of the bicaval instead of the biatrial technique and shortened operative times should reduce pacemaker requirement after heart transplantation. Survival is not affected by this complication. © 2017 Wiley Periodicals, Inc.
Bronstone, Amy; Graham, Claudia
2016-01-01
Background: Severe hypoglycemia remains a major barrier to optimal diabetes management and places a high burden on the US health care system due to the high costs of hypoglycemia-related emergency visits and hospitalizations. Patients with type 1 diabetes (T1DM) who have hypoglycemia unawareness are at a particularly high risk for severe hypoglycemia, the incidence of which may be reduced by the use of real-time continuous glucose monitoring (RT-CGM). Methods: We performed a cost calculation using values of key parameters derived from various published sources to examine the potential cost implications of standalone RT-CGM as a tool for reducing rates of severe hypoglycemia requiring hospitalization in adult patients with T1DM who have hypoglycemia unawareness. Results: In a hypothetical commercial health plan with 10 million members aged 18-64 years, 9.3% (930 000) are expected to have diagnosed diabetes, with approximately 5% (46 500) having T1DM, of whom approximately 20% (9300) have hypoglycemia unawareness. RT-CGM was estimated to reduce the cost of annual hypoglycemia-related hospitalizations in this select population by $54 369 000, yielding an estimated net cost savings of $8 799 000 to $12 519 000 and a savings of $946 to $1346 per patient. Conclusion: This article presents a cost calculation based on available data from multiple sources showing that RT-CGM has the potential to reduce short-term health care costs by averting severe hypoglycemic events requiring hospitalization in a select high-risk population. Prospective, randomized studies that are adequately powered and specifically enroll patients at high risk for severe hypoglycemia are needed to confirm that RT-CGM significantly reduces the incidence of these costly events. PMID:26880392
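The cohort arithmetic in the abstract can be reproduced directly as a sanity check; all inputs below are the abstract's own figures.

```python
# Cohort sizes from the abstract's hypothetical 10-million-member plan.
members  = 10_000_000
diabetic = int(members * 0.093)      # 930,000 with diagnosed diabetes
t1dm     = int(diabetic * 0.05)      # 46,500 with type 1 diabetes
unaware  = int(t1dm * 0.20)          # 9,300 with hypoglycemia unawareness

net_savings_lo, net_savings_hi = 8_799_000, 12_519_000
print(unaware)                        # 9300
print(net_savings_lo / unaware)       # ~ $946 per patient
print(net_savings_hi / unaware)       # ~ $1,346 per patient
```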
NASA Astrophysics Data System (ADS)
Suresh Babu, Arun Vishnu; Ramesh, Kiran; Gopalarathnam, Ashok
2017-11-01
In previous research, Ramesh et al. (JFM, 2014) developed a low-order discrete vortex method for modeling unsteady airfoil flows with intermittent leading-edge vortex (LEV) shedding, governed by a leading-edge suction parameter (LESP). LEV shedding is initiated using discrete vortices (DVs) whenever the LESP exceeds a critical value. In subsequent research, the method was successfully employed by Ramesh et al. (JFS, 2015) to predict aeroelastic limit-cycle oscillations in airfoil flows dominated by intermittent LEV shedding. When applied to flows that require a large number of time steps, the computational cost increases with the growing vortex count. In this research, we apply an amalgamation strategy to actively control the DV count and thereby reduce simulation time. One pair each of LEVs and trailing-edge vortices (TEVs) is amalgamated at every time step. The ideal pairs for amalgamation are identified based on the requirement that the flowfield in the vicinity of the airfoil is least affected (Spalart, 1988). Instead of placing the amalgamated vortex at the centroid, we place it at an optimal location to ensure that the leading-edge suction and the airfoil bound circulation are conserved. Results of the initial study are promising.
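As a rough illustration of the amalgamation idea, the hedged sketch below merges one vortex pair while conserving total circulation, and picks the merged position along the line joining the pair so as to best preserve the velocity induced at a reference point near the airfoil. This is a simplified stand-in for the paper's criterion of conserving leading-edge suction and bound circulation; all names and numbers are illustrative.

```python
import numpy as np

def induced_velocity(z_probe, z_vortex, gamma):
    """Velocity induced at z_probe by a 2-D point vortex (complex coords)."""
    return -1j * gamma / (2 * np.pi * (z_probe - z_vortex))

def merge_pair(z1, g1, z2, g2, z_ref):
    """Merge two discrete vortices, conserving total circulation.

    The centroid is the classical placement; here the merged vortex is
    nudged along the line joining the pair to best preserve the velocity
    induced at a reference point (a simplified stand-in for the paper's
    suction/bound-circulation constraints)."""
    g = g1 + g2
    z_centroid = (g1 * z1 + g2 * z2) / g
    candidates = z_centroid + np.linspace(-0.5, 0.5, 201) * (z2 - z1)
    target = induced_velocity(z_ref, z1, g1) + induced_velocity(z_ref, z2, g2)
    errors = [abs(induced_velocity(z_ref, zc, g) - target) for zc in candidates]
    return candidates[int(np.argmin(errors))], g

z_new, g_new = merge_pair(1.2 + 0.3j, 0.8, 1.4 + 0.4j, 0.5, z_ref=0.0 + 0.0j)
print(z_new, g_new)
```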
q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans.
Golkov, Vladimir; Dosovitskiy, Alexey; Sperl, Jonathan I; Menzel, Marion I; Czisch, Michael; Samann, Philipp; Brox, Thomas; Cremers, Daniel
2016-05-01
Numerous scientific fields rely on elaborate but partly suboptimal data processing pipelines. An example is diffusion magnetic resonance imaging (diffusion MRI), a non-invasive microstructure assessment method with a prominent application in neuroimaging. Advanced diffusion models providing accurate microstructural characterization so far have required long acquisition times and thus have been inapplicable for children and adults who are uncooperative, uncomfortable, or unwell. We show that the long scan time requirements are mainly due to disadvantages of classical data processing. We demonstrate how deep learning, a group of algorithms based on recent advances in the field of artificial neural networks, can be applied to reduce diffusion MRI data processing to a single optimized step. This modification allows obtaining scalar measures from advanced models at twelve-fold reduced scan time and detecting abnormalities without using diffusion models. We set a new state of the art by estimating diffusion kurtosis measures from only 12 data points and neurite orientation dispersion and density measures from only 8 data points. This allows unprecedentedly fast and robust protocols facilitating clinical routine and demonstrates how classical data processing can be streamlined by means of deep learning.
Design for disassembly and sustainability assessment to support aircraft end-of-life treatment
NASA Astrophysics Data System (ADS)
Savaria, Christian
Gas turbine engine design is a multidisciplinary and iterative process. Many design iterations are necessary to address the challenges among the disciplines. In the creation of a new engine architecture, the design time is crucial in capturing new business opportunities. At the detail design phase, it has proven very difficult to correct an unsatisfactory design. To overcome this difficulty, the concept of Multi-Disciplinary Optimization (MDO) at the preliminary design phase (Preliminary MDO or PMDO) is used, allowing more freedom to perform changes in the design. PMDO also reduces the design time at the preliminary design phase. The concept of PMDO was used to create parametric models and new correlations for high pressure gas turbine housings and shroud segments towards a new design process. First, dedicated parametric models were created because of their reusability and versatility. Their ease of use compared to non-parameterized models allows more design iterations and thus reduces setup and design time. Second, geometry correlations were created to minimize the number of parameters used in turbine housing and shroud segment design. Since the turbine housing and shroud segment geometries are required in tip clearance analyses, care was taken not to oversimplify the parametric formulation. In addition, a user interface was developed to interact with the parametric models and improve the design time. Third, the cooling flow predictions require many engine parameters (i.e., geometric and performance parameters and air properties) and a reference shroud segment. A second correlation study was conducted to minimize the number of engine parameters required in the cooling flow predictions and to facilitate the selection of a reference shroud segment. Finally, the parametric models, the geometry correlations, and the user interface resulted in a time saving of 50% and an increase in accuracy of 56% in the new design system compared to the existing design system. Also, regarding the cooling flow correlations, the number of engine parameters was reduced by a factor of 6 to create a simplified prediction model and hence a faster shroud segment selection process.
An efficient pseudomedian filter for tiling microarrays.
Royce, Thomas E; Carriero, Nicholas J; Gerstein, Mark B
2007-06-07
Tiling microarrays are becoming an essential technology in the functional genomics toolbox. They have been applied to the tasks of novel transcript identification, elucidation of transcription factor binding sites, detection of methylated DNA, and several other applications in several model organisms. These experiments are being conducted at increasingly finer resolutions as the microarray technology enjoys increasingly greater feature densities. The increased densities naturally lead to increased data analysis requirements. Specifically, the most widely employed algorithm for tiling array analysis involves smoothing observed signals by computing pseudomedians within sliding windows, an O(n^2 log n) calculation in each window. This poor time complexity is an issue for tiling array analysis and could prove to be a real bottleneck as tiling microarray experiments become grander in scope and finer in resolution. We therefore implemented Monahan's HLQEST algorithm, which reduces the runtime complexity for computing the pseudomedian of n numbers from O(n^2 log n) to O(n log n). For a representative tiling microarray dataset, this modification reduced the smoothing procedure's runtime by nearly 90%. We then leveraged the fact that elements within sliding windows remain largely unchanged in overlapping windows (as one slides across genomic space) to further reduce computation by an additional 43%. This was achieved by applying skip lists to maintain a sorted list of values from window to window. This sorted list can be maintained with simple O(log n) inserts and deletes. We illustrate the favorable scaling properties of our algorithms with both time complexity analysis and benchmarking on synthetic datasets. Tiling microarray analyses that rely upon a sliding window pseudomedian calculation can require many hours of computation. We have eased this requirement significantly by implementing efficient algorithms that scale well with genomic feature density. This result not only speeds the current standard analyses, but also makes possible ones where many iterations of the filter may be required, such as in a bootstrap or parameter estimation setting. Source code and executables are available at http://tiling.gersteinlab.org/pseudomedian/.
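For readers unfamiliar with the estimator, the sketch below shows the pseudomedian (Hodges-Lehmann) definition and an incrementally maintained sliding window in Python. It is deliberately naive: the per-window computation is still O(n^2 log n), and the sorted window is kept with bisect rather than the skip list described above, but it captures the one-insert/one-delete bookkeeping that the speedup exploits.

```python
import bisect
from statistics import median

def pseudomedian(xs):
    """Hodges-Lehmann estimator: median of all pairwise (Walsh) averages.
    Naive O(n^2 log n) version; Monahan's HLQEST achieves O(n log n)."""
    n = len(xs)
    walsh = [(xs[i] + xs[j]) / 2 for i in range(n) for j in range(i, n)]
    return median(walsh)

def sliding_pseudomedian(signal, w):
    """Smooth a signal with a sliding-window pseudomedian. The sorted
    window is maintained incrementally (one insert, one delete per slide),
    mimicking the skip-list bookkeeping described above; the pseudomedian
    itself is still recomputed naively in each window."""
    window = sorted(signal[:w])
    out = [pseudomedian(window)]
    for k in range(w, len(signal)):
        del window[bisect.bisect_left(window, signal[k - w])]  # drop oldest
        bisect.insort(window, signal[k])                       # add newest
        out.append(pseudomedian(window))
    return out

print(sliding_pseudomedian([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0], 4))
```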
Teledermatology as a means to improve access to inpatient dermatology care.
Sharma, Priyank; Kovarik, Carrie L; Lipoff, Jules B
2016-07-01
Many hospitals have limited inpatient dermatology consultation access. Most dermatologists are outpatient-based and may find the distance and time to complete inpatient consultations prohibitive. Teledermatology may improve access to inpatient dermatology care by reducing barriers of distance and time. We conducted a prospective two-phase pilot study at two academic hospitals comparing the time needed to complete inpatient consultations after resident dermatologists initially evaluated patients, called average handling time (AHT), and the time needed to respond to the primary team, called time to response (TTR), with and without teledermatology, alongside surveys to capture changes in dermatologist opinion on teledermatology. Teledermatology was only used in the study phase, and patients were seen in-person in both study phases. Teledermatology alone sufficiently answered consultations in 10 of 25 study consultations. The mean AHT in the study phase (sAHT) was 26.9 min compared to the baseline phase (bAHT) of 43.5 min, a 16.6 min reduction (p = 0.004). The 10 study cases where teledermatology alone was sufficient had a mean study TTR (sTTR) of 273.3 min compared to a baseline TTR (bTTR) of 405.7 min, a 132.4 min reduction (p = 0.032). Teledermatology reduces the time required for an attending dermatologist to respond and the time required for a primary team to receive a response for an inpatient dermatology consultation in a subset of cases. These findings suggest teledermatology can be used as a tool to improve access to inpatient dermatology care. © The Author(s) 2015.
Evaluation of the North Island A/C Crash/Rescue Training Facility,
1981-08-01
unburned fuel, (2) Recommend procedures for improving the disposal or recovery of AFFF. ... NAS N.I. Code 183: Monitor the air pollution created... obtain the concentration (volume by volume) of AFFF. The foamability of 3M FC-780 and ANSUL AFFF after 1 to 500 dilutions of the AFFF concentrate is 85... discharge rates. However, the low volatility of JP5 reduced the burnback threat and the high efficiency of AFFF reduced the time and amount of agent required
The SCUBA-2 Data Reduction Cookbook
NASA Astrophysics Data System (ADS)
Thomas, Holly S.; Currie, Malcolm J.
This cookbook provides a short introduction to Starlink facilities, especially SMURF, the Sub-Millimetre User Reduction Facility, for reducing, displaying, and calibrating SCUBA-2 data. It describes some of the data artefacts present in SCUBA-2 time-series and methods to mitigate them. In particular, this cookbook illustrates the various steps required to reduce the data; and gives an overview of the Dynamic Iterative Map-Maker, which carries out all of these steps using a single command controlled by a configuration file. Specialised configuration files are presented.
The SCUBA-2 SRO data reduction cookbook
NASA Astrophysics Data System (ADS)
Chapin, Edward; Dempsey, Jessica; Jenness, Tim; Scott, Douglas; Thomas, Holly; Tilanus, Remo P. J.
This cookbook provides a short introduction to Starlink facilities, especially SMURF, the Sub-Millimetre User Reduction Facility, for reducing and displaying SCUBA-2 SRO data. We describe some of the data artefacts present in SCUBA-2 time series and methods we employ to mitigate them. In particular, we illustrate the various steps required to reduce the data, and the Dynamic Iterative Map-Maker, which carries out all of these steps using a single command. For information on SCUBA-2 data reduction since SRO, please see SC/21.
Human Exploration of Near-Earth Objects Accessibility Study
NASA Technical Reports Server (NTRS)
Abell, Paul; Drake, Bret; Friedensen, Victoria; Mazanek, Dan
2011-01-01
Key questions addressed: How far can trip times be reduced in order to limit crew exposure to the deep-space radiation and microgravity environment? Are there options to conduct easy, early missions? What is the effect of infusing advanced propulsion technologies on target availability? When do the departure opportunities open up, and how frequent and how long are they? How many launches are required to conduct a round-trip human mission to a NEA? And, based on the above, how many Near-Earth Asteroids are available?
Enzymatic approaches in paper industry for pulp refining and biofilm control.
Torres, C E; Negro, C; Fuente, E; Blanco, A
2012-10-01
The use of enzymes has high potential in the pulp and paper industry to improve the economics of the paper production process and, at the same time, to reduce environmental impact. Specific enzymes contribute to reducing the amount of chemicals and energy required for the modification of fibers and help to prevent the formation or development of biofilms. This review presents the latest progress in the application of enzymes as refining aids and biofilm control agents.
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
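A schematic of the transform-based idea for the linear case, in our own notation (the nonlinear algorithm examined above adds a numerical iteration at each selected time):

```latex
% Transform-based solution of the semi-discrete linear problem
% (our notation; a sketch of the idea, not the paper's exact algorithm).
\begin{align}
  C\,\dot{T}(t) + K\,T(t) &= F(t), \qquad T(0) = T_0
  &&\text{(semi-discrete FE system)}\\
  (s\,C + K)\,\hat{T}(s) &= \hat{F}(s) + C\,T_0
  &&\text{(algebraic system in the $s$-domain)}\\
  T(t^\ast) &\approx \sum_{k} w_k\, \hat{T}(s_k)
  &&\text{(numerical inversion at a selected $t^\ast$)}
\end{align}
% Solving at a small set of transform parameters s_k and inverting
% numerically yields the response directly at the times of interest,
% with no time stepping.
```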
Optimization-based manufacturing scheduling with multiple resources and setup requirements
NASA Astrophysics Data System (ADS)
Chen, Dong; Luh, Peter B.; Thakur, Lakshman S.; Moreno, Jack, Jr.
1998-10-01
The increasing demand for on-time delivery and low prices forces manufacturers to seek effective schedules that improve the coordination of multiple resources and reduce internal product costs associated with labor, setup, and inventory. This study describes the design and implementation of a scheduling system for J. M. Product Inc., whose manufacturing is characterized by the need to consider machines and operators simultaneously, where an operator may attend several operations at the same time, and by the presence of machines requiring significant setup times. Scheduling problems with these characteristics are typical for many manufacturers, very difficult to handle, and have not been adequately addressed in the literature. In this study, both machines and operators are modeled as resources with finite capacities to obtain efficient coordination between them, and an operator's time can be shared by several operations at the same time to make full use of the operator. Setups are explicitly modeled following our previous work, with additional penalties on excessive setups to reduce setup costs and avoid possible scrap. An integer formulation with a separable structure is developed to maximize on-time delivery of products, low inventory, and a small number of setups. Within the Lagrangian relaxation framework, the problem is decomposed into individual subproblems that are effectively solved using dynamic programming with additional penalties embedded in state transitions. A heuristic is then developed, following our previous work, to obtain a feasible schedule, with a new mechanism to satisfy operator capacity constraints. The method has been implemented in the object-oriented programming language C++ with a user-friendly interface, and numerical testing shows that it generates high quality schedules in a timely fashion. Through simultaneous consideration, machines and operators are well coordinated to facilitate the smooth flow of parts through the system. The explicit modeling of setups and the associated penalties lets parts with the same setup requirements be clustered together to avoid excessive setups.
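A generic sketch of the decomposition, in our notation rather than the paper's exact formulation: the coupling capacity constraints are relaxed with multipliers, after which the relaxed problem separates into per-part subproblems amenable to dynamic programming.

```latex
% Generic Lagrangian relaxation sketch (our notation). Parts i choose
% schedules x_i; capacity constraints couple them and are relaxed with
% multipliers \lambda >= 0:
\begin{align*}
  \min_{x}\ & \sum_i J_i(x_i)
  \quad\text{s.t.}\quad \sum_i r_{ikt}(x_i) \le M_{kt} \ \ \forall k,t \\
  L(\lambda) &= \min_{x} \sum_i J_i(x_i)
     + \sum_{k,t} \lambda_{kt}\Big(\sum_i r_{ikt}(x_i) - M_{kt}\Big) \\
  &= \sum_i \underbrace{\min_{x_i}\Big[J_i(x_i)
     + \sum_{k,t} \lambda_{kt}\, r_{ikt}(x_i)\Big]}_{\text{per-part DP subproblem}}
     \;-\; \sum_{k,t}\lambda_{kt} M_{kt}
\end{align*}
% J_i collects tardiness, inventory, and setup penalties for part i;
% maximizing L over \lambda (e.g., by subgradient updates) tightens the
% bound, and a heuristic repairs the relaxed solution into a feasible
% schedule.
```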
NASA Technical Reports Server (NTRS)
Jong, Jen-Yi
1996-01-01
NASA's advanced propulsion system, the Space Shuttle Main Engine/Advanced Technology Development (SSME/ATD) program, has been undergoing extensive flight certification and developmental testing, which involves large numbers of health monitoring measurements. To enhance engine safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess the engine's dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce the risk of catastrophic system failures and expedite the evaluation of both flight and ground test data, thereby reducing launch turn-around time. During the development of the SSME, ASRI participated in the research and development of several advanced nonlinear signal diagnostic methods for health monitoring and failure prediction in turbomachinery components. However, due to the intensive computational requirements associated with such advanced analysis tasks, current SSME dynamic data analysis and diagnostic evaluation is performed off-line following flight or ground test, with a typical diagnostic turnaround time of one to two days. The objective of MSFC's MPP Prototype System is to eliminate this 'diagnostic lag time' by achieving signal processing and analysis in real time. Such an on-line diagnostic system can provide sufficient lead time to initiate corrective action and also enable efficient scheduling of inspection, maintenance, and repair activities. The major objective of this project was to convert and implement a number of advanced nonlinear diagnostic DSP algorithms in a format consistent with that required for integration into the Vanderbilt Multigraph Architecture (MGA) Model Based Programming environment. This effort will allow the real-time execution of these algorithms on MSFC's MPP Prototype System. ASRI has completed the software conversion and integration of the sequence of nonlinear signal analysis techniques specified in the SOW for real-time execution on MSFC's MPP Prototype. This report documents and summarizes the results of the contract tasks and provides the complete computer source code, including all FORTRAN/C utilities and all other supporting software libraries required for operation.
Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick
2016-01-01
This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest. PMID:26958859
Dymond, J R; Davies-Colley, R J; Hughes, A O; Matthaei, C D
2017-12-15
Deforestation in New Zealand has led to increased soil erosion and sediment loads in rivers. Increased suspended fine sediment in water reduces visual clarity for humans and aquatic animals and reduces penetration of photosynthetically available radiation to aquatic plants. To mitigate fine-sediment impacts in rivers, catchment-wide approaches to reducing soil erosion are required. Targeting soil conservation for reducing sediment loads in rivers is possible through existing models; however, relationships between sediment loads and sediment-related attributes of water that affect both ecology and human uses of water are poorly understood. We present methods for relating sediment loads to sediment concentration, visual clarity, and euphotic depth. The methods require upwards of twenty concurrent samples of sediment concentration, visual clarity, and euphotic depth at a river site where discharge is measured continuously. The sediment-related attributes are related to sediment concentration through regressions. When sediment loads are reduced by soil conservation action, percentiles of sediment concentration are necessarily reduced, and the corresponding percentiles of visual clarity and euphotic depth are increased. The approach is demonstrated on the Wairua River in the Northland region of New Zealand. For this river we show that visual clarity would increase relatively by approximately 1.4 times the relative reduction of sediment load. Median visual clarity would increase from 0.75m to 1.25m (making the river more often suitable for swimming) after a sediment load reduction of 50% associated with widespread soil conservation on pastoral land. Likewise euphotic depth would increase relatively by approximately 0.7 times the relative reduction of sediment load, and the median euphotic depth would increase from 1.5m to 2.0m with a 50% sediment load reduction. Copyright © 2017 Elsevier B.V. All rights reserved.
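The load-to-attribute mapping can be illustrated with a short, hedged Python sketch: fit a power law between concurrent concentration and clarity samples (synthetic here), then propagate a load reduction through the fitted exponent. The exponent and data are illustrative, not the Wairua River fit.

```python
import numpy as np

# Synthetic stand-in for ~20 concurrent samples of suspended-sediment
# concentration C (g/m^3) and visual clarity y (m); real data would come
# from the gauged river site. A power law y = a * C^-b is assumed.
rng = np.random.default_rng(1)
C = 10 ** rng.uniform(0.5, 2.5, 25)
y = 4.0 * C ** -0.7 * np.exp(rng.normal(0, 0.1, 25))

# Regression in log space: log y = log a - b log C
slope, log_a = np.polyfit(np.log(C), np.log(y), 1)
b = -slope

# If soil conservation cuts the sediment load (and hence concentration
# percentiles) by 50%, clarity percentiles rise by a factor 2^b.
factor = 0.5 ** -b
print(f"b = {b:.2f}; 50% load cut -> clarity x{factor:.2f}")
# With b ~ 0.7 this gives roughly x1.6, consistent with the reported
# rise in median clarity from 0.75 m to 1.25 m.
```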
Farmery, A D; Hahn, C E
2000-08-01
Tidal ventilation gas-exchange models in respiratory physiology and medicine not only require solution of mass balance equations breath-by-breath but also may require within-breath measurements, which are instantaneous functions of time. This demands a degree of temporal resolution and fidelity of integration of gas flow and concentration signals that cannot be provided by most clinical gas analyzers because of their slow response times. We have characterized the step responses of the Datex Ultima (Datex Instrumentation, Helsinki, Finland) gas analyzer to oxygen, carbon dioxide, and nitrous oxide in terms of a Gompertz four-parameter sigmoidal function. By inversion of this function, we were able to reduce the rise times for all these gases almost fivefold, and, by its application to real on-line respiratory gas signals, it is possible to achieve a performance comparable to the fastest mass spectrometers. With the use of this technique, measurements required for non-steady-state and tidal gas-exchange models can be made easily and reliably in the clinical setting.
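A hedged sketch of the inversion step follows. We assume a four-parameter Gompertz of the form y(t) = y0 + a*exp(-exp(b - c*t)) (the paper's exact parameterization and on-line correction scheme may differ); inverting it maps each measured level back to the time at which the input must have changed, undoing the sigmoid lag.

```python
import numpy as np

# Assumed four-parameter Gompertz step response (our parameterization):
#   y(t) = y0 + a * exp(-exp(b - c*t))
def gompertz(t, y0, a, b, c):
    return y0 + a * np.exp(-np.exp(b - c * t))

def gompertz_inverse(y, y0, a, b, c):
    """Time at which the fitted step response reaches level y."""
    return (b - np.log(np.log(a / (y - y0)))) / c

t = np.linspace(0.05, 3.0, 60)
measured = gompertz(t, 0.0, 1.0, 1.0, 3.0)     # sluggish analyzer output
ok = (measured > 1e-3) & (measured < 0.999)    # invertible range only
recovered = gompertz_inverse(measured[ok], 0.0, 1.0, 1.0, 3.0)
print(np.allclose(recovered, t[ok]))           # True: sigmoid lag undone
```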
Hellmann, A; Hering, T; Andres, J
2018-06-01
New patients in secondary respiratory care require more time for the first consultation and place higher diagnostic and therapeutic demands compared with patients already in chronic care. More diagnostic procedures and patient education by the team are required. No comparable burden is observed with respect to differing degrees of severity of respiratory diseases, e.g. COPD. The overall demands add up to twice those of patients already in care; thus the time required to treat 50 new patients would allow consultations for 100 patients already known to the practice. As the additional time and effort for new patients is not adequately represented in the German fee schedule (EBM), a trend toward risk selection and a preference for follow-up patients is observed. In contrast, incentives to foster the treatment of new patients could be an effective measure to dramatically reduce waiting times for appointments with pulmonologists. This should be achieved by changes in the German fee schedule (EBM). © Georg Thieme Verlag KG Stuttgart · New York.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-27
... cancer types and are often amplified in women's health when we look at breast cancer and gynecologic... screening services for breast and gynecologic cancers--including, but not limited to, benefits, timing... Technology; Announcement of Requirements and Registration for Reducing Cancer Among Women of Color Challenge...
Collaboration, Technology, and Outsourcing Initiatives in Higher Education: A Literature Review.
ERIC Educational Resources Information Center
Kaganoff, Tessa
This report presents a sector-wide review of three types of cost-containment initiatives. The first, collaboration, allows for the sharing of resources, facilitates joint purchasing agreements, reduces duplication of services, and expands personal and professional contacts, but requires time to develop institutional relationships. The second,…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-31
... detection instrumentation to operable status; establish alternate methods of monitoring RCS leakage when one... the RCS leakage detection instrumentation. These changes are consistent with NRC-approved Revision 3... requirements for the RCS leakage detection instrumentation and reduces the time allowed for the plant to...
Modifications Of Hydrostatic-Bearing Computer Program
NASA Technical Reports Server (NTRS)
Hibbs, Robert I., Jr.; Beatty, Robert F.
1991-01-01
Several modifications made to enhance utility of HBEAR, computer program for analysis and design of hydrostatic bearings. Modifications make program applicable to more realistic cases and reduce time and effort necessary to arrive at a suitable design. Uses search technique to iterate on size of orifice to obtain required pressure ratio.
Reducing the Time for IRB Reviews: A Case Study
ERIC Educational Resources Information Center
Liberale, Andrea Pescina; Kovach, Jamison V.
2017-01-01
Research activities often involve enrolling human subjects as volunteers to participate in research studies. Federal regulations mandate that research institutions are responsible for protecting the ethical rights and welfare of human subjects from research risks. This is usually accomplished by requiring approval of research protocols by an…
DOT National Transportation Integrated Search
1981-01-01
This report describes a method for locating historic site information using a computer graphics program. If adopted for use by the Virginia Department of Highways and Transportation, this method should significantly reduce the time now required to de...
40 CFR 63.10448 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) National Emission Standards for Hospital Ethylene Oxide Sterilizers Other Requirements and... Clean Air Act (CAA), in 40 CFR 63.2, and in this section as follows: Aeration process means any time... equipment that reduces the quantity of ethylene oxide in the effluent gas stream from sterilization and...
40 CFR 63.10448 - What definitions apply to this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) National Emission Standards for Hospital Ethylene Oxide Sterilizers Other Requirements and... Clean Air Act (CAA), in 40 CFR 63.2, and in this section as follows: Aeration process means any time... equipment that reduces the quantity of ethylene oxide in the effluent gas stream from sterilization and...
Flipping Quantitative Classes: A Triple Win
ERIC Educational Resources Information Center
Swart, William; Wuensch, Karl L.
2016-01-01
In the "flipped" class, students use online materials to learn what is traditionally learned by attending lectures, and class time is used for interactive group learning. A required quantitative business class was taught as a flipped classroom in an attempt to improve student satisfaction in the course and reduce the "transactional…
Microscale and Compact Scale Chemistry in South Africa
ERIC Educational Resources Information Center
Taylor, Warwick
2011-01-01
Reduced costs and greater time efficiency are often quoted among the main benefits of microscale chemistry. Do these benefits outweigh some of the limitations and difficulties faced in terms of students needing to develop new manipulation skills, and teachers requiring training in terms of implementation and management? This article describes a…
Reconstructing the historic demography of an endangered seabird
Steven R. Beissinger; Zachariah M. Peery
2007-01-01
Reducing extinction risk for threatened species requires determining which demographic parameters are depressed and causing population declines. Museum collections may constitute a unique, underutilized resource for measuring demographic changes over long time periods using age-ratio analysis. We reconstruct the historic demography of a U.S. federally endangered...
to rapidly test/screen breast cancer therapeutics as a strategy to streamline drug development and provide individualized treatment. The results...system can therefore be used to streamline pre-clinical drug development, by reducing the number of animals, cost, and time required to screen new drugs
Effect of sintering methods and temperatures on porosity of ceramics from aluminum oxynitride
NASA Astrophysics Data System (ADS)
Prosvirnin, D. V.; Kolmakov, A. G.; Larionov, M. D.; Prutskov, M. E.; Alikhanyan, A. S.; Samokhin, A. V.; Lysenkov, A. S.; Titov, D. D.
2018-04-01
The paper presents the results of studies of the effect of temperature regimes and time on porosity in ceramic samples made of aluminum oxynitride. Eliminating the porous structure reduces light scattering and, as a result, achieves the required optical characteristics.
78 FR 28152 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-14
... series airplanes. The existing AD currently requires repetitive inspections of the 80VU rack lower lateral fittings for damage; repetitive inspections of the 80VU rack lower central support for cracking... fittings of the 80VU rack. This proposed AD would reduce the inspection compliance time, add an inspection...
USDA-ARS?s Scientific Manuscript database
Baled silage production provides benefits to farmers because it reduces leaf losses, and requires a shorter wilting time, thereby limiting risks of exposure to rain compared with making hay. Our objective was to investigate the correlation of alfalfa silage fermentation parameters with intake and di...
Radio-Frequency Identification: Asset Control at Your Fingertips
ERIC Educational Resources Information Center
Scholes, Marcus
2009-01-01
Times are tough for everyone, including public school districts. During the past decade, school districts have faced the dual challenges of tightening budgets and increasing fiscal responsibility and oversight. Many school districts have found a way to manage their assets, reduce staff requirements, increase accountability, and save money on…
A digital indicator for maximum windspeeds.
William B. Fowler
1969-01-01
A simple device for indicating maximum windspeed during a time interval is described. Use of a unijunction transistor, for voltage sensing, results in a stable comparison circuit and also reduces overall component requirements. Measurement is presented digitally in 1-mile-per-hour increments over the range of 0-51 m.p.h.
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
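The selection mechanism can be illustrated compactly: keep a snapshot only when its projection error onto the current POD basis exceeds a tolerance, in the spirit of an adaptive step-size controller. The sketch below rebuilds the basis with a thin SVD for brevity, rather than the single-pass incremental SVD the report uses to bound memory; all data are synthetic.

```python
import numpy as np

def select_snapshots(states, tol):
    """Keep a snapshot only if its projection error onto the span of the
    snapshots retained so far exceeds tol (relative). Illustrative only."""
    kept = [states[0]]
    basis = np.linalg.svd(np.column_stack(kept), full_matrices=False)[0]
    for u in states[1:]:
        residual = u - basis @ (basis.T @ u)     # error of projection onto span
        if np.linalg.norm(residual) > tol * np.linalg.norm(u):
            kept.append(u)                        # snapshot adds new content
            basis = np.linalg.svd(np.column_stack(kept),
                                  full_matrices=False)[0]
    return basis, len(kept)

# Smoothly varying trajectory: few snapshots are needed at loose tolerance.
ts = np.linspace(0, 1, 200)
states = [np.sin(np.pi * np.arange(64) / 64 * (1 + t)) for t in ts]
for tol in (1e-1, 1e-2, 1e-3):
    _, n = select_snapshots(states, tol)
    print(f"tol={tol:g}: kept {n} of {len(states)} snapshots")
```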
Evaluation of ceiling lifts: transfer time, patient comfort and staff perceptions.
Alamgir, Hasanat; Li, Olivia Wei; Yu, Shicheng; Gorman, Erin; Fast, Catherine; Kidd, Catherine
2009-09-01
Mechanical lifting devices have been developed to reduce healthcare worker injuries related to patient handling. The purpose of this study was to evaluate ceiling lifts in comparison to floor lifts based on transfer time, patient comfort and staff perceptions in three long-term care facilities with varying ceiling lift coverage. The time required to transfer or reposition patients along with patient comfort levels were recorded for 119 transfers. Transfers performed with ceiling lifts required on average less time (bed to chair transfers: 156.9 seconds for ceiling lift, 273.6 seconds for floor lift) and were found to be more comfortable for patients. In the three facilities, 143 healthcare workers were surveyed on their perceptions of patient handling tasks and equipment. For both transferring and repositioning tasks, staff preferred to use ceiling lifts and also found them to be less physically demanding. Further investigation is needed on repositioning tasks to ensure safe practice.
NASA Astrophysics Data System (ADS)
Kim, Ronny Yongho; Jung, Inuk; Kim, Young Yong
IEEE 802.16m is an advanced air interface standard under development for IMT-Advanced systems, known as 4G systems. IEEE 802.16m is designed to provide a high data rate and a Quality of Service (QoS) level that meet user service requirements, and is especially suitable for mobile environments. Several factors have a great impact on these requirements; as one of the major factors, we focus mainly on latency. In IEEE 802.16m, an enhanced layer 2 handover scheme, described as Entry Before Break (EBB), was proposed and adopted to reduce handover latency. EBB provides a significant reduction in handover interruption time with respect to the legacy IEEE 802.16 handover scheme. Fast handovers for mobile IPv6 (FMIPv6) was standardized by the Internet Engineering Task Force (IETF) to reduce handover interruption time from the IP layer perspective. Since FMIPv6 utilizes link layer triggers to reduce handover latency, it is critical to jointly design FMIPv6 with its underlying link layer protocol. However, an FMIPv6 design based on the new EBB handover scheme had not previously been proposed. In this paper, we propose an improved cross-layer design for FMIPv6 based on the IEEE 802.16m EBB handover. In comparison with conventional FMIPv6 based on the legacy IEEE 802.16 network, the overall handover interruption time can be significantly reduced by employing the proposed design. The benefits of this latency reduction for mobile user applications are thoroughly investigated with both numerical analysis and simulation of various IP applications.
Brown, Michael J; Kor, Daryl J; Curry, Timothy B; Marmor, Yariv; Rohleder, Thomas R
2015-01-01
Transfer of intensive care unit (ICU) patients to the operating room (OR) is a resource-intensive, time-consuming process that often results in patient throughput inefficiencies, deficiencies in information transfer, and suboptimal nurse-to-patient ratios. This study evaluates the implementation of a coordinated patient transport system (CPTS) designed to address these issues. Using data from 1,557 patient transfers covering the 2006-2010 period, interrupted time series and before-and-after designs were used to analyze the effect of implementing a CPTS at Mayo Clinic, Rochester. Using a segmented regression for the interrupted time series, on-time OR start deviations were found to be significantly lower after the implementation of the CPTS (p < .0001). The implementation resulted in a fourfold improvement in on-time OR starts (p < .01) while significantly reducing idle OR time (p < .01). A coordinated patient transfer process for moving patients from ICUs to ORs can significantly improve OR efficiency, reduce non-value-added time, and ensure quality of care by preserving appropriate care provider-to-patient ratios.
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.
Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
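The probabilistic-computing primitive underlying this hardware reduction is worth making concrete: encoding a value in [0, 1] as the '1'-density of a random bitstream turns multiplication into a single AND gate. A minimal software sketch (accuracy improves roughly as the inverse square root of the stream length; all names are ours):

```python
import numpy as np

# Stochastic-computing multiply: encode x in [0,1] as the probability of
# a '1' in a bitstream; an AND of two independent streams has '1'-density
# x*y, so a costly multiplier reduces to one gate plus a counter.
rng = np.random.default_rng(0)

def to_stream(x, n_bits):
    return rng.random(n_bits) < x           # Bernoulli(x) bitstream

def stochastic_multiply(x, y, n_bits=4096):
    sx, sy = to_stream(x, n_bits), to_stream(y, n_bits)
    return np.mean(sx & sy)                  # AND gate + counter

print(stochastic_multiply(0.6, 0.7))         # ~0.42; error ~ 1/sqrt(n_bits)
```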
Study of optoelectronic switch for satellite-switched time-division multiple access
NASA Technical Reports Server (NTRS)
Su, Shing-Fong; Jou, Liz; Lenart, Joe
1987-01-01
The use of optoelectronic switching for satellite-switched time-division multiple access will improve the isolation and reduce the crosstalk of an IF switch matrix. The results of a study on optoelectronic switching are presented. Tasks included a literature search, a system requirements study, candidate switching architecture analysis, and switch model optimization. The results show that the power-divider and crossbar switching architectures are good candidates for an IF switch matrix.
Materials Genome Initiative Element
NASA Technical Reports Server (NTRS)
Vickers, John
2015-01-01
NASA is committed to developing new materials and manufacturing methods that can enable new missions with ever increasing mission demands. Typically, the development and certification of new materials and manufacturing methods in the aerospace industry has required more than 20 years of development time with a costly testing and certification program. To reduce the cost and time to mature these emerging technologies, NASA is developing computational materials tools to improve understanding of the material and guide the certification process.
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Pozzi, P.; Bezzubik, V. V.; Belashenkov, N. R.
2017-06-01
A superresolution image reconstruction method based on the structured illumination microscopy (SIM) principle with a reduced and simplified pattern set is presented. The method needs only 2 sinusoidal patterns shifted by half a period for each spatial direction of reconstruction, instead of the minimum of 3 for previously known methods. The method is based on estimating redundant frequency components in the acquired set of modulated images. Digital processing is based on linear operations. When applied to several spatial orientations, the image set can be further reduced to a single pattern for each spatial orientation, complemented by a single non-modulated image shared by all orientations. By utilizing this method for the case of two spatial orientations, the total input image set is reduced to 3 images, providing up to a 2-fold improvement in data acquisition time compared to the conventional 3-pattern SIM method. Using the simplified pattern design, the field of view can be doubled with the same number of spatial light modulator raster elements, resulting in a total 4-fold increase in the space-time product. The method requires precise knowledge of the optical transfer function (OTF). The key limitation is the thickness of the object layer that scatters or emits light, which must be sufficiently small relative to the lens depth of field. Numerical simulations and experimental results are presented. Experimental results were obtained on a SIM setup with a spatial light modulator based on a 1920x1080 digital micromirror device.
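The two-pattern separation at the heart of the method can be written schematically (our notation, for a thin fluorescent layer S and PSF h):

```latex
% Two-pattern separation (schematic, our notation): with a sinusoidal
% pattern and its half-period shift, the two raw images are
\begin{align}
  I_1 &= \big\{[1 + m\cos(k_0 x)]\,S\big\} * h, &
  I_2 &= \big\{[1 - m\cos(k_0 x)]\,S\big\} * h,\\
  I_1 + I_2 &= 2\,S * h \quad\text{(widefield image)}, &
  I_1 - I_2 &= 2m\,[\cos(k_0 x)\,S] * h,
\end{align}
% so the difference isolates the components shifted by $\pm k_0$, which
% the precisely known OTF lets one demix and reassemble into an
% extended-frequency estimate of $S$.
```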
Advanced access: reducing waiting and delays in primary care.
Murray, Mark; Berwick, Donald M
2003-02-26
Delay of care is a persistent and undesirable feature of current health care systems. Although delay seems to be inevitable and linked to resource limitations, it often is neither. Rather, it is usually the result of unplanned, irrational scheduling and resource allocation. Application of queuing theory and principles of industrial engineering, adapted appropriately to clinical settings, can reduce delay substantially, even in small practices, without requiring additional resources. One model, sometimes referred to as advanced access, has increasingly been shown to reduce waiting times in primary care. The core principle of advanced access is that patients calling to schedule a physician visit are offered an appointment the same day. Advanced access is not sustainable if patient demand for appointments is permanently greater than physician capacity to offer appointments. Six elements of advanced access are important in its application: balancing supply and demand, reducing backlog, reducing the variety of appointment types, developing contingency plans for unusual circumstances, working to adjust demand profiles, and increasing the availability of bottleneck resources. Although these principles are powerful, they are counter to deeply held beliefs and established practices in health care organizations. Adopting these principles requires strong leadership investment and support.
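The leverage of balancing supply and demand can be seen in a textbook queueing illustration (M/M/1 is a crude stand-in for an appointment book, used here only to show the nonlinearity): mean waits explode as utilization approaches one, so removing even a small permanent excess of demand collapses delay.

```python
import math

# Mean wait in queue for an M/M/1 system: Wq = rho / (mu - lambda).
# A rough illustration of why working down backlog and matching daily
# supply to demand reduces waits disproportionately.
def mm1_wait(arrival_rate, service_rate):
    rho = arrival_rate / service_rate
    if rho >= 1:
        return math.inf                      # demand exceeds capacity
    return rho / (service_rate - arrival_rate)

for demand in (18, 19, 19.8):                # appointment requests per day
    print(f"demand {demand}/day, capacity 20/day -> "
          f"mean queue wait {mm1_wait(demand, 20):.2f} days")
```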
Artificial intelligence techniques for scheduling Space Shuttle missions
NASA Technical Reports Server (NTRS)
Henke, Andrea L.; Stottler, Richard H.
1994-01-01
Planning and scheduling of NASA Space Shuttle missions is a complex, labor-intensive process requiring the expertise of experienced mission planners. We have developed a planning and scheduling system using combinations of artificial intelligence knowledge representations and planning techniques to capture mission planning knowledge and automate the multi-mission planning process. Our integrated object oriented and rule-based approach reduces planning time by orders of magnitude and provides planners with the flexibility to easily modify planning knowledge and constraints without requiring programming expertise.
Logistics Reduction and Repurposing Beyond Low Earth Orbit
NASA Technical Reports Server (NTRS)
Ewert, Michael K.; Broyan, James L., Jr.
2012-01-01
All human space missions, regardless of destination, require significant logistical mass and volume that is strongly proportional to mission duration. Anything that can be done to reduce initial mass and volume of supplies or reuse items that have been launched will be very valuable. Often, the logistical items require disposal and represent a trash burden. Logistics contributions to total mission architecture mass can be minimized by considering potential reuse using systems engineering analysis. In NASA's Advanced Exploration Systems "Logistics Reduction and Repurposing Project," various tasks will reduce the intrinsic mass of logistical packaging, enable reuse and repurposing of logistical packaging and carriers for other habitation, life support, crew health, and propulsion functions, and reduce or eliminate the nuisance aspects of trash at the same time. Repurposing reduces the trash burden and eliminates the need for hardware whose function can be provided by use of spent logistical items. However, these reuse functions need to be identified and built into future logistics systems to enable them to effectively have a secondary function. These technologies and innovations will help future logistics systems to support multiple exploration missions much more efficiently.
Non-functional Avionics Requirements
NASA Astrophysics Data System (ADS)
Paulitsch, Michael; Ruess, Harald; Sorea, Maria
Embedded systems in aerospace become more and more integrated in order to reduce the weight, volume/size, and power of hardware for greater fuel efficiency. Such integration tendencies change architectural approaches, which subsequently change non-functional requirements for platforms. This paper provides some insight into the state of the practice of non-functional requirements for developing ultra-critical embedded systems in the aerospace industry, including recent changes and trends. In particular, formal requirement capture and formal analysis of non-functional requirements of avionic systems - including hard real-time, fault-tolerance, reliability, and performance - are exemplified by means of recent developments in SAL and HiLiTE.
Ordway, Gregory A; Jia, Weihong; Li, Jing; Zhu, Meng-Yang; Mandela, Prashant; Pan, Jun
2005-04-30
Previous research has shown that exposure of norepinephrine transporter (NET)-expressing cells to desipramine (DMI) downregulates the norepinephrine transporter, although changes in the several transporter parameters do not follow the same time course. Exposures to desipramine for <1 day reduce only radioligand binding and uptake capacity, while transporter immunoreactivity is unaffected. The recent demonstration of persistent drug retention in cells following desipramine exposure raises the possibility that previously reported changes in the norepinephrine transporter may be partly accounted for by residual drug. In this study, the potential effects of residual desipramine on norepinephrine transporter binding and uptake were re-evaluated following exposures of PC12 cells to desipramine, using different methods to remove residual drug. Using a method that minimizes residual drug, exposure of intact PC12 cells to desipramine for 4 h had no effect on uptake capacity or [(3)H]nisoxetine binding to the norepinephrine transporter, while exposures for > or =16 h reduced uptake capacity. Desipramine-induced reductions in binding to the transporter required desipramine exposures of >24 h. This study confirms that the uptake capacity of the norepinephrine transporter is reduced earlier than radioligand binding, but with a different time course than originally shown. Special pre-incubation procedures are required to abolish the effects of residual transporter inhibitor when studying inhibitor-induced transporter regulation.
Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing
NASA Astrophysics Data System (ADS)
Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim
2011-03-01
Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU clusters, GPUs, FPGAs, etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders of magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage, etc.), and completely avoid purchase, maintenance, and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
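The structure of the parallelized projector is simple to sketch. The toy below mimics the map step (project one view) and the reduce/collect step using a local process pool; matrix contents and dimensions are illustrative, and a real MapReduce job would distribute the view matrices rather than share memory.

```python
import numpy as np
from multiprocessing import Pool

# Toy stand-in for a MapReduce-parallelized forward projector: each "map"
# task projects one view (one row block of the system matrix); the
# "reduce"/collect step assembles the sinogram. Illustrative only.
n_pix, n_views, det_bins = 4096, 64, 128
rng = np.random.default_rng(0)
image = rng.random(n_pix)
view_matrices = [rng.random((det_bins, n_pix)) * 1e-3 for _ in range(n_views)]

def forward_project_view(view_idx):          # the "map" task
    return view_matrices[view_idx] @ image

if __name__ == "__main__":
    with Pool(4) as pool:
        partials = pool.map(forward_project_view, range(n_views))
    sinogram = np.stack(partials)            # the "reduce"/collect step
    print(sinogram.shape)                    # (64, 128)
```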
Reducing waste and errors: piloting lean principles at Intermountain Healthcare.
Jimmerson, Cindy; Weber, Dorothy; Sobek, Durward K
2005-05-01
The Toyota Production System (TPS), based on industrial engineering principles and operational innovations, is used to achieve waste reduction and efficiency while increasing product quality. Several key tools and principles, adapted to health care, have proved effective in improving hospital operations. Value Stream Maps (VSMs), which represent the key people, material, and information flows required to deliver a product or service, distinguish between value-adding and non-value-adding steps. The one-page Problem-Solving A3 Report guides staff through a rigorous and systematic problem-solving process. Pilot project at Intermountain Healthcare: In a pilot project, participants made many improvements, ranging from simple changes implemented immediately (for example, heart monitor paper not available when a patient presented with a dysrhythmia) to larger projects involving patient or information flow issues across multiple departments. Most of the improvements required little or no investment and reduced significant amounts of wasted time for front-line workers. In one unit, turnaround time for pathologist reports from an anatomical pathology lab was reduced from five to two days. TPS principles and tools are applicable to an endless variety of processes and work settings in health care and can be used to address critical challenges such as medical errors, escalating costs, and staffing shortages.
Preconditioned MoM Solutions for Complex Planar Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasenfest, B J; Jackson, D; Champagne, N
2004-01-23
The numerical analysis of large arrays is a complex problem. Several techniques are currently under development in this area. One such technique is FAIM (Faster Adaptive Integral Method). This method uses a modification of the standard AIM approach that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. These bases are then projected onto a regular grid of interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver. The method has been proven to greatly reduce solve time by speeding the matrix-vector product computation. The FAIM approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends FAIM by modifying it to allow for layered-material Green's functions and dielectrics. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the FAIM method is reported elsewhere; this contribution is limited to presenting new results.
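The FFT acceleration at the heart of AIM/FAIM rests on the fact that interactions on a regular grid form a (block-)Toeplitz operator whose matrix-vector product is a convolution. A minimal 1D sketch, assuming a symmetric translation-invariant kernel (the layered-medium Green's function and near-field corrections are omitted):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec_fft(col, row, x):
    # embed the Toeplitz operator in a circulant one of size 2n, then use FFTs,
    # turning an O(n^2) matvec into O(n log n)
    n = len(x)
    c = np.concatenate([col, [0.0], row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

n = 512
kernel = 1.0 / (1.0 + np.arange(n))      # stand-in for a Green's-function decay
x = np.random.default_rng(0).random(n)
print(np.allclose(toeplitz(kernel) @ x, toeplitz_matvec_fft(kernel, kernel, x)))
```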
Performance characteristics of a slagging gasifier for MHD combustor systems
NASA Technical Reports Server (NTRS)
Smith, K. O.
1979-01-01
The performance of a two stage, coal combustor concept for magnetohydrodynamic (MHD) systems was investigated analytically. The two stage MHD combustor is comprised of an entrained flow, slagging gasifier as the first stage, and a gas phase reactor as the second stage. The first stage was modeled by assuming instantaneous coal devolatilization, and volatiles combustion and char gasification by CO2 and H2O in plug flow. The second stage combustor was modeled assuming adiabatic instantaneous gas phase reactions. Of primary interest was the dependence of char gasification efficiency on first stage particle residence time. The influence of first stage stoichiometry, heat loss, coal moisture, coal size distribution, and degree of coal devolatilization on gasifier performance and second stage exhaust temperature was determined. Performance predictions indicate that particle residence times on the order of 500 msec would be required to achieve gasification efficiencies in the range of 90 to 95 percent. The use of a finer coal size distribution significantly reduces the required gasifier residence time for acceptable levels of fuel use efficiency. Residence time requirements are also decreased by increased levels of coal devolatilization. Combustor design efforts should maximize devolatilization by minimizing mixing times associated with coal injection.
Yet one more dwell time algorithm
NASA Astrophysics Data System (ADS)
Haberl, Alexander; Rascher, Rolf
2017-06-01
The current demand for ever more powerful and efficient microprocessors, e.g. for deep learning, has led to an ongoing trend of reducing the feature size of integrated circuits. These processors are patterned with EUV lithography, which enables 7 nm chips [1]. Producing mirrors that satisfy the associated requirements is a challenging task. Not only increasing requirements on imaging properties, but also new lens shapes, such as aspheres or lenses with free-form surfaces, require innovative production processes. These lenses need the new deterministic sub-aperture polishing methods that have been established in the past few years. Such polishing methods are characterized by an empirically determined TIF and local stock removal. One such deterministic polishing method is ion-beam figuring (IBF). The beam profile of an ion beam is adjusted to a nearly ideal Gaussian shape by various parameters. With the known removal function, a dwell-time profile can be generated for each measured error profile. Such a profile is always generated pixel-accurately against the predetermined error profile, always with the aim of minimizing the existing surface structures up to the cut-off frequency of the tool used [2]. The processing success of a correction-polishing run depends decisively on the accuracy of the previously computed dwell-time profile, so the algorithm used to calculate the dwell time has to reflect reality accurately. Furthermore, the machine operator should have no influence on the dwell-time calculation; consequently, there must be no user-set parameters that influence the calculation result. Lastly, the algorithm should yield a minimum of machining time and a minimum of remaining error structures. Unfortunately, current dwell-time algorithms are divergent, user-dependent, prone to long processing times, and require several parameters to be set. This paper describes a realistic, convergent, and user-independent dwell-time algorithm. Typical processing times are reduced to about 80% and in some cases 50% of those of conventional algorithms (Lucy-Richardson, Van Cittert, ...) as used in established machines. To verify its effectiveness, a plane surface was machined by IBF.
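As a concrete illustration of the dwell-time problem (my own minimal sketch, not the paper's algorithm): the predicted stock removal is the convolution of the dwell time with the TIF, so computing dwell time is a deconvolution; the clipped Van-Cittert-style iteration below enforces non-negative dwell times.

```python
import numpy as np

def dwell_time(error, tif, n_iter=200, relax=1.0):
    # iterate t <- t + relax*(error - tif * t), clipping to keep t >= 0
    t = np.zeros_like(error)
    for _ in range(n_iter):
        removal = np.convolve(t, tif, mode="same")    # predicted stock removal
        t = np.maximum(t + relax * (error - removal), 0.0)
    return t

x = np.linspace(-1, 1, 400)
tif = np.exp(-x**2 / 0.005)
tif /= tif.sum()                                      # unit-volume Gaussian TIF
error = 0.5 + 0.3 * np.sin(4 * np.pi * x)             # surface error profile
t = dwell_time(error, tif)
print(np.abs(np.convolve(t, tif, mode="same") - error).max())  # residual error
```

Convergence and edge behavior of such iterations depend on the TIF and the error profile, which is precisely the user-dependence the paper sets out to remove.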
NASA Astrophysics Data System (ADS)
Sedlak, Kamil; Bruzzone, Pierluigi
2015-12-01
In the design of the future DEMO fusion reactor, a long time constant (∼23 s) is required for an emergency current dump in the toroidal field (TF) coils, e.g. in case of quench detection. This requirement is driven mainly by a limit imposed on the forces on mechanical structures, namely the vacuum vessel. As a consequence, the superconducting cable-in-conduit conductors (CICC) of the TF coil have to withstand heat dissipation lasting tens of seconds at the section where the quench started. During that time, the heat is partially absorbed by the (massive) steel conduit and electrical insulation, thus reducing the hot-spot temperature relative to an estimate based strictly on the enthalpy of the strand bundle. A dedicated experiment has been set up at CRPP to investigate the radial heat propagation and the hot-spot temperature in a CICC with a 10 mm thick steel conduit and a 2 mm thick glass-epoxy outer electrical insulation. The medium-size, ∅ = 18 mm, NbTi CICC was powered by an operating current of up to 10 kA. The temperature profile was monitored by 10 temperature sensors. The current dump conditions, namely the decay time constant and the quench detection delay, were varied. The experimental results show that the thick conduit contributes significantly to the overall enthalpy balance, and consequently reduces the amount of copper required for quench protection in superconducting cables for fusion reactors.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.
2006-12-01
Principal contributors to the noise in differential SAR interferograms are the temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 of the critical baseline have substantial geometrical decorrelation for distributed targets. Short-baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers that do not exhibit the geometrical decorrelation associated with large baselines can be identified in the scenes. In this approach interferograms are formed from a stack of SAR complex images using a single reference scene; stable distributed-scatterer pixels are excluded, however, due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase gradient per pixel. Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower-frequency systems (L- vs. C-band) substantially improve the ability to unwrap the phase correctly by directly reducing both the interferometric phase amplitude and temporal decorrelation.
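A hedged sketch of the short-baseline least-squares step described above, for a single pixel with synthetic data: each unwrapped interferogram constrains the phase difference between its two acquisition dates, and the per-date displacement history is recovered by linear least squares with date 0 fixed at zero. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dates = 6
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2), (2, 4), (3, 5)]
truth = np.cumsum(np.r_[0.0, 0.2 * rng.standard_normal(dates - 1)])  # displacement history

A = np.zeros((len(pairs), dates - 1))       # unknowns: displacement at dates 1..5
obs = np.empty(len(pairs))
for k, (i, j) in enumerate(pairs):
    if i > 0:
        A[k, i - 1] = -1.0
    if j > 0:
        A[k, j - 1] = 1.0
    obs[k] = truth[j] - truth[i] + 0.01 * rng.standard_normal()  # unwrapped pair phase

est, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(np.round(est - truth[1:], 3))         # residuals near zero
```

A network of pairs that leaves a date disconnected makes A rank-deficient, which is the algebraic face of the temporal-gap problem the abstract describes.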
Bosco, Laura; Zhou, Cheng; Murdoch, John A C; Bicknell, Ryan; Hopman, Wilma M; Phelan, Rachel; Shyam, Vidur
2017-10-01
Arthroscopic shoulder surgery can be performed with an interscalene brachial plexus block (ISBPB) alone, ISBPB combined with general anesthesia (GA), or GA alone. Postoperative pain is typically managed with opioids; however, both GA and opioids have adverse effects which can delay discharge. This retrospective study compares the efficacy of four methods of anesthesia management for arthroscopic shoulder surgery. Charts of all patients who underwent shoulder surgery by a single surgeon from 2012-2015 were categorized by analgesic regimen: GA only (n = 177), single-shot ISBPB only (n = 124), or pre- vs postoperative ISBPB combined with GA (ISBPB + GA [n = 72] vs GA + ISBPB [n = 52], respectively). The primary outcome measure was the time to discharge from the postanesthesia care unit (PACU). Mean (SD) time in the PACU ranged from 70.5 (39.9) min for ISBPB only to 111.2 (56.9) min for GA only. Use of ISBPB in any combination and regardless of timing resulted in significantly reduced PACU time, with a mean drop of 27.2 min (95% confidence interval [CI], 17.3 to 37.2; P < 0.001). The largest mean pairwise difference was between GA only and ISBPB only, with a mean difference of 40.7 min (95% CI, 25.5 to 55.8; P < 0.001). Use of ISBPB also reduced pain upon arrival at the PACU and, in some cases, upon discharge from the PACU (i.e., ISBPB only but not ISBPB + GA compared with GA). An ISBPB (alone or prior to GA) also reduced analgesic requirements. Previously reported benefits of an ISBPB for arthroscopic shoulder surgery are confirmed. Postoperative ISBPBs may also be beneficial for reducing pain and opioid requirements and could be targeted for patients in severe pain upon emergence. A sufficiently powered randomized-controlled trial could determine the relative efficacy, safety, and associated financial implications associated with each method.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
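A toy illustration of the estimation idea (assumptions mine: an RBF kernel and Gaussian observation noise): noisy error-rate estimates extracted from past error correction data are smoothed and extrapolated with Gaussian-process regression.

```python
import numpy as np

def rbf(a, b, length=5.0, var=1e-6):
    # squared-exponential kernel; var sets the prior variance of the rate
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

rng = np.random.default_rng(0)
t_obs = np.arange(0.0, 50.0, 2.0)                 # rounds with rate estimates
p_true = 1e-3 * (1 + 0.3 * np.sin(t_obs / 8))     # slowly drifting error rate
p_obs = p_true + 5e-5 * rng.standard_normal(len(t_obs))

t_new = np.arange(0.0, 60.0, 1.0)                 # includes a prediction window
K = rbf(t_obs, t_obs) + (5e-5) ** 2 * np.eye(len(t_obs))
mean = rbf(t_new, t_obs) @ np.linalg.solve(K, p_obs - p_obs.mean()) + p_obs.mean()
print(mean[-5:])                                  # predicted near-future rates
```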
Mirzaei, Tayebeh; Oskouie, Fatemeh; Rafii, Forough
2012-03-01
In the course of their studies, nursing students must learn many skills and acquire the knowledge required for their future profession. This study investigates how Iranian nursing students manage their time according to the circumstances and obstacles of their academic field. Research was conducted using the grounded theory method. Twenty-one nursing students were purposefully chosen as participants. Data was collected through semi-structured interviews and analyzed using the method suggested by Corbin and Strauss. One of the three processes that the nursing students used was "unidirectional time management." This pattern consists of accepting the nursing field, overcoming uncertainty, assessing conditions, feeling stress, and trying to reduce stress and create satisfaction. It was found that students allotted most of their time to academic tasks in an attempt to overcome their stress. The findings of this study indicate the need for these students to have time for the extra-curricular activities and responsibilities that are appropriate to their age. © 2012 Blackwell Publishing Asia Pty Ltd.
NASA Technical Reports Server (NTRS)
Bosworth, John T.; Burken, John J.
1997-01-01
Safety and productivity of the initial flight test phase of a new vehicle have been enhanced by developing the ability to measure the stability margins of the combined control system and vehicle in flight. One shortcoming of performing this analysis is the long duration of the excitation signal required to provide results over a wide frequency range. For flight regimes such as high angle of attack or hypersonic flight, the ability to maintain flight condition for this time duration is difficult. Significantly reducing the required duration of the excitation input is possible by tailoring the input to excite only the frequency range where the lowest stability margin is expected. For a multiple-input/multiple-output system, the inputs can be simultaneously applied to the control effectors by creating each excitation input with a unique set of frequency components. Chirp-Z transformation algorithms can be used to match the analysis of the results to the specific frequencies used in the excitation input. This report discusses the application of a tailored excitation input to a high-fidelity X-31A linear model and nonlinear simulation. Depending on the frequency range, the results indicate the potential to significantly reduce the time required for stability measurement.
Fast and Non-Toxic In Situ Hybridization without Blocking of Repetitive Sequences
Matthiesen, Steen H.; Hansen, Charles M.
2012-01-01
Formamide is the preferred solvent to lower the melting point and annealing temperature of nucleic acid strands in in situ hybridization (ISH). A key benefit of formamide is better preservation of morphology due to a lower incubation temperature. However, in fluorescence in situ hybridization (FISH), against unique DNA targets in tissue sections, an overnight hybridization is required to obtain sufficient signal intensity. Here, we identified alternative solvents and developed a new hybridization buffer that reduces the required hybridization time to one hour (IQFISH method). Remarkably, denaturation and blocking against repetitive DNA sequences to prevent non-specific binding is not required. Furthermore, the new hybridization buffer is less hazardous than formamide containing buffers. The results demonstrate a significant increased hybridization rate at a lowered denaturation and hybridization temperature for both DNA and PNA (peptide nucleic acid) probes. We anticipate that these formamide substituting solvents will become the foundation for changes in the understanding and performance of denaturation and hybridization of nucleic acids. For example, the process time for tissue-based ISH for gene aberration tests in cancer diagnostics can be reduced from days to a few hours. Furthermore, the understanding of the interactions and duplex formation of nucleic acid strands may benefit from the properties of these solvents. PMID:22911704
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoneking, M.R.; Lanier, N.E.; Prager, S.C.
1996-12-01
Current profile control is employed in the Madison Symmetric Torus reversed field pinch to reduce the magnetic fluctuations responsible for anomalous transport. An inductive poloidal electric field pulse is applied in the sense to flatten the parallel current profile, reducing the dynamo fluctuation amplitude required to sustain the equilibrium. This technique demonstrates a substantial reduction in fluctuation amplitude (as much as 50%), and improvement in energy confinement (from 1 ms to 5 ms); a record low fluctuation (0.8%) and record high temperature (615 eV) for this device were observed simultaneously during current drive experiments. Plasma beta increases by 50% and the Ohmic input power is three times lower. Particle confinement improves and plasma impurity contamination is reduced. The results of the transient current drive experiments provide motivation for continuing development of steady-state current profile control strategies for the reversed field pinch.
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model; a technique to do so is discussed, and selected results from high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which will control the engine simulation over the desired flight envelope.
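The report does not name the order-reduction technique in this abstract, so the following is a hedged sketch of one standard option, modal truncation (my choice for illustration, not necessarily the report's method): keep the slowest eigenmodes of the state matrix and project the input and output matrices accordingly. The 4-state system is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([-0.5, -2.0, -40.0, -90.0]) + 0.1 * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 1))
C = rng.standard_normal((1, 4))

w, V = np.linalg.eig(A)
keep = np.argsort(-w.real)[:2]          # retain the two slowest (dominant) modes
Ar = np.diag(w[keep])                   # reduced state matrix (may be complex;
Br = np.linalg.inv(V)[keep] @ B         #  a real implementation would keep
Cr = C @ V[:, keep]                     #  conjugate pairs together and realify)
print(w.real, "->", np.diag(Ar).real)
```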
Besley, Nicholas A
2016-10-11
The computational cost of calculating K-edge X-ray absorption spectra using time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation is significantly reduced through the introduction of a severe integral screening procedure that includes only integrals involving the core s basis function of the absorbing atom(s), coupled with reduced-quality numerical quadrature for integrals associated with the exchange and correlation functionals. The memory required for the calculations is reduced through construction of the TDDFT matrix within the excitation space of the absorbing core orbitals and by exploiting further truncation of the virtual orbital space. The resulting method, denoted fTDDFTs, leads to much faster calculations and makes the study of large systems tractable. The capability of the method is demonstrated through calculations of the X-ray absorption spectra at the carbon K-edge of chlorophyll a, C60, and C70.
Nuclear Thermal Propulsion Development Risks
NASA Technical Reports Server (NTRS)
Kim, Tony
2015-01-01
There are clear advantages to development of Nuclear Thermal Propulsion (NTP) for a crewed mission to Mars. NTP for in-space propulsion enables more ambitious space missions by providing high thrust at high specific impulse (approximately 900 sec), which is 2 times the best theoretical performance possible for chemical rockets. Missions can be optimized for maximum payload capability to take more payload with reduced total mass to orbit, saving cost by reducing the number of launch vehicles needed. Or missions can be optimized to minimize trip time significantly, reducing the deep space radiation exposure to the crew. NTR propulsion technology is a game changer for space exploration to Mars and beyond. However, 'NUCLEAR' is a word that is feared and vilified by some groups, and hostility towards development of any nuclear system can meet great opposition from the public as well as from national leaders and people in authority. The public often associates the 'nuclear' word with weapons of mass destruction. The development of NTP is at risk due to unwarranted public fears, and clear, honest communication about nuclear safety will be critical to the success of the development of NTP technology. Reducing the cost of NTP development is critical to its acceptance and funding. In the past, highly inflated cost estimates for full-scale development of a nuclear engine, driven by Category I nuclear security requirements and costly regulatory requirements, have made NTP technology a low priority. Innovative approaches utilizing low enriched uranium (LEU) can reduce these security and regulatory costs. Even though NTP can be a small source of radiation to the crew, NTP can facilitate significant reduction of crew exposure to solar and cosmic radiation by reducing trip times by 3-4 months. Current Human Mars Mission (HMM) trajectories with conventional propulsion systems and fuel-efficient transfer orbits exceed astronaut radiation exposure limits. Utilizing extra propellant from one additional SLS launch and the available energy in the NTP fuel, HMM radiation exposure can be reduced significantly.
Greenwood-Hickman, Mikael Anne; Rosenberg, Dori E; Phelan, Elizabeth A; Fitzpatrick, Annette L
2015-06-11
Physical activity is known to prevent falls; however, use of widely available exercise programs for older adults, including EnhanceFitness and Silver Sneakers, has not been examined in relation to effects on falls among program participants. We aimed to determine whether participation in EnhanceFitness or Silver Sneakers is associated with a reduced risk of falls resulting in medical care. A retrospective cohort study examined a demographically representative sample from a Washington State integrated health system. Health plan members aged 65 or older, including 2,095 EnhanceFitness users, 13,576 Silver Sneakers users, and 55,127 nonusers from 2005 through 2011, were classified as consistent users (used a program ≥2 times in all years they were enrolled in the health plan during the study period), intermittent users (used a program ≥2 times in 1 or more years enrolled but not all years), or nonusers of EnhanceFitness or Silver Sneakers. The main outcome was measured as time-to-first-fall requiring inpatient or out-of-hospital medical treatment based on the International Classification of Diseases, 9th Revision, Clinical Modification, Sixth Edition and E-codes. In fully adjusted Cox proportional hazards models, consistent (hazard ratio [HR], 0.74; 95% confidence interval [CI], 0.63-0.88) and intermittent (HR, 0.87; 95% CI, 0.80-0.94) EnhanceFitness participation were both associated with a reduced risk of falls resulting in medical care. Intermittent Silver Sneakers participation showed a reduced risk (HR, 0.93; 95% CI, 0.90-0.97). Participation in widely available community-based exercise programs geared toward older adults (but not specific to fall prevention) reduced the risk of medical falls. Structured programs that include balance and strength exercise, as EnhanceFitness does, may be effective in reducing fall risk.
Cytosolic thioredoxin reductase 1 is required for correct disulfide formation in the ER.
Poet, Greg J; Oka, Ojore Bv; van Lith, Marcel; Cao, Zhenbo; Robinson, Philip J; Pringle, Marie Anne; Arnér, Elias Sj; Bulleid, Neil J
2017-03-01
Folding of proteins entering the secretory pathway in mammalian cells frequently requires the insertion of disulfide bonds. Disulfide insertion can result in covalent linkages found in the native structure as well as those that are not, so-called non-native disulfides. The pathways for disulfide formation are well characterized, but our understanding of how non-native disulfides are reduced so that the correct or native disulfides can form is poor. Here, we use a novel assay to demonstrate that the reduction of non-native disulfides requires NADPH as the ultimate electron donor, and a robust cytosolic thioredoxin system, driven by thioredoxin reductase 1 (TrxR1 or TXNRD1). Inhibition of this reductive pathway prevents the correct folding and secretion of proteins that are known to form non-native disulfides during their folding. Hence, we have shown for the first time that mammalian cells have a pathway for transferring reducing equivalents from the cytosol to the ER, which is required to ensure correct disulfide formation in proteins entering the secretory pathway. © 2017 The Authors. Published under the terms of the CC BY 4.0 license.
Postoperative morphine requirements of free TRAM and DIEP flaps.
Kroll, S S; Sharma, S; Koutz, C; Langstein, H N; Evans GRD; Robb, G L; Chang, D W; Reece, G P
2001-02-01
In a review of the charts of 158 patients who had undergone breast reconstruction with free transverse rectus abdominis musculocutaneous (TRAM) or deep inferior epigastric perforator (DIEP) flaps and who were treated for postoperative pain with morphine administered by a patient-controlled analgesia pump, the total dose of morphine administered during hospitalization for the flap transfer was measured. Patients whose treatment was supplemented by other intravenous narcotics were excluded from the study. The mean amount of morphine per kilogram required by patients who had reconstruction with DIEP flaps (0.74 mg/kg, n = 26) was found to be significantly less than the amount required by patients who had reconstruction with TRAM flaps (1.65 mg/kg; n = 132; p < 0.001). DIEP flap patients also remained in the hospital less time (mean, 4.73 days) than did free TRAM flap patients (mean, 5.21 days; p = 0.026), but the difference was less than one full hospital day. It was concluded that the use of the DIEP flap does reduce the patient requirement for postoperative pain medication and therefore presumably reduces postoperative pain. It may also slightly shorten hospital stay.
Tabletop computed lighting for practical digital photography.
Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby
2007-01-01
We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
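A hedged sketch of the selection step (my simplification of the paper's sketch-guided optimization): find non-negative weights on the candidate single-light photos so their weighted sum matches the target; non-negative least squares tends to drive most weights to zero, yielding a small photo set.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
photos = rng.random((12, 1024))              # 12 candidate photos, flattened pixels
target = 0.7 * photos[3] + 0.3 * photos[8]   # stand-in for the user's target sketch
w, res = nnls(photos.T, target)              # photos as columns of the design matrix
print(np.round(w, 2))                        # weight mass lands on photos 3 and 8
```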
Harmony search optimization algorithm for a novel transportation problem in a consolidation network
NASA Astrophysics Data System (ADS)
Davod Hosseini, Seyed; Akbarpour Shirazi, Mohsen; Taghi Fatemi Ghomi, Seyed Mohammad
2014-11-01
This article presents a new harmony search optimization algorithm to solve a novel integer programming model developed for a consolidation network. In this network, a set of vehicles is used to transport goods from suppliers to their corresponding customers via two transportation systems: direct shipment and milk run logistics. The objective of this problem is to minimize the total shipping cost in the network, so it tries to reduce the number of required vehicles using an efficient vehicle routing strategy in the solution approach. Solving several numerical examples confirms that the proposed solution approach based on the harmony search algorithm performs much better than CPLEX in reducing both the shipping cost in the network and computational time requirement, especially for realistic size problem instances.
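For readers unfamiliar with the metaheuristic, here is a bare-bones continuous harmony search on a generic objective; the paper's integer routing encoding, constraints, and cost function are omitted, and all parameter values are illustrative.

```python
import numpy as np

def harmony_search(f, dim, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
    rng = np.random.default_rng(0)
    hm = rng.uniform(lo, hi, (hms, dim))          # harmony memory
    cost = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:               # memory consideration
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:            # pitch adjustment
                    new[d] += bw * rng.uniform(-1, 1) * (hi - lo)
            else:                                 # random selection
                new[d] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = np.argmax(cost)
        if c < cost[worst]:                       # replace the worst harmony
            hm[worst], cost[worst] = new, c
    return hm[np.argmin(cost)], cost.min()

best, val = harmony_search(lambda x: np.sum(x**2), dim=5, lo=-5.0, hi=5.0)
print(val)  # close to 0 for this convex test objective
```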
Colton, Katharine; Yang, S; Hu, P F; Chen, H H; Bonds, B; Stansbury, L G; Scalea, T M; Stein, D M
2016-05-01
Past work has shown the importance of the "pressure times time dose" (PTD) of intracranial hypertension (intracranial pressure [ICP] > 19 mm Hg) in predicting outcome after severe traumatic brain injury. We used automated data collection to measure the effect of common medications on the duration and dose of intracranial hypertension. Patients >17 years old, admitted and requiring ICP monitoring between 2008 and 2010 at a single, large urban tertiary care facility, were retrospectively enrolled. Timing and dose of ICP-directed therapy were recorded from paper and electronic medical records. The ICP data were collected automatically at 6-second intervals and averaged over 5 minutes. The percentage of time of intracranial hypertension (PTI) and PTD (mm Hg h) were calculated. A total of 98 patients with 664 treatment instances were identified. Baseline PTD ranged from 27 (before administration of propofol and fentanyl) to 150 mm Hg h (before mannitol). A "small" dose of hypertonic saline (HTS; ≤250 mL 3%) reduced PTD by 38% in the first hour and 37% in the second hour and reduced the time with ICP >19 by 38% and 39% after 1 and 2 hours, respectively. A "large" dose of HTS reduced PTD by 40% in the first hour and 63% in the second (PTI reduction of 36% and 50%, respectively). An increased dose of propofol or fentanyl infusion failed to decrease PTD but reduced PTI between 14% (propofol alone) and 30% (combined increase in propofol and fentanyl, after 2 hours). Barbiturates failed to decrease PTD but decreased PTI by 30% up to 2 hours after administration. All reductions reported are significantly changed from baseline, P < .05. Baseline PTD values before drug administration reflects varied patient criticality, with much higher values seen before the use of mannitol or barbiturates. Treatment with HTS reduced PTD and PTI burden significantly more than escalation of sedation or pain management, and this effect remained significant at 2 hours after administration. © The Author(s) 2014.
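A small sketch of the dose metrics as I read them from the abstract (the exact clinical definitions may differ): PTD integrates the pressure excess above the 19 mm Hg threshold over time, and PTI is the percentage of time above threshold, computed here from 5-minute averaged samples.

```python
import numpy as np

def icp_doses(icp, threshold=19.0, dt_hours=5.0 / 60.0):
    over = np.maximum(icp - threshold, 0.0)
    ptd = np.sum(over) * dt_hours             # pressure times time dose, mm Hg h
    pti = np.mean(icp > threshold) * 100.0    # percent of time hypertensive
    return ptd, pti

icp = 15 + 8 * np.abs(np.sin(np.linspace(0, 6, 288)))   # one synthetic day of ICP
print(icp_doses(icp))
```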
Multiple Input Design for Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene
2003-01-01
A method for designing multiple inputs for real-time dynamic system identification in the frequency domain was developed and demonstrated. The designed inputs are mutually orthogonal in both the time and frequency domains, with reduced peak factors to provide good information content for relatively small amplitude excursions. The inputs are designed for selected frequency ranges, and therefore do not require a priori models. The experiment design approach was applied to identify linear dynamic models for the F-15 ACTIVE aircraft, which has multiple control effectors.
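A minimal sketch of the frequency-domain idea (parameter values are illustrative, not the flight-test design): each effector receives a multisine built from a disjoint set of harmonic lines, making the inputs mutually orthogonal over the record length, with Schroeder phases to reduce the peak factor.

```python
import numpy as np

def multisine(lines, T=20.0, fs=100.0):
    t = np.arange(0, T, 1.0 / fs)
    u = np.zeros_like(t)
    for k, m in enumerate(lines):                  # m-th harmonic of 1/T
        phase = -np.pi * k * (k + 1) / len(lines)  # Schroeder phase schedule
        u += np.cos(2 * np.pi * m / T * t + phase)
    return u / np.abs(u).max()                     # unit peak amplitude

u1 = multisine([2, 4, 6, 8, 10])                   # input for effector 1
u2 = multisine([3, 5, 7, 9, 11])                   # disjoint lines: effector 2
print(abs(np.dot(u1, u2)) / np.dot(u1, u1))        # near zero: orthogonal inputs
```

Because the frequency lines are disjoint, the response to each effector can be separated in the analysis, which is what permits simultaneous excitation and shorter test durations.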
2016-09-01
This work considers the UAV's reliability in fulfilling the mission as well as the build time of the UAV (subject terms: design, print and operate, DPO). There are opportunities to work on the design of the UAV to reduce the cognitive workload of the service member and the time required to "print" the UAV when the need arises to tailor it for a specific mission. The modification of an existing design is expected to take a much shorter time than a new design.
Zhao, Peng; Sun, Jian-Jun; Wu, Tai-Hu
2008-11-01
Real-time temperature monitoring is required in the cold chain for temperature-sensitive medical products, such as blood and vaccines, to guarantee quality and reduce wastage. A wireless cold-chain monitoring system was developed with Zigbee technology. Functions such as real-time monitoring, analysis, and alarming are implemented. The system features low power consumption, low cost, large capacity, and high reliability, and can effectively improve the capability of real-time monitoring and management in the cold chain.
Implementation of RF Circuitry for Real-Time Digital Beam-Forming SAR Calibration Schemes
NASA Technical Reports Server (NTRS)
Horst, Stephen J.; Hoffman, James P.; Perkovic-Martin, Dragana; Shaffer, Scott; Thrivikraman, Tushar; Yates, Phil; Veilleux, Louise
2012-01-01
The SweepSAR architecture for space-borne remote sensing applications is an enabling technology for reducing the temporal baseline of repeat-pass interferometers while maintaining near-global coverage. As part of this architecture, real-time digital beam-forming would be performed on the radar return signals across multiple channels. Preserving the accuracy of the combined return data requires real-time calibration of the transmit and receive RF paths on each channel. This paper covers several of the design considerations necessary to produce a practical implementation of this concept.
The impact of vaporized nanoemulsions on ultrasound-mediated ablation.
Zhang, Peng; Kopechek, Jonathan A; Porter, Tyrone M
2013-01-01
The clinical feasibility of using high-intensity focused ultrasound (HIFU) for ablation of solid tumors is limited by the high acoustic pressures and long treatment times required. The presence of microbubbles during sonication can increase the absorption of acoustic energy and accelerate heating. However, formation of microbubbles within the tumor tissue remains a challenge. Phase-shift nanoemulsions (PSNE) have been developed as a means for producing microbubbles within tumors. PSNE are emulsions of submicron-sized, lipid-coated, liquid perfluorocarbon droplets that can be vaporized into microbubbles using short (<1 ms), high-amplitude (>5 MPa) acoustic pulses. In this study, the impact of vaporized phase-shift nanoemulsions on the time and acoustic power required for HIFU-mediated thermal lesion formation was investigated in vitro. PSNE containing dodecafluoropentane were produced with narrow size distributions and mean diameters below 200 nm using a combination of sonication and extrusion. PSNE was dispersed in albumin-containing polyacrylamide gel phantoms for experimental tests. Albumin denatures and becomes opaque at temperatures above 58°C, enabling visual detection of lesions formed from denatured albumin. PSNE were vaporized using a 30-cycle, 3.2-MHz pulse at an acoustic power of 6.4 W (free-field intensity of 4,586 W/cm²) from a single-element, focused high-power transducer. The vaporization pulse was immediately followed by a 15-s continuous-wave, 3.2-MHz signal to induce ultrasound-mediated heating. Control experiments were conducted using an identical procedure without the vaporization pulse. Lesion formation was detected by acquiring video frames during sonication and post-processing the images for analysis. Broadband emissions from inertial cavitation (IC) were passively detected with a focused, 2-MHz transducer. Temperature measurements were acquired using a needle thermocouple. Bubbles formed at the HIFU focus via PSNE vaporization enhanced HIFU-mediated heating. Broadband emissions detected during HIFU exposure coincided in time with the measured accelerated heating, suggesting that IC played an important role in bubble-enhanced heating. In the presence of bubbles, the acoustic power required for the formation of a 9-mm³ lesion was reduced by 72% and the exposure time required for the onset of albumin denaturation was significantly reduced (by 4 s), provided that the PSNE volume fraction in the polyacrylamide gel was at least 0.008%. The time or acoustic power required for lesion formation in gel phantoms was dramatically reduced by vaporizing PSNE into bubbles. These results suggest that PSNE may improve the efficiency of HIFU-mediated thermal ablation of solid tumors; thus, further investigation is warranted to determine whether bubble-enhanced HIFU may become a viable option for cancer therapy.
Cache and energy efficient algorithms for Nussinov's RNA Folding.
Zhao, Chunchun; Sahni, Sartaj
2017-12-06
An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run-time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel i7, and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run-time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run-time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
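For reference, the Classical baseline that the cache-efficient variants reorder is the standard Nussinov recurrence; the sketch below maximizes base pairs only (no energy model) and omits traceback.

```python
# Minimal classical Nussinov recurrence: C[i][j] is the maximum number of
# nested base pairs in seq[i..j], filled in order of increasing span.
def nussinov(seq, min_loop=1):
    pair = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    C = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = max(C[i + 1][j], C[i][j - 1])     # i or j unpaired
            if (seq[i], seq[j]) in pair:
                best = max(best, C[i + 1][j - 1] + 1)  # i pairs with j
            for k in range(i + 1, j):                # bifurcation
                best = max(best, C[i][k] + C[k + 1][j])
            C[i][j] = best
    return C[0][n - 1]

print(nussinov("GGGAAAUCC"))  # small example
```

The cache behavior the paper studies comes entirely from the traversal order of this triangular table; ByRow and ByBox change that order without changing the recurrence.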
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.
Vergence-accommodation conflicts hinder visual performance and cause visual fatigue.
Hoffman, David M; Girshick, Ahna R; Akeley, Kurt; Banks, Martin S
2008-03-28
Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues (accommodation and blur in the retinal image) specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one's ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays.
Highly Scalable Matching Pursuit Signal Decomposition Algorithm
NASA Technical Reports Server (NTRS)
Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.
2009-01-01
Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may emphasize either accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce large performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real-life data. Computational times were reduced by factors of 1.9 and 44 for the emphases of accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Due to the nature of the three proposed modifications, they are capable of being stacked and have cumulative effects on the reduction of the time complexity.
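A plain matching pursuit baseline (my illustration; MPD++'s correlation thresholding, coarse-fine grids, and multi-atom extraction are not shown), using unit-norm cosine atoms so the greedy selection is easy to verify:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    residual = signal.astype(float).copy()
    atoms = []
    for _ in range(n_iter):
        corr = dictionary @ residual             # atoms are unit-norm rows
        k = np.argmax(np.abs(corr))              # best-fit atom by correlation
        atoms.append((k, corr[k]))
        residual -= corr[k] * dictionary[k]      # subtract the selected atom
    return atoms, residual

n = 256
t = np.arange(n)
dictionary = np.array([np.cos(2 * np.pi * f * t / n) for f in range(1, 64)])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
signal = 3 * dictionary[10] + 0.5 * dictionary[40]
atoms, res = matching_pursuit(signal, dictionary, n_iter=2)
print(atoms, np.linalg.norm(res))                # picks atoms 10 and 40
```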
Reducing Barriers To The Use of High-Efficiency Lighting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peter Morante
2005-12-31
With funding from the U.S. Department of Energy (DOE), the Lighting Research Center (LRC) at Rensselaer Polytechnic Institute completed the four-year research project, Reducing Barriers to the Use of High-Efficiency Lighting Systems. The initial objectives were: (1) identifying barriers to widespread penetration of lighting controls in commercial/industrial (C/I) applications that employ fluorescent lamp technologies, and (2) making recommendations to overcome these barriers. The addition of a fourth year expanded the original project objectives to include an examination of the impact on fluorescent lamps of dimming utilizing different lamp electrode heating and dimming ratios. The scope of the project was narrowed to identify barriers to the penetration of lighting controls into commercial/industrial (C/I) applications that employ fluorescent lamp technologies, and to recommend means for overcoming these barriers. Working with lighting manufacturers, specifiers, and installers, the project identified technological and marketing barriers to the widespread use of lighting controls, specifically automatic-off controls, occupancy sensors, photosensors, dimming systems, communication protocols, and load-shedding ballasts. The primary barriers identified include the cost-effectiveness of lighting controls to the building owner, lack of standard communication protocols to allow different parts of the control system to communicate effectively, and installation and commissioning issues. Overcoming the identified barriers requires lighting control products on the market to achieve three main goals: (1) Achieve sufficient functionality to meet the key requirements of their main market. (2) Allow significant cost reduction compared to current market-standard systems. Cost should consider: hardware capital cost including wiring, design time required by the specifier and the control system manufacturer, installation time required by the electrician, and commissioning and remedial time required by the electrician and end user. (3) Minimize ongoing perceived overhead costs and inconvenience to the end user; in other words, systems should be simple to understand and use. In addition, we believe that no lighting controls solution is effective or acceptable unless it contributes to, or does not compromise, the following goals: (1) Productivity--Planning, installation, commissioning, maintenance, and use of controls should not decrease business productivity; (2) Energy savings--Lighting controls should save significant amounts of energy and money in relation to the expense involved in using them (acceptable payback period); and/or (3) Reduced power demand--Society as a whole should benefit from the lowered demand for expensive power and for natural resources. Discussions of technology barriers and developments are insufficient by themselves to achieve higher penetration of lighting controls in the marketplace. Technology transfer efforts must play a key role in gaining market acceptance. The LRC developed a technology transfer model to better understand what actions are required, and by whom, to move any technology toward full market acceptance.
James Webb Space Telescope: Supporting Multiple Ground System Transitions in One Year
NASA Technical Reports Server (NTRS)
Detter, Ryan; Fatig, Curtis; Steck, Jane
2004-01-01
Ideas, requirements, and concepts developed during the very early phases of mission design often conflict with the reality of the situation once the prime contractors are selected. This happened for the James Webb Space Telescope (JWST) as well. The high-level requirement of a common real-time ground system for both the Integration and Test (I&T) and the Operations phase of the mission is meant to reduce the cost and time needed later in mission development for re-certification of databases, command and control systems, scripts, display pages, etc. In the case of JWST, the early Phase A flight software development needed a real-time ground system and database before the spacecraft prime contractor was selected. To compound the situation, the very low-level requirements for the real-time ground system were not well defined. These two situations caused the initial real-time ground system to be switched out for a system that had previously been used by the flight software development team. To meet the high-level requirement, a third ground system was selected based on the prime spacecraft contractor's needs and JWST Project decisions. The JWST ground system team has responded to each of these changes successfully. The lessons learned from each transition have not only made each subsequent transition smoother, but have also resolved issues earlier in mission development than would normally occur.
A generalized voter model with time-decaying memory on a multilayer network
NASA Astrophysics Data System (ADS)
Zhong, Li-Xin; Xu, Wen-Juan; Chen, Rong-Da; Zhong, Chen-Yang; Qiu, Tian; Shi, Yong-Dong; Wang, Li-Liang
2016-09-01
By incorporating a multilayer network and time-decaying memory into the original voter model, we investigate the coupled effects of spatial and temporal accumulation of peer pressure on the consensus. Heterogeneity in peer pressure and the time-decaying mechanism are both shown to be detrimental to the consensus. We find the transition points below which a consensus can always be reached and above which two opposed opinions are more likely to coexist. Our mean-field analysis indicates that the phase transitions in the present model are governed by the cumulative influence of peer pressure and the updating threshold. A functional relation between the consensus threshold and the decay rate of the influence of peer pressure is found. The time required to reach a consensus is governed by the coupling of the memory length and the decay rate; an intermediate decay rate may greatly reduce the time required to reach a consensus.
Process improvement by cycle time reduction through Lean Methodology
NASA Astrophysics Data System (ADS)
Siva, R.; patan, Mahamed naveed khan; lakshmi pavan kumar, Mane; Purusothaman, M.; pitchai, S. Antony; Jegathish, Y.
2017-05-01
In the present world, every customer needs their products on time and with good quality, and every industry is striving to satisfy its customers' requirements. An aviation concern is trying to accomplish continuous improvement in all its projects. In this project, the maintenance service for the customer is analyzed. The maintenance part service is split into four levels. Of these, three levels are done in service shops, and the fourth level falls under the customer's privilege to change the parts in their aircraft engines at their location. An enhancement for electronics initial provisioning (eIP) is done for the fourth level. Customers request service shops to fulfill their requirements through a Recommended Spare Parts List (RSPL) generated by eIP. Completing this RSPL for one customer takes a cycle time of 61.5 hours, which is very high. By mapping the current-state VSM and takt time, future-state improvement can be made to reduce cycle time using Lean tools such as Poka-Yoke, Jidoka, 5S, and Muda elimination.
Compressive light field imaging
NASA Astrophysics Data System (ADS)
Ashok, Amit; Neifeld, Mark A.
2010-04-01
Light field imagers such as the plenoptic and integral imagers inherently measure projections of the four-dimensional (4D) light field scalar function onto a two-dimensional sensor and therefore suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatio-angular resolution trade-off and allow high-resolution capture of the 4D light field function with multiple measurements, at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus make photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time than a conventional light field imager to achieve an equivalent light field reconstruction quality.
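A toy numerical sketch of measuring in a principal component basis (entirely synthetic; the paper's optical architectures and reconstruction are more involved): when scenes lie near a low-dimensional subspace, K PC projections suffice to reconstruct them.

```python
import numpy as np

rng = np.random.default_rng(1)
latent = rng.standard_normal((20, 256))            # 20-dim "scene" subspace
train = rng.standard_normal((500, 20)) @ latent    # training light fields
_, _, Vt = np.linalg.svd(train - train.mean(0), full_matrices=False)
basis = Vt[:20]                                    # PC measurement basis

scene = rng.standard_normal(20) @ latent           # unseen test scene
m = basis @ scene                                  # 20 compressive measurements
recon = basis.T @ m                                # linear reconstruction
print(np.allclose(recon, scene))                   # True: scene lies in the subspace
```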
Low power pulsed MPD thruster system analysis and applications
NASA Astrophysics Data System (ADS)
Myers, Roger M.; Domonkos, Matthew; Gilland, James H.
1993-06-01
Pulsed MPD thruster systems were analyzed for application to solar-electric orbit transfer vehicles at power levels ranging from 10 to 40 kW. Potential system level benefits of pulsed propulsion technology include ease of power scaling without thruster performance changes, improved transportability from low power flight experiments to operational systems, and reduced ground qualification costs. Required pulsed propulsion system components include a pulsed applied-field MPD thruster, a pulse-forming network, a charge control unit, a cathode heater supply, and high speed valves. Mass estimates were obtained for each propulsion subsystem and spacecraft component. Results indicate that for payloads of 1000 and 2000 kg, pulsed MPD thrusters can reduce launch mass by between 1000 and 2500 kg relative to hydrogen arcjets, reducing launch vehicle class and launch cost. While the achievable mass savings depends on the trip time allowed for the mission, cases are shown in which the launch vehicle required for a mission is decreased from an Atlas IIAS to an Atlas I or Delta 7920.
DNA melting profiles from a matrix method.
Poland, Douglas
2004-02-05
In this article we give a new method for the calculation of DNA melting profiles. Based on the matrix formulation of the DNA partition function, the method relies for its efficiency on the fact that the required matrices are very sparse, essentially reducing matrix multiplication to vector multiplication and thus making the computer time required to treat a DNA molecule containing N base pairs proportional to N². A key ingredient in the method is the result that multiplication by the inverse matrix can also be reduced to vector multiplication. The task of calculating the melting profile for the entire genome is further reduced by treating regions of the molecule between helix-plateaus, thus breaking the molecule up into independent parts that can each be treated individually. The method is easily modified to incorporate changes in the assignment of statistical weights to the different structural features of DNA. We illustrate the method using the genome of Haemophilus influenzae. Copyright 2003 Wiley Periodicals, Inc.
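A minimal sketch of the computational point, using a generic sparse tridiagonal stand-in rather than Poland's actual DNA transfer matrix: each matrix-vector product in the recursion touches only O(N) stored entries, so repeated multiplication is effectively vector multiplication.

```python
import numpy as np
from scipy import sparse

N = 10_000
# Tridiagonal sparse "transfer" matrix: only O(N) stored entries (toy values).
diagonals = [np.full(N - 1, 0.3), np.full(N, 1.0), np.full(N - 1, 0.3)]
T = sparse.diags(diagonals, offsets=[-1, 0, 1], format="csr")

v = np.ones(N)
for _ in range(100):           # recursion steps: each product costs O(N), not O(N^2)
    v = T @ v
    v /= np.linalg.norm(v)     # normalize to avoid overflow
print(v[:3])
```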
Paramedir: A Tool for Programmable Performance Analysis
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Labarta, Jesus; Gimenez, Judit
2004-01-01
Performance analysis of parallel scientific applications is time consuming and requires great expertise in areas such as programming paradigms, system software, and computer hardware architectures. In this paper we describe a tool that facilitates the programmability of performance metric calculations thereby allowing the automation of the analysis and reducing the application development time. We demonstrate how the system can be used to capture knowledge and intuition acquired by advanced parallel programmers in order to be transferred to novice users.
Visual Analysis of Air Traffic Data
NASA Technical Reports Server (NTRS)
Albrecht, George Hans; Pang, Alex
2012-01-01
In this paper, we present visual analysis tools to help study the impact of policy changes on air traffic congestion. The tools support visualization of time-varying air traffic density over an area of interest using different time granularity. We use this visual analysis platform to investigate how changing the aircraft separation volume can reduce congestion while maintaining key safety requirements. The same platform can also be used as a decision aid for processing requests for unmanned aerial vehicle operations.
Parallel processing implementations of a contextual classifier for multispectral remote sensing data
NASA Technical Reports Server (NTRS)
Siegel, H. J.; Swain, P. H.; Smith, B. W.
1980-01-01
Contextual classifiers are being developed as a method to exploit the spatial/spectral context of a pixel to achieve accurate classification. Classification algorithms such as the contextual classifier typically require large amounts of computation time. One way to reduce the execution time of these tasks is through the use of parallelism. The applicability of the CDC flexible processor system and of a proposed multimicroprocessor system (PASM) for implementing contextual classifiers is examined.
Cannas, G; Fattoum, J; Boukhit, M; Thomas, X
2015-01-01
Blood transfusion requirement represents one of the most significant cost drivers associated with acute myeloid leukemia (AML). Low-intensity treatments (low-dose cytarabine, hypomethylating agents) have the potential to reduce transfusion dependence and improve health-related quality of life. We assessed the cost-effectiveness of treatment types with regard to blood product transfusions in a cohort of 214 AML patients aged ≥ 70 years. Analyses did not indicate any significant overall survival (OS) advantage of intensive chemotherapy compared with low-intensity treatment. The difference was significant when compared with best supportive care (BSC) (P<0.0001). Blood product transfusion cost per patient was 1.3 times lower with low-intensity therapy and 2.7 times lower with BSC than with intensive chemotherapy. Mean transfusion cost per patient according to OS varied from 2.4 to 1.3 times less with low-intensity treatment than with intensive chemotherapy for patients with OS ≤ 13.3 months, and from 3.5 to 2.6 times less with BSC than with intensive chemotherapy. In contrast, mean transfusion costs were comparable among treatments for patients with OS > 13.3 months. Low-intensity treatments represent a cost-effective alternative to BSC and require fewer transfused blood products than intensive chemotherapy, while OS was not significantly different. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
[The work of medical doctors on psychiatric wards: an analysis of everyday activities].
Putzhammer, A; Senft, I; Fleischmann, H; Klein, H E; Schmauss, M; Schreiber, W; Hajak, G
2006-03-01
In Germany, the economic situation of psychiatric hospitals has changed markedly in recent years. While the number of patients has steadily increased, many clinics have considerably reduced their therapeutic staff owing to an increasing lack of financial support. The German psychiatry personnel regulations act defines the number of therapeutic staff required for adequate psychiatric treatment, but its requirements are now widely unmet in most German psychiatric hospitals. This severely affects the therapeutic work on psychiatric wards. This study analyses the tasks and activities of medical doctors on psychiatric wards and compares the hours spent on various types of activities with the amount of time that should be spent according to the personnel regulations act. Results show that doctors spend much more time on documentation and administrative work than originally intended by the act. They compensate for this mainly by reducing the time spent in direct contact with patients. In this context, the number of psychotherapy sessions as well as sessions with patients' relatives has been considerably reduced, whereas the time spent on emergency intervention and basic treatment still corresponds to the calculations of the personnel regulations act. All in all, the results show that a reduction of therapeutic staff in psychiatric hospitals leads directly to a change in treatment settings, with a focus on less individual treatment options.
[Cost-benefit analysis of practical occupational medicine service].
Kentner, M
1996-02-01
Cost problems in business, industry and government service force everyone to probe the economy of traditional patterns of work and procedures. Occupational medicine is no exception. However, there has been a lack of criteria for assessing the economic aspects of occupational medicine, so we suggest an approach here. Caring for the "human capital" factor is a cornerstone of a free social market economy. Workers should not only be suitably qualified for their job; absenteeism must also be kept to the smallest possible minimum. Occupational medicine can positively influence the following factors: preventing incapacity to work; preventing job accidents and occupational diseases; reducing the time required to cover distances between or within workflow phases or stages; and reducing time wasted waiting. Model calculations, based on highly plausible basic postulates, show that fully integrated occupational medical services are thoroughly economical and cost-saving. Using a concrete example, we arrived at a cost/benefit ratio of 1:2 while confining ourselves to benefits attainable within a relatively short time. We ignored other, future benefits requiring certain preventive measures, as well as other parameters that are difficult to assess, such as corporate identity. At present occupational medicine faces a certain identity crisis, which should be counteracted not by pointing to legislation that justifies its existence, but by proving that it is indeed highly economical because it saves time and money.
A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions
NASA Astrophysics Data System (ADS)
Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.
2017-05-01
Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly-mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least-squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least-squares fixed kernel size.
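The kernel-particle idea itself is simple to sketch: when reconstructing a concentration field, replace Dirac-delta particles with Gaussian kernels. The 1D setup below and the hand-picked kernel width are illustrative assumptions, not the paper's analytically derived width.

```python
import numpy as np

rng = np.random.default_rng(2)
x_grid = np.linspace(0.0, 1.0, 200)

def field_from_particles(xp, h):
    """Superpose a unit-mass Gaussian kernel of width h at each particle."""
    z = (x_grid[:, None] - xp[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (np.sqrt(2 * np.pi) * h * len(xp))

particles = rng.normal(0.5, 0.1, size=50)        # deliberately few particles
c_kernel = field_from_particles(particles, h=0.03)  # h ~ 3% of the domain,
                                                    # below the ~12% cap noted above

dx = x_grid[1] - x_grid[0]
print(f"mass check (should be ~1): {c_kernel.sum() * dx:.3f}")
```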
Measured soil water evaporation as a function of the square root of time and reference ET
USDA-ARS?s Scientific Manuscript database
Sunflower (Helianthus annuus L.) is a drought-adapted crop with a short growing season that reduces irrigation requirements and makes it ideal for regions with limited irrigation water supplies. Our objectives were a) to evaluate the yield potential of sunflower under deficit irrigation and b) det...
Maintaining cultures of wood-rotting fungi.
E.E. Nelson; H.A. Fay
1985-01-01
Phellinus weirii cultures were stored successfully for 10 years in small alder (Alnus rubra Bong.) disks at 2 °C. The six isolates tested appeared morphologically identical and after 10 years varied little in growth rate from those stored on malt agar slants. Long-term storage on alder disks reduces the time required for...
NASA Technical Reports Server (NTRS)
1971-01-01
Developed methodologies and procedures for the reduction of microbial burden on an assembled spacecraft at the time of encapsulation or terminal sterilization are reported. This technology is required for reducing excessive microbial burden on spacecraft components for the purposes of either decreasing planetary contamination probabilities for an orbiter or minimizing the duration of a sterilization process for a lander.
An Authoring System for Creating Computer-Based Role-Performance Trainers.
ERIC Educational Resources Information Center
Guralnick, David; Kass, Alex
This paper describes a multimedia authoring system called MOPed-II. Like other authoring systems, MOPed-II reduces the time and expense of producing end-user applications by eliminating much of the programming effort they require. However, MOPed-II reflects an approach to authoring tools for educational multimedia which is different from most…
Transforming Traditional Lectures into Problem-Based Blended Learning: Challenges and Experiences
ERIC Educational Resources Information Center
Dalsgaard, Christian; Godsk, Mikkel
2007-01-01
This paper presents our experiences and the challenges identified in transforming traditional lecture-based modules at a university into problem-based blended learning within a social constructivist approach. Our experiment was, among other factors, motivated by an urgent need to meet new curriculum requirements by reducing the lecturing time in a…
Community College Pathways: A Descriptive Report of Summative Assessments and Student Learning
ERIC Educational Resources Information Center
Strother, Scott; Sowers, Nicole
2014-01-01
Carnegie's Community College Pathways (CCP) offers two pathways, Statway® and Quantway®, that reduce the amount of time required to complete developmental mathematics and earn college-level mathematics credit. The Pathways aim to improve student success in mathematics while maintaining rigorous content, pedagogy, and learning outcomes. It is…
Leslie C. Parks; David O. Wallin; Samuel A. Cushman; Brad H. McRae
2015-01-01
Habitat fragmentation and habitat loss diminish population connectivity, reducing genetic diversity and increasing extinction risk over time. Improving connectivity is widely recommended to preserve the long-term viability of populations, but this requires accurate knowledge of how landscapes influence connectivity. Detectability of landscape effects on gene...
Design for Review - Applying Lessons Learned to Improve the FPGA Review Process
NASA Technical Reports Server (NTRS)
Figueiredo, Marco A.; Li, Kenneth E.
2014-01-01
Flight Field Programmable Gate Array (FPGA) designs are required to be independently reviewed. This paper provides recommendations to Flight FPGA designers to properly prepare their designs for review in order to facilitate the review process, and reduce the impact of the review time in the overall project schedule.
Inferring landscape effects on gene flow: A new model selection framework
A. J. Shirk; D. O. Wallin; S. A. Cushman; C. G. Rice; K. I. Warheit
2010-01-01
Populations in fragmented landscapes experience reduced gene flow, lose genetic diversity over time and ultimately face greater extinction risk. Improving connectivity in fragmented landscapes is now a major focus of conservation biology. Designing effective wildlife corridors for this purpose, however, requires an accurate understanding of how landscapes shape gene...
Point Cloud-Based Automatic Assessment of 3D Computer Animation Courseworks
ERIC Educational Resources Information Center
Paravati, Gianluca; Lamberti, Fabrizio; Gatteschi, Valentina; Demartini, Claudio; Montuschi, Paolo
2017-01-01
Computer-supported assessment tools can bring significant benefits to both students and teachers. When integrated in traditional education workflows, they may help to reduce the time required to perform the evaluation and consolidate the perception of fairness of the overall process. When integrated within on-line intelligent tutoring systems,…
Workplace Literacy for World Class Manufacturing. Final Report.
ERIC Educational Resources Information Center
Dowling, William D.; And Others
The Ohio State University, Inland Fisher Guide Division of General Motors, and United Auto Workers Local 969 formed a collaborative partnership in 1990 to train employees whose inadequate literacy skills made them unable to respond to the requirements of "synchronous manufacturing" (or "just in time" production). One of the goals is to reduce the…
77 FR 809 - Notice of Lodging of a Consent Decree Under the Clean Water Act
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-06
... terms and conditions of National Pollution Discharge Elimination System permits that Indiana issued to...,'' during wet weather events, and some dry weather time periods, into ``waters of the United States'' and ``waters of the state.'' The proposed Consent Decree would require South Bend to reduce its combined sewer...
76 FR 56223 - Notice of Lodging of a Consent Decree Under the Clean Water Act
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... terms and conditions of National Pollution Discharge Elimination System permits that Indiana issued to... wet weather events, and some dry weather time periods, into ``waters of the United States'' and ``waters of the state.'' The proposed Consent Decree would require Elkhart to reduce its combined sewer...
Traveling wire electrode increases productivity of Electrical Discharge Machining /EDM/ equipment
NASA Technical Reports Server (NTRS)
Kotora, J., Jr.; Smith, S. V.
1967-01-01
Traveling wire electrode on electrical discharge machining /EDM/ equipment reduces the time requirements for precision cutting. This device enables cutting with a minimum of lost material and without inducing stress beyond that inherent in the material. The use of wire increases accuracy and enables tighter tolerances to be maintained.
47 CFR 15.709 - General technical requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... to the transmit antenna. If transmitting antennas of directional gain greater than 6 dBi are used, the maximum conducted output power shall be reduced by the amount in dB that the directional gain of... 100 kHz band during any time interval of continuous transmission: (i) Fixed devices: 12.2 dBm. (ii...
Investing in the Future: Addressing Work/Life Issues of Employees.
ERIC Educational Resources Information Center
Kutilek, Linda M.; Conklin, Nikki L.; Gunderson, Gail
2002-01-01
A national survey of Extension employees identified the most critical work/life challenges as a heavy workload, evening and weekend commitments, and lack of control or job autonomy. Only 40% were aware of benefits and programs offered concerning work/life balance. Recommendations included reducing the workload and time requirements of county-based…
Dynamics of BMP signaling in limb bud mesenchyme and polydactyly.
Norrie, Jacqueline L; Lewandowski, Jordan P; Bouldin, Cortney M; Amarnath, Smita; Li, Qiang; Vokes, Martha S; Ehrlich, Lauren I R; Harfe, Brian D; Vokes, Steven A
2014-09-15
Mutations in the Bone Morphogenetic Protein (BMP) pathway are associated with a range of defects in skeletal formation. Genetic analysis of BMP signaling requirements is complicated by the presence of three partially redundant BMPs that are required for multiple stages of limb development. We generated an inducible allele of a BMP inhibitor, Gremlin, which reduces BMP signaling. We show that BMPs act in a dose- and time-dependent manner in which early reduction of BMPs results in digit loss, while inhibiting overall BMP signaling between E10.5 and E11.5 allows polydactylous digit formation. During this period, inhibiting BMPs extends the duration of FGF signaling. Sox9 is initially expressed in normal digit ray domains but at reduced levels that correlate with the reduction in BMP signaling. The persistence of elevated FGF signaling likely promotes cell proliferation and survival, inhibiting the activation of Sox9 and, secondarily, inhibiting the differentiation of Sox9-expressing chondrocytes. Our results provide new insights into the timing and clarify the mechanisms underlying BMP signaling during digit morphogenesis. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Wei; Ding, Hongye; Sui, Zhenghong; Wang, Zhongxia; Wang, Jinguo
2014-05-01
The red alga Gracilariopsis lemaneiformis (Bory) is an economically valuable macroalga. As a means to identify the sex of immature Gracilariopsis lemaneiformis, the amplified fragment length polymorphism (AFLP) technique was used to search for possible sex- or phase-related markers in male gametophytes, female gametophytes, and tetrasporophytes. Seven AFLP selective amplification primers were used in this study. The primer combination E-TG/M-CCA detected a specific band linked to male gametophytes. The DNA fragment was recovered and a 402-bp fragment was sequenced. However, no DNA sequence match was found in public databases. Sequence characterized amplified region (SCAR) primers were designed from the sequence to test the repeatability of the marker's linkage to sex, using 69 male gametophytes, 139 female gametophytes, and 47 tetrasporophytes. The test results demonstrate good linkage and repeatability of the SCAR marker to sex. The SCAR primers developed in this study could reduce the time required for sex identification of Gracilariopsis lemaneiformis by four to six months. This can reduce both the time investment and the number of specimens required in breeding experiments.
Clinical image processing engine
NASA Astrophysics Data System (ADS)
Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald
2009-02-01
Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we designed a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: it reduces the amount of time and manpower required to process medical images, and it reduces response turnaround time. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved efficiency dramatically.
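A minimal sketch of the job-manager pattern the engine describes: a thread-safe queue of (application, study) jobs consumed by worker threads. All names and the processing stub are illustrative, not the CIPE implementation.

```python
import queue
import threading

jobs = queue.Queue()                  # (application, study) pairs awaiting work
results, lock = [], threading.Lock()

def worker():
    while True:
        app, study = jobs.get()
        if app is None:                       # sentinel: shut this worker down
            jobs.task_done()
            return
        outcome = f"{app} processed {study}"  # stand-in for real image processing
        with lock:
            results.append(outcome)
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for study in ("CT_001", "MR_002", "CT_003"):   # studies routed from a "PACS"
    for app in ("segmenter", "registration"):  # hypothetical applications
        jobs.put((app, study))
for _ in threads:
    jobs.put((None, None))

jobs.join()
for line in results:
    print(line)
```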
Honeybee Colony Vibrational Measurements to Highlight the Brood Cycle
Bencsik, Martin; Le Conte, Yves; Reyes, Maritza; Pioz, Maryline; Whittaker, David; Crauser, Didier; Simon Delso, Noa; Newton, Michael I.
2015-01-01
Insect pollination is of great importance to crop production worldwide and honey bees are amongst its chief facilitators. Because of the decline of managed colonies, the use of sensor technology is growing in popularity and it is of interest to develop new methods which can more accurately and less invasively assess honey bee colony status. Our approach is to use accelerometers to measure vibrations in order to provide information on colony activity and development. The accelerometers provide amplitude and frequency information which is recorded every three minutes and analysed for night time only. Vibrational data were validated by comparison to visual inspection data, particularly the brood development. We show a strong correlation between vibrational amplitude data and the brood cycle in the vicinity of the sensor. We have further explored the minimum data that is required, when frequency information is also included, to accurately predict the current point in the brood cycle. Such a technique should enable beekeepers to reduce the frequency with which visual inspections are required, reducing the stress this places on the colony and saving the beekeeper time. PMID:26580393
Single incision laparoscopic surgery for appendicectomy: a retrospective comparative analysis.
Chow, Andre; Purkayastha, Sanjay; Nehme, Jean; Darzi, Lord Ara; Paraskeva, Paraskevas
2010-10-01
Single incision laparoscopic surgery (SILS) may further reduce the trauma of surgery leading to reduced port site complications and postoperative pain. The improved cosmetic result also may lead to improved patient satisfaction with surgery. Data were prospectively collected and retrospectively analyzed for all patients who underwent SILS appendicectomy at our institution and were compared with those who had undergone conventional laparoscopic appendicectomy during the same time period. This included patient demographic data, intraoperative, and postoperative outcomes. Thirty-three patients underwent conventional laparoscopic appendicectomy and 40 patients underwent SILS appendicectomy between January 26, 2008 and July 14, 2009. Operative time was shorter with SILS appendicectomy compared with conventional laparoscopic appendicectomy (p < 0.05). No patients in the SILS appendicectomy group required conversion to open surgery compared with two patients in the conventional laparoscopic appendicectomy group. Patients stayed an average of 1.36 days after SILS appendicectomy, and 2.36 days after conventional laparoscopic appendicectomy. SILS appendicectomy seems to be a safe and efficacious technique. Further work in the form of randomized studies is required to investigate any significant advantages of this new and attractive technique.
Local sharpening and subspace wavefront correction with predictive dynamic digital holography
NASA Astrophysics Data System (ADS)
Sulaiman, Sennan; Gibson, Steve
2017-09-01
Digital holography holds several advantages over conventional imaging and wavefront sensing, chief among these being significantly fewer and simpler optical components and the retrieval of the complex field. Consequently, many imaging and sensing applications, including microscopy and optical tweezing, have turned to digital holography. A significant obstacle for digital holography in real-time applications, such as wavefront sensing for high energy laser systems and high speed imaging for target tracking, is the fact that digital holography is computationally intensive; it requires iterative virtual wavefront propagation and hill-climbing to optimize some sharpness criterion. It has been shown recently that minimum-variance wavefront prediction can be integrated with digital holography and image sharpening to significantly reduce the large number of costly sharpening iterations required to achieve near-optimal wavefront correction. This paper demonstrates further gains in computational efficiency with localized sharpening in conjunction with predictive dynamic digital holography for real-time applications. The method optimizes sharpness of local regions in a detector plane by parallel independent wavefront correction on reduced-dimension subspaces of the complex field in a spectral plane.
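The costly sharpening loop that predictive dynamic digital holography aims to shorten can be caricatured as follows: a naive hill-climb over a few phase-aberration coefficients, re-evaluating an intensity-squared sharpness metric at every step. The field model, aberration basis, and metric below are simplified assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
yy, xx = np.mgrid[:n, :n] / n - 0.5

basis = np.stack([xx, yy, xx * yy, xx**2 - yy**2])   # crude aberration modes
true_coef = rng.normal(0, 2.0, 4)
aberration = np.tensordot(true_coef, basis, axes=1)

obj = np.zeros((n, n)); obj[28:36, 28:36] = 1.0      # simple bright object
field = obj * np.exp(1j * 2 * np.pi * aberration)    # aberrated complex field

def sharpness(coef):
    """Intensity-squared sharpness of the image after trial phase correction."""
    phase = np.tensordot(coef, basis, axes=1)
    img = np.abs(np.fft.fft2(field * np.exp(-1j * 2 * np.pi * phase))) ** 2
    img /= img.sum()
    return (img**2).sum()

coef = np.zeros(4)
for _ in range(200):          # the many costly iterations the paper seeks to cut
    i = rng.integers(4)
    trial = coef.copy()
    trial[i] += rng.normal(0, 0.1)
    if sharpness(trial) > sharpness(coef):
        coef = trial

print("recovered:", np.round(coef, 2), "true:", np.round(true_coef, 2))
```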
A Reverse Localization Scheme for Underwater Acoustic Sensor Networks
Moradi, Marjan; Rezazadeh, Javad; Ismail, Abdul Samad
2012-01-01
Underwater Wireless Sensor Networks (UWSNs) provide new opportunities to observe and predict the behavior of aquatic environments. In some applications like target tracking or disaster prevention, sensed data is meaningless without location information. In this paper, we propose a novel 3D centralized, localization scheme for mobile underwater wireless sensor network, named Reverse Localization Scheme or RLS in short. RLS is an event-driven localization method triggered by detector sensors for launching localization process. RLS is suitable for surveillance applications that require very fast reactions to events and could report the location of the occurrence. In this method, mobile sensor nodes report the event toward the surface anchors as soon as they detect it. They do not require waiting to receive location information from anchors. Simulation results confirm that the proposed scheme improves the energy efficiency and reduces significantly localization response time with a proper level of accuracy in terms of mobility model of water currents. Major contributions of this method lie on reducing the numbers of message exchange for localization, saving the energy and decreasing the average localization response time. PMID:22666034
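Once detector reports reach the surface anchors, a central station can compute a position fix; a standard least-squares trilateration sketch is shown below, with the node's depth assumed known from a pressure sensor (a common UWSN assumption). The geometry and noise model are illustrative, not the RLS protocol itself.

```python
import numpy as np

rng = np.random.default_rng(4)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
node_xy, depth = np.array([40.0, 60.0]), 80.0      # depth from pressure sensor

true_ranges = np.sqrt(((anchors - node_xy) ** 2).sum(axis=1) + depth**2)
ranges = true_ranges + rng.normal(0, 0.5, 4)       # noisy acoustic ranges
horiz = np.sqrt(ranges**2 - depth**2)              # reduce to a 2D problem

# Linearize by subtracting the first range equation from the others.
a0, h0 = anchors[0], horiz[0]
A = 2 * (anchors[1:] - a0)
b = h0**2 - horiz[1:] ** 2 + (anchors[1:] ** 2).sum(axis=1) - (a0**2).sum()
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated (x, y):", np.round(est, 1), "true:", node_xy)
```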
NASA Aircraft Vortex Spacing System Development Status
NASA Technical Reports Server (NTRS)
Hinton, David A.; Charnock, James K.; Bagwell, Donald R.; Grigsby, Donner
1999-01-01
The National Aeronautics and Space Administration (NASA) is addressing airport capacity enhancements during instrument meteorological conditions through the Terminal Area Productivity (TAP) program. Within TAP, the Reduced Spacing Operations (RSO) subelement at the NASA Langley Research Center is developing an Aircraft VOrtex Spacing System (AVOSS). AVOSS will integrate the output of several systems to produce weather dependent, dynamic wake vortex spacing criteria. These systems provide current and predicted weather conditions, models of wake vortex transport and decay in these weather conditions, and real-time feedback of wake vortex behavior from sensors. The goal of the NASA program is to provide the research and development to demonstrate an engineering model AVOSS in real-time operation at a major airport. The demonstration is only of concept feasibility, and additional effort is required to deploy an operational system for actual aircraft spacing reduction. This paper describes the AVOSS system architecture, a wake vortex facility established at the Dallas-Fort Worth International Airport (DFW), initial operational experience with the AVOSS system, and emerging considerations for subsystem requirements. Results of the initial system operation suggest a significant potential for reduced spacing.
The MiniCell™ irradiator: A new system for a new market
NASA Astrophysics Data System (ADS)
Clouser, James F.; Beers, Eric W.
1998-06-01
Since the commissioning of the first industrial gamma irradiator design, designers and operators of irradiation systems have been attempting to meet the specific production requirements and challenges presented to them. This objective has resulted in many different versions of irradiators currently in service today, all of which had original charters and many of which still perform very well even within the new requirements of this industry. Continuing changes in the marketplace have, however, placed pressures on existing designs due to a combination of changing dose requirements for sterilization, increased economic pressures from the specific industries served for both time and location, and the increasing variety of product types requiring processing. Additionally, certain market areas which could never economically support a typical gamma processing facility have either not been serviced, or have forced potential gamma users to transport product long distances to one of these existing facilities. The MiniCell™ removes many of the traditional barriers previously accepted in the radiation processing industry for building a processing facility in a given location. Its reduced size and cost have allowed many potential users to consider in-house processing, and its ability to be quickly assembled allows it to meet market needs in a much more timely fashion than previous designs. The MiniCell system can cost-effectively meet many of the current market needs of reducing the total cost of processing and is also flexible enough to process product in a wide range of industries effectively.
Comparison of algorithms to generate event times conditional on time-dependent covariates.
Sylvestre, Marie-Pierre; Abrahamowicz, Michal
2008-06-30
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
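A minimal sketch of the rejection ("thinning") idea for generating event times conditional on a TDC: sample candidate times from a dominating constant-hazard process and accept each with probability h(t)/h_max. The hazard form and parameter values below are illustrative, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(5)

h0, beta, switch = 0.05, 0.7, 4.0          # baseline hazard, effect, TDC switch time
Z = lambda t: float(t >= switch)           # binary covariate turns on at t = 4
hazard = lambda t: h0 * np.exp(beta * Z(t))
h_max = h0 * np.exp(beta)                  # dominating constant hazard

def sample_event_time():
    t = 0.0
    while True:
        t += rng.exponential(1.0 / h_max)  # candidate gap from dominating process
        if rng.uniform() < hazard(t) / h_max:
            return t                       # accept with probability h(t)/h_max

times = np.array([sample_event_time() for _ in range(10_000)])
print(f"mean event time: {times.mean():.2f}")
```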
Recent Cycle Time Reduction at Langley Research Center
NASA Technical Reports Server (NTRS)
Kegelman, Jerome T.
2000-01-01
The NASA Langley Research Center (LaRC) has been engaged in an effort to reduce wind tunnel test cycle time in support of Agency goals and to satisfy the wind tunnel testing needs of the commercial and military aerospace communities. LaRC has established the Wind Tunnel Enterprise (WTE), with goals of reducing wind tunnel test cycle time by an order of magnitude by 2002, and by two orders of magnitude by 2010. The WTE also plans to meet customer expectations for schedule integrity, as well as data accuracy and quality assurance. The WTE has made progress towards these goals over the last year with a focused effort on technological developments balanced by attention to process improvements. This paper presents a summary of several of the WTE activities over the last year that are related to test cycle time reductions at the Center. Reducing wind tunnel test cycle time, defined here as the time between the freezing of loft lines and delivery of test data, requires that the relationship between high productivity and data quality assurance be considered. The efforts have focused on all of the drivers for test cycle time reduction, including process centered improvements, facility upgrades, technological improvements to enhance facility readiness and productivity, as well as advanced measurement techniques. The application of internet tools and computer modeling of facilities to allow a virtual presence of the customer team is also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.
In this study, methods are addressed to reduce the computational time needed to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10⁵ when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.
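As a toy illustration of the variance-reduction theme (not MCNP, and not the reciprocity method itself), the following importance-sampling example estimates a rare-event probability by sampling from a shifted density and reweighting by the density ratio.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
threshold = 4.0                      # rare region for a standard normal

# Naive Monte Carlo: almost no samples land in the rare region.
x = rng.standard_normal(n)
naive = (x > threshold).mean()

# Importance sampling: draw from a normal shifted into the rare region,
# then weight each sample by the density ratio p(y)/q(y).
y = rng.normal(threshold, 1.0, n)
w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - threshold) ** 2)
is_est = ((y > threshold) * w).mean()

print(f"naive: {naive:.2e}, importance sampling: {is_est:.2e}")
# True tail probability P(X > 4) is about 3.17e-5.
```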
Tatoulis, Triantafyllos; Akratos, Christos S; Tekerlekopoulou, Athanasia G; Vayenas, Dimitrios V; Stefanakis, Alexandros I
2017-11-01
The use of Constructed Wetlands (CWs) has nowadays expanded from municipal to industrial and agro-industrial wastewaters. The main limitations of CWs remain the relatively high area requirements compared to mechanical treatment technologies and the potential occurrence of the clogging phenomenon. This study presents the findings of an innovative CW design in which novel materials were used. Four pilot-scale CW units were designed, built and operated for two years. Each unit consisted of two compartments, the first of which (two thirds of the total unit length) contained either fine gravel (in two units) or random-type high density polyethylene (HDPE) media (in the other two units). This plastic media type was tested in a CW system for the first time. The second compartment of all four units contained natural zeolite. Two units (one with fine gravel and one with HDPE) were planted with common reeds, while the other two were kept unplanted. Second cheese whey was introduced into the units, which were operated under hydraulic residence times (HRT) of 2 and 4 days. After a two-year operation and monitoring period, pollutant removal rates were approximately 80%, 75% and 90% for COD, ammonium and ortho-phosphate, respectively, while temperature and HRT had no significant effect on pollutant removal. CWs containing the plastic media achieved the same removal rates as those containing gravel, despite receiving three times higher hydraulic surface loads (0.08 m/d) and four times higher organic surface loads (620 g/m²/d). This reveals that the use of HDPE plastic media could reduce CW surface area requirements by 75%. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wang, Shuai; Hu, Shan-Hu; Shi, Yi; Li, Bao-Ming
2017-03-01
It has been shown that the anterior cingulate cortex (ACC) and its dopamine system are crucial for decision making that requires physical/emotional effort, but not for all forms of cost-benefit decision making. Previous studies had mostly employed behavioral tasks with two competing cost-reward options that were preset by the experimenters. However, few studies have been conducted using scenarios in which the subjects have full control over the energy/time expenditure required to obtain a proportional reward. Here, we assessed the roles of the ACC and its dopamine system in cost-benefit decision making by utilizing a "do more get more" (DMGM) task and a time-reward trade-off (TRTO) task, wherein the animals were able to self-determine how much effort or time to expend at a nosepoke operandum for a proportional reward. Our results showed that (1) ACC inactivation severely impaired DMGM task performance, with a reduction in the rate of correct responses and a decrease in the effort expended, but did not affect the TRTO task; and (2) blocking ACC D2 receptors had no impact on DMGM task performance in the baseline cost-benefit scenario, but it significantly reduced the attempts to invest increased effort for a large reward when the benefit-cost ratio was reduced by half. In contrast, blocking ACC D1 receptors had no effect on DMGM task performance. These findings suggest that the ACC is required for self-paced effort-based but not for time-reward trade-off decision making. Furthermore, ACC dopamine D2 but not D1 receptors are involved in DMGM decision making.
C-phycocyanin extraction assisted by pulsed electric field from Arthrospira platensis.
Martínez, Juan Manuel; Luengo, Elisa; Saldaña, Guillermo; Álvarez, Ignacio; Raso, Javier
2017-09-01
This paper assesses the application of pulsed electric fields (PEF) to the fresh biomass of Arthrospira platensis in order to enhance the extraction of C-phycocyanin into aqueous media. Electroporation of A. platensis depended on both electric field strength and treatment duration. The minimum electric field intensity for detecting C-phycocyanin in the extraction medium was 15 kV/cm after the application of a treatment time of 150 μs (50 pulses of 3 μs); however, higher electric field strengths were required when shorter treatment times were applied. Response surface methodology was used to investigate the influence of electric field strength (15-25 kV/cm), treatment time (60-150 μs), and temperature of PEF application (10-40°C) on the C-phycocyanin extraction yield (PEY). Increasing the temperature of the PEF treatment reduced the electric field strength and treatment time required to obtain a given PEY and, consequently, decreased the total specific energy delivered by the treatment. For example, raising the temperature from 10°C to 40°C permitted reducing the electric field strength required to extract 100 mg/g dw of C-phycocyanin from 25 to 18 kV/cm, and the specific energy input from 106.7 to 67.5 kJ/kg. The results obtained in this investigation demonstrate PEF's potential for selective extraction of C-phycocyanin from fresh A. platensis biomass. The purity of the C-phycocyanin extract obtained from the electroporated cells was higher than that obtained using other techniques based on complete cell destruction. Copyright © 2016 Elsevier Ltd. All rights reserved.
A workstation-based evaluation of a far-field route planner for helicopters
NASA Technical Reports Server (NTRS)
Warner, David N., Jr.; Moran, Francis J.
1991-01-01
Helicopter flight missions at very low, nap-of-the-earth altitudes place a heavy workload on the pilot. To aid in reducing this workload, Ames Research Center has been investigating various types of automated route planners. As part of an automated preflight mission planner, a route planner algorithm aids in selecting the overall (far-field) route to be flown. During the mission, the route planner can be used to replan a new route in case of unexpected threats or a change in mission requirements. An evaluation of a candidate route-planning algorithm, based on dynamic programming techniques, is described. This algorithm meets most of the requirements for route planning, both preflight and during the mission. In general, the requirements are to minimize distance and/or fuel use and the deviation from a flight-time schedule, while keeping the route flyable within the constraints of available fuel and time.
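At its core, a far-field route planner of this kind reduces to dynamic programming over a cost field. The sketch below runs a Dijkstra-style cost-to-come computation on a grid, with a hypothetical threat penalty added to distance; the grid and cost model are assumptions for illustration.

```python
import heapq

import numpy as np

rng = np.random.default_rng(7)
n = 40
threat = rng.uniform(0, 1, (n, n)) ** 4 * 10   # sparse high-threat cells (toy data)

def plan(start, goal):
    """Dijkstra-style DP: minimum cost-to-come over 4-connected grid moves."""
    dist = np.full((n, n), np.inf)
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n:
                nd = d + 1.0 + threat[rr, cc]     # unit distance + threat penalty
                if nd < dist[rr, cc]:
                    dist[rr, cc] = nd
                    heapq.heappush(pq, (nd, (rr, cc)))
    return np.inf

print(f"minimum route cost: {plan((0, 0), (n - 1, n - 1)):.1f}")
```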
Achieving reutilization of scheduling software through abstraction and generalization
NASA Technical Reports Server (NTRS)
Wilkinson, George J.; Monteleone, Richard A.; Weinstein, Stuart M.; Mohler, Michael G.; Zoch, David R.; Tong, G. Michael
1995-01-01
Reutilization of software is a difficult goal to achieve, particularly in complex environments that require advanced software systems. The Request-Oriented Scheduling Engine (ROSE) was developed to create a reusable scheduling system for the diverse scheduling needs of the National Aeronautics and Space Administration (NASA). ROSE is a data-driven scheduler that accepts inputs such as user activities, available resources, timing constraints, and user-defined events, and then produces a conflict-free schedule. To support reutilization, ROSE is designed to be flexible, extensible, and portable. With these design features, applying ROSE to a new scheduling application does not require changing the core scheduling engine, even if the new application requires significantly larger or smaller data sets, customized scheduling algorithms, or software portability. This paper includes a ROSE scheduling system description emphasizing its general-purpose features, reutilization techniques, and tasks for which ROSE reuse provided a low-risk solution with significant cost savings and reduced software development time.
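The data-driven, conflict-free scheduling idea can be sketched with a greedy placement rule: take each requested activity and slide it to the earliest slot where its required resource is free. The data layout and names below are illustrative assumptions, not the ROSE interfaces.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    resource: str
    duration: int      # time units
    earliest: int      # release-time constraint

requests = [
    Activity("downlink_A", "antenna1", 3, 0),
    Activity("downlink_B", "antenna1", 2, 1),
    Activity("calibrate", "antenna2", 4, 0),
]

busy = {}              # resource -> list of (start, end) bookings
schedule = []
for act in sorted(requests, key=lambda a: a.earliest):
    t = act.earliest
    for s, e in sorted(busy.get(act.resource, [])):   # slide past conflicts
        if t + act.duration <= s:
            break
        t = max(t, e)
    busy.setdefault(act.resource, []).append((t, t + act.duration))
    schedule.append((act.name, act.resource, t, t + act.duration))

for row in schedule:   # each resource ends up conflict-free by construction
    print(row)
```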