Sample records for log-structured file systems

  1. Storage of sparse files using parallel log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    A sparse file is stored without holes by storing a data portion of the sparse file using a parallel log-structured file system; and generating an index entry for the data portion, the index entry comprising a logical offset, physical offset and length of the data portion. The holes can be restored to the sparse file upon a reading of the sparse file. The data portion can be stored at a logical end of the sparse file. Additional storage efficiency can optionally be achieved by (i) detecting a write pattern for a plurality of the data portions and generating a single patterned index entry for the plurality of the patterned data portions; and/or (ii) storing the patterned index entries for a plurality of the sparse files in a single directory, wherein each entry in the single directory comprises an identifier of a corresponding sparse file.
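    The index-entry scheme described in this abstract can be sketched in a few lines: data portions are packed end-to-end in a log, each with an entry recording its logical offset, physical offset, and length, so holes can be restored as zeros on read. A minimal illustration (the names `IndexEntry`, `write_sparse`, and `read_sparse` are hypothetical, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class IndexEntry:
    logical_offset: int   # position of the data portion within the sparse file
    physical_offset: int  # position of the data portion within the packed log
    length: int

def write_sparse(portions):
    """Pack data portions end-to-end into a log; the index maps them back."""
    log = bytearray()
    index = []
    for logical_offset, data in portions:
        index.append(IndexEntry(logical_offset, len(log), len(data)))
        log.extend(data)
    return bytes(log), index

def read_sparse(log, index, size):
    """Restore the holes (as zeros) when the sparse file is read back."""
    out = bytearray(size)  # zero-filled: the holes
    for e in index:
        out[e.logical_offset:e.logical_offset + e.length] = \
            log[e.physical_offset:e.physical_offset + e.length]
    return bytes(out)
```

    Note that the log stores only the data actually written; the holes exist only in the index arithmetic, which is the storage saving the patent describes.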

  2. Parallel checksumming of data chunks of a shared data object using a log-structured file system

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-09-06

    Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
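    The client-side checksumming flow above can be illustrated with a toy in-memory "storage node"; CRC-32 stands in for whatever checksum the patent envisions, and the function names are hypothetical:

```python
import zlib

def store_chunk(store, offset, chunk):
    """Client side: compute a checksum and provide it to the storage node
    together with the data chunk (CRC-32 used here for illustration)."""
    store[offset] = (chunk, zlib.crc32(chunk))

def read_chunk(store, offset):
    """Verify integrity when the chunk is read back from the shared object."""
    chunk, checksum = store[offset]
    if zlib.crc32(chunk) != checksum:
        raise IOError(f"checksum mismatch for chunk at offset {offset}")
    return chunk
```

    Because each chunk carries its own checksum, clients writing different chunks of the shared object in parallel never need to coordinate their checksum computations.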

  3. Comparing Web and Touch Screen Transaction Log Files

    PubMed Central

    Huntington, Paul; Williams, Peter

    2001-01-01

    Background Digital health information is available on a wide variety of platforms including PC-access of the Internet, Wireless Application Protocol phones, CD-ROMs, and touch screen public kiosks. All these platforms record details of user sessions in transaction log files, and there is a growing body of research into the evaluation of this data. However, there is very little research that has examined the problems of comparing the transaction log files of kiosks and the Internet. Objectives To provide a first step towards examining the problems of comparing the transaction log files of kiosks and the Internet. Methods We studied two platforms: touch screen kiosks and a comparable Web site. For both of these platforms, we examined the menu structure (which affects transaction log file data), the log-file structure, and the metrics derived from log-file records. Results We found substantial differences between the generated metrics. Conclusions None of the metrics discussed can be regarded as an effective way of comparing the use of kiosks and Web sites. Two metrics stand out as potentially comparable and valuable: the number of user sessions per hour and user penetration of pages. PMID:11720960

  4. Log-less metadata management on metadata server for parallel file systems.

    PubMed

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that the metadata server has already handled, so that the MDS does not need to log metadata changes to nonvolatile storage to provide a highly available metadata service, which also improves metadata-processing performance. Because the client file system backs up these sent metadata requests in its own memory, the overhead of keeping the backups is much smaller than the overhead the metadata server would incur by logging or journaling to provide the same availability. The experimental results show that the proposed mechanism significantly speeds up metadata processing and yields better I/O data throughput than conventional metadata management schemes, i.e., logging or journaling on the MDS. Moreover, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or unexpectedly enters a nonoperational state.
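    The backup-and-replay idea can be sketched as a toy single-MDS model: clients keep the requests they have sent, the MDS applies them without any disk log, and recovery replays the client-side backups. This is a simplification (a real system needs global request ordering and acknowledgements across clients), and all class and method names are invented for illustration:

```python
class MDS:
    """Metadata server that applies requests in memory, with no disk log."""
    def __init__(self):
        self.metadata = {}

    def apply(self, request):
        op, path, value = request
        if op == "set":
            self.metadata[path] = value

    def recover(self, clients):
        """After a crash, rebuild state by replaying client-side backups."""
        self.metadata = {}
        for client in clients:
            for request in client.backup:
                self.apply(request)

class Client:
    """Client file system that backs up every metadata request it sends."""
    def __init__(self, mds):
        self.mds = mds
        self.backup = []  # sent requests, kept in client memory

    def send(self, request):
        self.backup.append(request)
        self.mds.apply(request)
```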

  5. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    PubMed Central

    Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that the metadata server has already handled, so that the MDS does not need to log metadata changes to nonvolatile storage to provide a highly available metadata service, which also improves metadata-processing performance. Because the client file system backs up these sent metadata requests in its own memory, the overhead of keeping the backups is much smaller than the overhead the metadata server would incur by logging or journaling to provide the same availability. The experimental results show that the proposed mechanism significantly speeds up metadata processing and yields better I/O data throughput than conventional metadata management schemes, i.e., logging or journaling on the MDS. Moreover, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or unexpectedly enters a nonoperational state. PMID:24892093

  6. SU-E-T-142: Automatic Linac Log File: Analysis and Reporting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gainey, M; Rothe, T

    Purpose: End-to-end QA for IMRT/VMAT is time consuming. Automated linac log file analysis, with recalculation of the daily recorded fluence and hence dose distribution, brings this closer. Methods: Matlab (R2014b, Mathworks) software was written to read in and analyse IMRT/VMAT trajectory log files (TrueBeam 1.5, Varian Medical Systems) overnight; the files are archived on a backed-up network drive (figure). A summary report (PDF) is sent by email to the duty linac physicist. A structured summary report (PDF) for each patient is automatically updated for embedding into the R&V system (Mosaiq 2.5, Elekta AG). The report contains cross-referenced hyperlinks to ease navigation between treatment fractions. Gamma analysis can be performed on planned (DICOM RTPlan) and treated (trajectory log) fluence distributions. Trajectory log files can be converted into RTPlan files for dose distribution calculation (Eclipse, AAA10.0.28, VMS). Results: All leaf positions are within ±0.10mm: 57% within ±0.01mm; 89% within ±0.05mm. The mean leaf position deviation is 0.02mm. Gantry angle variations lie in the range −0.1 to 0.3 degrees, with a mean of 0.04 degrees. Fluence verification shows excellent agreement between planned and treated fluence. Agreement between planned and treated dose distributions, the latter derived from log files, is very good. Conclusion: Automated log file analysis is a valuable tool for the busy physicist, enabling potential treated fluence distribution errors to be quickly identified. In the near future we will correlate trajectory log analysis with routine IMRT/VMAT QA analysis. This has the potential to reduce, but not eliminate, the QA workload.

  7. Improved grading system for structural logs for log homes

    Treesearch

    D.W. Green; T.M. Gorman; J.W. Evans; J.F. Murphy

    2004-01-01

    Current grading standards for logs used in log home construction use visual criteria to sort logs into either “wall logs” or structural logs (round and sawn round timbers). The conservative nature of this grading system, and the grouping of stronger and weaker species for marketing purposes, probably results in the specification of logs with larger diameter than would...

  8. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be de-compressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.

  9. Catching errors with patient-specific pretreatment machine log file analysis.

    PubMed

    Rangaraj, Dharanipathy; Zhu, Mingyao; Yang, Deshan; Palaniswaamy, Geethpriya; Yaddanapudi, Sridhar; Wooten, Omar H; Brame, Scott; Mutic, Sasa

    2013-01-01

    A robust, efficient, and reliable quality assurance (QA) process is highly desired for modern external beam radiation therapy treatments. Here, we report the results of a semiautomatic, pretreatment, patient-specific QA process based on dynamic machine log file analysis, clinically implemented for intensity modulated radiation therapy (IMRT) treatments delivered by high energy linear accelerators (Varian 2100/2300 EX, Trilogy, iX-D, Varian Medical Systems Inc, Palo Alto, CA). The multileaf collimator (MLC) machine log files are called Dynalog by Varian. Using an in-house developed computer program called "Dynalog QA," we automatically compare the beam delivery parameters in the log files that are generated during pretreatment point dose verification measurements with the treatment plan to determine any discrepancies in IMRT deliveries. Fluence maps are constructed and compared between the delivered and planned beams. Between its clinical introduction in June 2009 and the end of 2010, 912 machine log file QA analyses were performed. Among these, 14 errors causing dosimetric deviation were detected and required further investigation and intervention. These errors were the result of human operating mistakes, flawed treatment planning, and data modification during plan file transfer. Minor errors were also reported in 174 other log file analyses, some of which stemmed from false positives and unreliable results; the origins of these are discussed herein. It has been demonstrated that machine log file analysis is a robust, efficient, and reliable QA process capable of detecting errors originating from human mistakes, flawed planning, and data transfer problems. The possibility of detecting these errors is low using point and planar dosimetric measurements. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
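    The core comparison a Dynalog-style analysis performs, checking delivered leaf positions recorded in the log against the planned ones, can be reduced to a toy check (the function name and tolerance are illustrative, not the "Dynalog QA" program's actual interface):

```python
def leaf_position_errors(planned, delivered, tolerance_mm=1.0):
    """Compare planned vs. log-recorded leaf positions (in mm), sample by
    sample, and flag any deviation beyond a tolerance. Returns the maximum
    absolute error and the indices of out-of-tolerance samples."""
    errors = [abs(p - d) for p, d in zip(planned, delivered)]
    flagged = [i for i, e in enumerate(errors) if e > tolerance_mm]
    return max(errors), flagged
```

    A real analysis would run this over every leaf at every control point, then aggregate into the fluence-map comparison the abstract describes.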

  10. Monte Carlo based, patient-specific RapidArc QA using Linac log files.

    PubMed

    Teke, Tony; Bergman, Alanah M; Kwa, William; Gill, Bradford; Duzenli, Cheryl; Popescu, I Antoniu

    2010-01-01

    A Monte Carlo (MC) based QA process to validate the dynamic beam delivery accuracy for Varian RapidArc (Varian Medical Systems, Palo Alto, CA) using Linac delivery log files (DynaLog) is presented. Using DynaLog file analysis and MC simulations, the goal of this article is to (a) confirm that adequate sampling is used in the RapidArc optimization algorithm (177 static gantry angles) and (b) to assess the physical machine performance [gantry angle and monitor unit (MU) delivery accuracy]. Ten clinically acceptable RapidArc treatment plans were generated for various tumor sites and delivered to a water-equivalent cylindrical phantom on the treatment unit. Three Monte Carlo simulations were performed to calculate dose to the CT phantom image set: (a) One using a series of static gantry angles defined by 177 control points with treatment planning system (TPS) MLC control files (planning files), (b) one using continuous gantry rotation with TPS generated MLC control files, and (c) one using continuous gantry rotation with actual Linac delivery log files. Monte Carlo simulated dose distributions are compared to both ionization chamber point measurements and with RapidArc TPS calculated doses. The 3D dose distributions were compared using a 3D gamma-factor analysis, employing a 3%/3 mm distance-to-agreement criterion. The dose difference between MC simulations, TPS, and ionization chamber point measurements was less than 2.1%. For all plans, the MC calculated 3D dose distributions agreed well with the TPS calculated doses (gamma-factor values were less than 1 for more than 95% of the points considered). Machine performance QA was supplemented with an extensive DynaLog file analysis. A DynaLog file analysis showed that leaf position errors were less than 1 mm for 94% of the time and there were no leaf errors greater than 2.5 mm. The mean standard deviation in MU and gantry angle were 0.052 MU and 0.355 degrees, respectively, for the ten cases analyzed. The accuracy and
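    The gamma analysis used here combines dose difference and distance-to-agreement (DTA) into a single metric, with a point passing when its gamma value is at most 1. A simplified 1D sketch under the same 3%/3 mm criterion (the study's analysis is 3D; this function is illustrative only):

```python
import math

def gamma_index(ref, meas, spacing, dta=3.0, dose_tol=0.03):
    """1D gamma analysis: for each measured point, search all reference
    points for the minimum combined dose-difference / distance-to-agreement
    metric. Dose tolerance is a fraction of the maximum reference dose;
    `spacing` is the distance between adjacent points in mm."""
    dmax = max(ref)
    gammas = []
    for i, dm in enumerate(meas):
        best = math.inf
        for j, dr in enumerate(ref):
            dist = abs(i - j) * spacing                # mm
            ddiff = (dm - dr) / (dose_tol * dmax)      # in tolerance units
            best = min(best, math.hypot(dist / dta, ddiff))
        gammas.append(best)
    return gammas
```

    The reported passing rate is then simply the fraction of points with gamma ≤ 1.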

  11. TH-AB-201-12: Using Machine Log-Files for Treatment Planning and Delivery QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanhope, C; Liang, J; Drake, D

    2016-06-15

    Purpose: To determine the segment reduction and dose resolution necessary for machine log-files to effectively replace current phantom-based patient-specific quality assurance, while minimizing computational cost. Methods: Elekta’s Log File Convertor R3.2 records linac delivery parameters (dose rate, gantry angle, leaf position) every 40ms. Five VMAT plans [4 H&N, 1 Pulsed Brain] comprised of 2 arcs each were delivered on the ArcCHECK phantom. Log-files were reconstructed in Pinnacle on the phantom geometry using 1/2/3/4° control point spacing and 2/3/4mm dose grid resolution. Reconstruction effectiveness was quantified by comparing 2%/2mm gamma passing rates of the original and log-file plans. Modulation complexity scores (MCS) were calculated for each beam to correlate reconstruction accuracy and beam modulation. Percent error in absolute dose for each plan-pair combination (log-file vs. ArcCHECK, original vs. ArcCHECK, log-file vs. original) was calculated for each arc and every diode greater than 10% of the maximum measured dose (per beam). Comparing standard deviations of the three plan-pair distributions, relative noise of the ArcCHECK and log-file systems was elucidated. Results: The original plans exhibit a mean passing rate of 95.1±1.3%. The eight more modulated H&N arcs [MCS=0.088±0.014] and two less modulated brain arcs [MCS=0.291±0.004] yielded log-file pass rates most similar to the original plan when using 1°/2mm [0.05%±1.3% lower] and 2°/3mm [0.35±0.64% higher] log-file reconstructions respectively. Log-file and original plans displayed percent diode dose errors 4.29±6.27% and 3.61±6.57% higher than measurement. Excluding the phantom eliminates diode miscalibration and setup errors; log-file dose errors were 0.72±3.06% higher than the original plans – significantly less noisy. Conclusion: For log-file reconstructed VMAT arcs, 1° control point spacing and 2mm dose resolution is recommended, however, less modulated arcs may

  12. Log file-based patient dose calculations of double-arc VMAT for head-and-neck radiotherapy.

    PubMed

    Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Majima, Kazuhiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi

    2018-04-01

    The log file-based method cannot detect dosimetric changes due to linac component miscalibration, because log files are insensitive to such miscalibration. The purpose of this study was to quantify the dosimetric changes in log file-based patient dose calculations for double-arc volumetric-modulated arc therapy (VMAT) in head-and-neck cases. Fifteen head-and-neck cases participated in this study. For each case, treatment planning system (TPS) doses were produced by double-arc and single-arc VMAT. Miscalibration-simulated log files were generated by inducing a leaf miscalibration of ±0.5 mm into the log files that were acquired during VMAT irradiation. Subsequently, patient doses were estimated using the miscalibration-simulated log files. For double-arc VMAT, the change from the TPS dose to the miscalibration-simulated log file dose for the planning target volume (PTV) was 0.9 Gy in D mean and 1.4% in tumor control probability. For organs at risk (OARs), the change in D mean was <0.7 Gy and in normal tissue complication probability <1.8%. A comparison between double-arc and single-arc VMAT for the PTV showed statistically significant differences in the changes evaluated by D mean and radiobiological metrics (P < 0.01), even though the magnitude of these differences was small. Similarly, for OARs, the magnitude of these changes was small. For both the PTV and OARs, the log file-based estimate of patient dose for double-arc VMAT has accuracy comparable to that for single-arc VMAT. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. SU-E-T-473: A Patient-Specific QC Paradigm Based On Trajectory Log Files and DICOM Plan Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeMarco, J; McCloskey, S; Low, D

    Purpose: To evaluate a remote QC tool for monitoring treatment machine parameters and treatment workflow. Methods: The Varian TrueBeamTM linear accelerator is a digital machine that records machine axis parameters and MLC leaf positions as a function of delivered monitor unit or control point. This information is saved to a binary trajectory log file for every treatment or imaging field in the patient treatment session. A MATLAB analysis routine was developed to parse the trajectory log files for a given patient, compare the expected versus actual machine and MLC positions as well as perform a cross-comparison with the DICOM-RT plan file exported from the treatment planning system. The parsing routine sorts the trajectory log files based on the time and date stamp and generates a sequential report file listing treatment parameters and provides a match relative to the DICOM-RT plan file. Results: The trajectory log parsing-routine was compared against a standard record and verify listing for patients undergoing initial IMRT dosimetry verification and weekly and final chart QC. The complete treatment course was independently verified for 10 patients of varying treatment site and a total of 1267 treatment fields were evaluated including pre-treatment imaging fields where applicable. In the context of IMRT plan verification, eight prostate SBRT plans with 4-arcs per plan were evaluated based on expected versus actual machine axis parameters. The average value for the maximum RMS MLC error was 0.067±0.001mm and 0.066±0.002mm for leaf bank A and B respectively. Conclusion: A real-time QC analysis program was tested using trajectory log files and DICOM-RT plan files. The parsing routine is efficient and able to evaluate all relevant machine axis parameters during a patient treatment course including MLC leaf positions and table positions at time of image acquisition and during treatment.

  14. Zebra: A striped network file system

    NASA Technical Reports Server (NTRS)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity update.
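    Zebra's stream striping with parity can be sketched as follows: the client's byte stream is cut into fixed-size fragments, each stripe of N−1 data fragments gets an XOR parity fragment, and any single lost fragment can be rebuilt from the survivors. The function names are illustrative, not Zebra's actual interface:

```python
def stripe_with_parity(stream, nservers, fragment_size):
    """Split a client's write stream into stripe fragments and append an
    XOR parity fragment per stripe, RAID-style. The last fragment is
    zero-padded so all fragments in a stripe have equal length."""
    frags = [stream[i:i + fragment_size].ljust(fragment_size, b"\x00")
             for i in range(0, len(stream), fragment_size)]
    stripes = []
    for i in range(0, len(frags), nservers - 1):
        data = frags[i:i + nservers - 1]
        parity = data[0]
        for f in data[1:]:
            parity = bytes(a ^ b for a, b in zip(parity, f))
        stripes.append(data + [parity])  # one fragment per server
    return stripes

def reconstruct(stripe, lost):
    """Rebuild a lost fragment by XOR-ing the surviving fragments
    (including parity) of the same stripe."""
    survivors = [f for i, f in enumerate(stripe) if i != lost]
    out = survivors[0]
    for f in survivors[1:]:
        out = bytes(a ^ b for a, b in zip(out, f))
    return out
```

    Because whole stripes are written at once from the client's log, parity is computed over data already in memory, which is the "efficient parity computation" advantage the abstract mentions.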

  15. Taming Log Files from Game/Simulation-Based Assessments: Data Models and Data Analysis Tools. Research Report. ETS RR-16-10

    ERIC Educational Resources Information Center

    Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm

    2016-01-01

    Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…

  16. INSPIRE and SPIRES Log File Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Cole; /Wheaton Coll. /SLAC

    2012-08-31

    SPIRES, an aging high-energy physics publication data base, is in the process of being replaced by INSPIRE. In order to ease the transition from SPIRES to INSPIRE it is important to understand user behavior and the drivers for adoption. The goal of this project was to address some questions in regards to the presumed two-thirds of the users still using SPIRES. These questions are answered through analysis of the log files from both websites. A series of scripts were developed to collect and interpret the data contained in the log files. The common search patterns and usage comparisons are made between INSPIRE and SPIRES, and a method for detecting user frustration is presented. The analysis reveals a more even split than originally thought as well as the expected trend of user transition to INSPIRE.

  17. Clinical impact of dosimetric changes for volumetric modulated arc therapy in log file-based patient dose calculations.

    PubMed

    Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi

    2017-10-01

    A log file-based method cannot detect dosimetric changes due to linac component miscalibration because log files are insensitive to miscalibration. Herein, clinical impacts of dosimetric changes on a log file-based method were determined. Five head-and-neck and five prostate plans were applied. Miscalibration-simulated log files were generated by inducing a linac component miscalibration into the log file. Miscalibration magnitudes for leaf, gantry, and collimator at the general tolerance level were ±0.5mm, ±1°, and ±1°, respectively, and at a tighter tolerance level achievable on current linac were ±0.3mm, ±0.5°, and ±0.5°, respectively. Re-calculations were performed on patient anatomy using log file data. Changes in tumor control probability/normal tissue complication probability from treatment planning system dose to re-calculated dose at the general tolerance level was 1.8% on planning target volume (PTV) and 2.4% on organs at risk (OARs) in both plans. These changes at the tighter tolerance level were improved to 1.0% on PTV and to 1.5% on OARs, with a statistically significant difference. We determined the clinical impacts of dosimetric changes on a log file-based method using a general tolerance level and a tighter tolerance level for linac miscalibration and found that a tighter tolerance level significantly improved the accuracy of the log file-based method. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  18. SU-E-T-184: Clinical VMAT QA Practice Using LINAC Delivery Log Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, H; Jacobson, T; Gu, X

    2015-06-15

    Purpose: To evaluate the accuracy of volumetric modulated arc therapy (VMAT) treatment delivery dose clouds by comparing linac log data to doses measured using an ionization chamber and film. Methods: A commercial IMRT quality assurance (QA) process utilizing a DICOM-RT framework was tested for clinical practice using 30 prostate and 30 head and neck VMAT plans. Delivered 3D VMAT dose distributions were independently checked using a PinPoint ionization chamber and radiographic film in a solid water phantom. DICOM RT coordinates were used to extract the corresponding point and planar doses from 3D log file dose distributions. Point doses were evaluated by computing the percent error between log file and chamber measured values. A planar dose evaluation was performed for each plan using a 2D gamma analysis with 3% global dose difference and 3 mm isodose point distance criteria. The same analysis was performed to compare treatment planning system (TPS) doses to measured values to establish a baseline assessment of agreement. Results: The mean percent error between log file and ionization chamber dose was 1.0%±2.1% for prostate VMAT plans and −0.2%±1.4% for head and neck plans. The corresponding TPS calculated and measured ionization chamber values agree within 1.7%±1.6%. The average 2D gamma passing rates for the log file comparison to film are 98.8%±1.0% and 96.2%±4.2% for the prostate and head and neck plans, respectively. The corresponding passing rates for the TPS comparison to film are 99.4%±0.5% and 93.9%±5.1%. Overall, the point dose and film data indicate that log file determined doses are in excellent agreement with measured values. Conclusion: Clinical VMAT QA practice using LINAC treatment log files is a fast and reliable method for patient-specific plan evaluation.

  19. Verification of respiratory-gated radiotherapy with new real-time tumour-tracking radiotherapy system using cine EPID images and a log file.

    PubMed

    Shiinoki, Takehiro; Hanazawa, Hideki; Yuasa, Yuki; Fujimoto, Koya; Uehara, Takuya; Shibuya, Keiko

    2017-02-21

    A combined system comprising the TrueBeam linear accelerator and a new real-time tumour-tracking radiotherapy system, SyncTraX, was installed at our institution. The objectives of this study are to develop a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine electronic portal image device (EPID) images and a log file and to verify this treatment in clinical cases. Respiratory-gated radiotherapy was performed using TrueBeam and the SyncTraX system. Cine EPID images and a log file were acquired for a phantom and three patients during the course of the treatment. Digitally reconstructed radiographs (DRRs) were created for each treatment beam using a planning CT set. The cine EPID images, log file, and DRRs were analysed using a developed software. For the phantom case, the accuracy of the proposed method was evaluated to verify the respiratory-gated radiotherapy. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker used as an internal surrogate were calculated to evaluate the gating accuracy and set-up uncertainty in the superior-inferior (SI), anterior-posterior (AP), and left-right (LR) directions. The proposed method achieved high accuracy for the phantom verification. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker were  ⩽3 mm and  ±3 mm in the SI, AP, and LR directions. We proposed a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine EPID images and a log file and showed that this treatment is performed with high accuracy in clinical cases.

  20. Quantification of residual dose estimation error on log file-based patient dose calculation.

    PubMed

    Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi

    2016-05-01

    The log file-based patient dose estimation includes a residual dose estimation error caused by leaf miscalibration, which cannot be reflected in the estimated dose. The purpose of this study is to determine this residual dose estimation error. Modified log files simulating leaf miscalibration were generated for seven head-and-neck and prostate volumetric modulated arc therapy (VMAT) plans by shifting both leaf banks (systematic leaf gap errors: ±2.0, ±1.0, and ±0.5mm in opposite directions and systematic leaf shifts: ±1.0mm in the same direction) using MATLAB-based (MathWorks, Natick, MA) in-house software. The modified and non-modified log files were imported back into the treatment planning system and recalculated. Subsequently, the generalized equivalent uniform dose (gEUD) was quantified for the planning target volume (PTV) and organs at risk. For MLC leaves calibrated within ±0.5mm, the residual dose estimation errors, obtained from the slope of the linear regression of gEUD change between non-modified and modified log file doses per unit leaf gap error, are 1.32±0.27% and 0.82±0.17Gy for the PTV and spinal cord, respectively, in head-and-neck plans, and 1.22±0.36%, 0.95±0.14Gy, and 0.45±0.08Gy for the PTV, rectum, and bladder, respectively, in prostate plans. In this work, we determine the residual dose estimation errors of log file-based patient dose calculation for VMAT delivery according to the MLC calibration accuracy. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
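    The gEUD metric used in this and the related abstracts has a simple closed form when all voxels have equal volume. A minimal sketch (the exponent `a` is the tissue-specific parameter; the function name is illustrative):

```python
def gEUD(doses, a):
    """Generalized equivalent uniform dose for equal-volume voxels:
    gEUD = ((1/N) * sum(d_i ** a)) ** (1/a).
    a = 1 reduces to the mean dose (parallel organs); large positive a
    weights toward the maximum dose (serial organs such as spinal cord)."""
    n = len(doses)
    return (sum(d ** a for d in doses) / n) ** (1.0 / a)
```

    The study's residual-error slopes are obtained by regressing changes in this quantity against the simulated leaf gap error.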

  1. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
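    The block-size rule quoted in the abstract (total data divided by the number of processes) and the exchange step can be sketched as follows; the function names are hypothetical, and a real implementation would exchange data over MPI rather than concatenate in one address space:

```python
def dynamic_block_size(total_bytes, nprocs):
    """The abstract's example rule: block size = total data to be stored
    divided by the number of parallel processes (rounded up so every
    byte lands in some block)."""
    return -(-total_bytes // nprocs)  # ceiling division

def assign_blocks(chunks, nprocs):
    """Concatenate each process's data and re-partition it into
    equal-sized blocks, one per process, mimicking the exchange step."""
    data = b"".join(chunks)
    bs = dynamic_block_size(len(data), nprocs)
    return [data[i * bs:(i + 1) * bs] for i in range(nprocs)]
```

    After the exchange, every process writes one aligned, equal-sized block, which is the pattern that suits a log-structured parallel file system such as PLFS.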

  2. Verification of respiratory-gated radiotherapy with new real-time tumour-tracking radiotherapy system using cine EPID images and a log file

    NASA Astrophysics Data System (ADS)

    Shiinoki, Takehiro; Hanazawa, Hideki; Yuasa, Yuki; Fujimoto, Koya; Uehara, Takuya; Shibuya, Keiko

    2017-02-01

    A combined system comprising the TrueBeam linear accelerator and a new real-time tumour-tracking radiotherapy system, SyncTraX, was installed at our institution. The objectives of this study are to develop a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine electronic portal image device (EPID) images and a log file and to verify this treatment in clinical cases. Respiratory-gated radiotherapy was performed using TrueBeam and the SyncTraX system. Cine EPID images and a log file were acquired for a phantom and three patients during the course of the treatment. Digitally reconstructed radiographs (DRRs) were created for each treatment beam using a planning CT set. The cine EPID images, log file, and DRRs were analysed using in-house software. For the phantom case, the accuracy of the proposed method was evaluated to verify the respiratory-gated radiotherapy. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker used as an internal surrogate were calculated to evaluate the gating accuracy and set-up uncertainty in the superior-inferior (SI), anterior-posterior (AP), and left-right (LR) directions. The proposed method achieved high accuracy for the phantom verification. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker were ⩽3 mm and ±3 mm in the SI, AP, and LR directions. We proposed a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine EPID images and a log file and showed that this treatment is performed with high accuracy in clinical cases. This work was partly presented at the 58th Annual Meeting of the American Association of Physicists in Medicine.

  3. Log ASCII Standard (LAS) Files for Geophysical Wireline Well Logs and Their Application to Geologic Cross Sections Through the Central Appalachian Basin

    USGS Publications Warehouse

    Crangle, Robert D.

    2007-01-01

    Introduction The U.S. Geological Survey (USGS) uses geophysical wireline well logs for a variety of purposes, including stratigraphic correlation (Hettinger, 2001; Ryder, 2002), petroleum reservoir analyses (Nelson and Bird, 2005), aquifer studies (Balch, 1988), and synthetic seismic profiles (Kulander and Ryder, 2005). Commonly, well logs are easier to visualize, manipulate, and interpret when available in a digital format. In recent geologic cross sections E-E' and D-D', constructed through the central Appalachian basin (Ryder, Swezey, and others, in press; Ryder, Crangle, and others, in press), gamma ray well log traces and lithologic logs were used to correlate key stratigraphic intervals (Fig. 1). The stratigraphy and structure of the cross sections are illustrated through the use of graphical software applications (e.g., Adobe Illustrator). The gamma ray traces were digitized in Neuralog (proprietary software) from paper well logs and converted to a Log ASCII Standard (LAS) format. Once converted, the LAS files were transformed to images through an LAS-reader application (e.g., GeoGraphix Prizm) and then overlain in positions adjacent to well locations, used for stratigraphic control, on each cross section. This report summarizes the procedures used to convert paper logs to a digital LAS format using a third-party software application, Neuralog. Included in this report are LAS files for sixteen wells used in geologic cross section E-E' (Table 1) and thirteen wells used in geologic cross section D-D' (Table 2).

  4. SU-F-T-233: Evaluation of Treatment Delivery Parameters Using High Resolution ELEKTA Log Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kabat, C; Defoor, D; Alexandrian, A

    2016-06-15

    Purpose: As modern linacs have become more technologically advanced with the implementation of IGRT and IMRT with HDMLCs, more elaborate tracking techniques to monitor components’ integrity are paramount. ElektaLog files are generated every 40 milliseconds and can be analyzed to track subtle changes, providing another aspect of quality assurance. This allows constant monitoring of fraction consistency in addition to machine reliability. With this in mind, the aim of the study was to evaluate whether ElektaLog files can be utilized for linac consistency QA. Methods: ElektaLogs were reviewed for 16 IMRT patient plans with >16 fractions. Logs were analyzed by creating fluence maps from recorded values of MLC locations, jaw locations, and dose per unit time. Fluence maps were then utilized to calculate a 2D gamma index with a 2%/2-mm criterion for each fraction. ElektaLogs were also used to analyze positional errors for MLC leaves and jaws, which were used to compute an overall error for the MLC banks, Y-jaws, and X-jaws by taking the root-mean-square value of the individual errors recorded during treatment. Additionally, beam-on time was calculated using the number of ElektaLog file entries within the file. Results: The average 2D gamma for all 16 patient plans was found to be 98.0±2.0%. Recorded gamma index values showed an acceptable correlation between fractions. Average RMS values for the MLC leaves and the jaws resulted in a leaf variation of roughly 0.3±0.08 mm and a jaw variation of about 0.15±0.04 mm, both of which fall within clinical tolerances. Conclusion: The use of ElektaLog files for day-to-day evaluation of linac integrity and patient QA can allow for reliable analysis of system accuracy and performance.

  5. Identification and Management of Pump Thrombus in the HeartWare Left Ventricular Assist Device System: A Novel Approach Using Log File Analysis.

    PubMed

    Jorde, Ulrich P; Aaronson, Keith D; Najjar, Samer S; Pagani, Francis D; Hayward, Christopher; Zimpfer, Daniel; Schlöglhofer, Thomas; Pham, Duc T; Goldstein, Daniel J; Leadley, Katrin; Chow, Ming-Jay; Brown, Michael C; Uriel, Nir

    2015-11-01

    The study sought to characterize patterns in the HeartWare (HeartWare Inc., Framingham, Massachusetts) ventricular assist device (HVAD) log files associated with successful medical treatment of device thrombosis. Device thrombosis is a serious adverse event for mechanical circulatory support devices and is often preceded by increased power consumption. Log files of the pump power are easily accessible on the bedside monitor of HVAD patients and may allow early diagnosis of device thrombosis. Furthermore, analysis of the log files may be able to predict the success rate of thrombolysis or the need for pump exchange. The log files of 15 ADVANCE trial patients (algorithm derivation cohort) with 16 pump thrombus events treated with tissue plasminogen activator (tPA) were assessed for changes in the absolute power consumption and in its rate of increase. Successful thrombolysis was defined as a clinical resolution of pump thrombus, including normalization of power consumption and improvement in biochemical markers of hemolysis. Significant differences in log file patterns between successful and unsuccessful thrombolysis treatments were verified in 43 patients with 53 pump thrombus events implanted outside of clinical trials (validation cohort). The overall success rate of tPA therapy was 57%. Successful treatments had significantly lower measures of percent of expected power (130.9% vs. 196.1%, p = 0.016) and rate of increase in power (0.61 vs. 2.87, p < 0.0001). Medical therapy was successful in 77.7% of the algorithm development cohort and 81.3% of the validation cohort when the rate of power increase and the percent of expected power were below 1.25 and 200%, respectively. Log file parameters can potentially predict the likelihood of successful tPA treatment and, if validated prospectively, could substantially alter the approach to thrombus management. Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
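The decision rule reported above can be written down as a two-threshold check. This is an illustrative sketch only, not a clinical tool: the function names are hypothetical, and the threshold units follow the abstract as reported (rate of power increase below 1.25, percent of expected power below 200%).

```python
def thrombolysis_favorable(power_rise_rate, pct_expected_power):
    """Return True when both log-file parameters fall below the thresholds
    that the study associated with successful tPA treatment."""
    return power_rise_rate < 1.25 and pct_expected_power < 200.0

# Group means from the abstract: successful vs. unsuccessful treatments.
print(thrombolysis_favorable(0.61, 130.9))  # True
print(thrombolysis_favorable(2.87, 196.1))  # False
```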

  6. An EXCEL macro for importing log ASCII standard (LAS) files into EXCEL worksheets

    NASA Astrophysics Data System (ADS)

    Özkaya, Sait Ismail

    1996-02-01

    An EXCEL 5.0 macro is presented for converting a LAS text file into an EXCEL worksheet. Although EXCEL has commands for importing text files and parsing text lines, LAS files must be decoded line by line because three different delimiters are used to separate fields of differing length. The macro is intended to eliminate manual decoding of LAS version 2.0. LAS is a floppy-disk format for the storage and transfer of log data as text files, proposed by the Canadian Well Logging Society. The present EXCEL macro decodes the different sections of a LAS file, separates the fields, and places them into different columns of an EXCEL worksheet. To import a LAS file into EXCEL without errors, the file must not contain any unrecognized symbols, and the data section must be the last section. The program does not check for the presence of mandatory sections or fields as required by LAS rules. Once a file is incorporated into EXCEL, mandatory sections and fields may be inspected visually.
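The three-delimiter decoding the macro performs can be sketched in a few lines. This is a hypothetical Python stand-in for the EXCEL/VBA macro: in a LAS 2.0 header line, the first "." ends the mnemonic, the first space after it ends the unit, and the last ":" starts the description.

```python
def parse_las_header_line(line):
    """Split one LAS 2.0 header line into (mnemonic, unit, value, description)."""
    mnemonic, rest = line.split(".", 1)       # first "." ends the mnemonic
    data, description = rest.rsplit(":", 1)   # last ":" starts the description
    unit, _, value = data.partition(" ")      # first space after unit ends it
    return (mnemonic.strip(), unit.strip(), value.strip(), description.strip())

line = "STRT.M        583.0 : START DEPTH"
print(parse_las_header_line(line))
# ('STRT', 'M', '583.0', 'START DEPTH')
```

A full reader would additionally split the file into its "~"-prefixed sections and treat the ~ASCII data section as whitespace-delimited columns.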

  7. Analyzing Log Files to Predict Students' Problem Solving Performance in a Computer-Based Physics Tutor

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2015-01-01

    This study investigates whether information saved in the log files of a computer-based tutor can be used to predict the problem solving performance of students. The log files of a computer-based physics tutoring environment called Andes Physics Tutor were analyzed to build a logistic regression model that predicted success and failure of students'…

  8. Replication in the Harp File System

    DTIC Science & Technology

    1981-07-01

    ...Shrira, Michael Williams. July 1991. © Massachusetts Institute of Technology. (To appear in the Proceedings of the Thirteenth ACM Symposium on Operating Systems Principles.) ...Daniels, D. S., Spector, A. Z., and Thompson, D. S. Distributed Logging for Transaction Processing. ACM Special Interest Group on Management of Data 1987 Annual Conference. ...System. USENIX Conference Proceedings, June 1990, pp. 63-71. 15. Hagmann, R. Reimplementing the Cedar File System Using Logging and Group Commit.

  9. Building analytical platform with Big Data solutions for log files of PanDA infrastructure

    NASA Astrophysics Data System (ADS)

    Alekseev, A. A.; Barreiro Megino, F. G.; Klimentov, A. A.; Korchuganova, T. A.; Maeno, T.; Padolski, S. V.

    2018-05-01

    The paper describes the implementation of a high-performance system for the processing and analysis of log files for the PanDA infrastructure of the ATLAS experiment at the Large Hadron Collider (LHC), which is responsible for the workload management of on the order of 2M daily jobs across the Worldwide LHC Computing Grid. The solution is based on the ELK technology stack, which includes several components: Filebeat, Logstash, ElasticSearch (ES), and Kibana. Filebeat collects data from logs. Logstash processes the data and exports it to ElasticSearch. ES is responsible for centralized data storage. Data accumulated in ES can be viewed using Kibana. These components were integrated with the PanDA infrastructure and replaced previous log processing systems for increased scalability and usability. The authors describe all the components and their configuration tuning for the current tasks and the scale of the actual system, and give several real-life examples of how this centralized log processing and storage service is used, to showcase the advantages for daily operations.
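The Logstash stage of such a pipeline essentially turns a raw log line into a structured JSON document for Elasticsearch. The sketch below shows that transformation in Python under stated assumptions: the line layout and field names are hypothetical, not the actual PanDA log format or the Logstash grok configuration used by the authors.

```python
import json
import re

# Hypothetical log-line layout: "<date> <time> <level> <component>: <message>"
LINE_RE = re.compile(
    r"(?P<timestamp>\S+ \S+) (?P<level>\w+) (?P<component>\S+): (?P<message>.*)"
)

def to_es_document(line):
    """Parse one raw line into the JSON document a Logstash filter would
    emit for indexing in Elasticsearch; None if the line does not match."""
    match = LINE_RE.match(line)
    return json.dumps(match.groupdict()) if match else None

line = "2018-05-01 12:00:00 INFO panda.server: job 123 dispatched"
print(to_es_document(line))
```

Once indexed, each field (level, component, timestamp) becomes searchable and plottable in Kibana.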

  10. Parallel file system with metadata distributed across partitioned key-value store

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).

  11. SU-E-T-392: Evaluation of Ion Chamber/film and Log File Based QA to Detect Delivery Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, C; Mason, B; Kirsner, S

    2015-06-15

    Purpose: Ion chamber and film (ICAF) is a method used to verify patient dose prior to treatment. More recently, log file based QA has been shown to be an alternative to measurement-based QA. In this study, we delivered VMAT plans with and without errors to determine if ICAF and/or log file based QA was able to detect the errors. Methods: Using two VMAT patients, the original treatment plan plus 7 additional plans with delivery errors introduced were generated and delivered. The erroneous plans had gantry, collimator, MLC, gantry and collimator, collimator and MLC, MLC and gantry, and gantry, collimator, and MLC errors. The gantry and collimator errors were off by 4° for one of the two arcs. The MLC error introduced was one in which the opening aperture didn’t move throughout the delivery of the field. For each delivery, an ICAF measurement was made as well as a dose comparison based upon log files. Passing criteria to evaluate the plans were ion chamber agreement within 5% and 90% of film pixels passing the 3 mm/3% gamma analysis (GA). For log file analysis, the criteria were 90% of voxels passing the 3 mm/3% 3D GA and beam parameters matching what was in the plan. Results: Two original plans were delivered and passed both ICAF and log file based QA. Both ICAF and log file QA met the dosimetry criteria on 4 of the 12 erroneous cases analyzed (2 cases were not analyzed). For the log file analysis, all 12 erroneous plans flagged a mismatch between the delivery and what was planned. The 8 plans that didn’t meet criteria all had MLC errors. Conclusion: Our study demonstrates that log file based pre-treatment QA was able to detect small errors that may not be detected using ICAF, and both methods were able to detect larger delivery errors.

  12. A Varian DynaLog file-based procedure for patient dose-volume histogram-based IMRT QA.

    PubMed

    Calvo-Ortega, Juan F; Teke, Tony; Moragues, Sandra; Pozo, Miquel; Casals-Farran, Joan

    2014-03-06

    In the present study, we describe a method based on the analysis of the dynamic MLC log files (DynaLog) generated by the controller of a Varian linear accelerator in order to perform patient-specific IMRT QA. The DynaLog files of a Varian Millennium MLC, recorded during an IMRT treatment, can be processed using a MATLAB-based code in order to generate the actual fluence for each beam and thus recalculate the actual patient dose distribution using the Eclipse treatment planning system. The accuracy of the DynaLog-based dose reconstruction procedure was assessed by introducing ten intended errors to perturb the fluence of the beams of a reference plan, such that ten erroneous plans were generated. In-phantom measurements with an ionization chamber (ion chamber) and planar dose measurements using an EPID system were performed to investigate the correlation between the measured dose changes and the expected ones detected by the reconstructed plans for the ten intended erroneous cases. Moreover, the method was applied to 20 clinical plans for different locations (prostate, lung, breast, and head and neck). A dose-volume histogram (DVH) metric was used to evaluate the impact of the delivery errors in terms of dose to the patient. The ionometric measurements revealed a significant positive correlation (R² = 0.9993) between the variations of the dose induced in the erroneous plans with respect to the reference plan and the corresponding changes indicated by the DynaLog-based reconstructed plans. The EPID measurements showed that the accuracy of the DynaLog-based method to reconstruct the beam fluence was comparable with the dosimetric resolution of the portal dosimetry used in this work (3%/3 mm). The DynaLog-based reconstruction method described in this study is a suitable tool for patient-specific IMRT QA, allowing the delivered plan to be evaluated in terms of DVH metrics computed on the patient's planning CT.

  13. LogScope

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Smith, Margaret H.; Barringer, Howard; Groce, Alex

    2012-01-01

    LogScope is a software package for analyzing log files. The intended use is for offline post-processing of such logs, after the execution of the system under test. LogScope can, however, in principle, also be used to monitor systems online during their execution. Logs are checked against requirements formulated as monitors expressed in a rule-based specification language. This language has similarities to a state machine language, but is more expressive, for example, in its handling of data parameters. The specification language is user friendly, simple, and yet expressive enough for many practical scenarios. The LogScope software was initially developed to specifically assist in testing JPL's Mars Science Laboratory (MSL) flight software, but it is very generic in nature and can be applied to any application that produces some form of logging information (which almost any software does).

  14. Extracting the Textual and Temporal Structure of Supercomputing Logs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, S; Singh, I; Chandra, A

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
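The syntactic-structure idea can be illustrated with a small sketch (my own simplification, not the paper's algorithm): variable tokens such as numbers and hex IDs are masked out, so messages with the same fixed wording collapse into one template, and templates become cluster keys.

```python
import re
from collections import defaultdict

def template_of(message):
    """Mask variable tokens so only the fixed syntactic structure remains."""
    masked = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)
    return re.sub(r"\d+", "<NUM>", masked)

def cluster(messages):
    """Group messages whose masked templates are identical."""
    groups = defaultdict(list)
    for msg in messages:
        groups[template_of(msg)].append(msg)
    return dict(groups)

log = [
    "node 17 ECC error at 0x1f3a",
    "node 4 ECC error at 0xbeef",
    "link down on port 2",
]
for template, members in cluster(log).items():
    print(template, len(members))
```

The paper's method goes further (online clustering plus temporal correlation between groups), but the masking step above is the core of turning free-text messages into comparable structures.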

  15. SU-E-T-784: Using MLC Log Files for Daily IMRT Delivery Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stathakis, S; Defoor, D; Linden, P

    2015-06-15

    Purpose: To verify daily intensity modulated radiation therapy (IMRT) treatments using multi-leaf collimator (MLC) log files. Methods: The MLC log files from a NovalisTX Varian linear accelerator were used in this study. The MLC files were recorded daily for all patients undergoing IMRT or volumetric modulated arc therapy (VMAT). The first record of each patient was used as reference, and all records for subsequent days were compared against the reference. In-house MATLAB software was used for the comparisons. Each MLC log file was converted to a fluence map (FM), and a gamma index (γ) analysis was used for the evaluation of each daily delivery for every patient. The tolerance for the gamma index was set to 2% dose difference and 2 mm distance to agreement, while points with a signal of 10% or lower of the maximum value were excluded from the comparisons. Results: The γ between each of the reference FMs and the consecutive daily fraction FMs had an average value of 99.1% (range: 98.2 to 100.0%). The FM images were reconstructed at various resolutions in order to study the effect of the resolution on the γ and at the same time reduce the time for processing the images. We found that the comparison of images with the highest resolution (768×1024) yielded on average a lower γ (99.1%) than the ones with low resolution (192×256) (γ = 99.5%). Conclusion: We developed in-house software that allows us to monitor the quality of daily IMRT and VMAT treatment deliveries using information from the MLC log files of the linear accelerator. The information can be analyzed and evaluated as early as after the completion of each daily treatment. Such a tool can be valuable for assessing the effect of MLC positioning on plan quality, especially in the context of adaptive radiotherapy.
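The fluence-map comparison described above can be sketched as a brute-force global gamma index (2% dose difference, 2 mm distance to agreement), excluding points below 10% of the maximum signal as in the abstract. This is an illustrative implementation, not the authors' MATLAB code, and it is far too slow for 768×1024 maps; a production version would restrict the search to a neighbourhood around each point.

```python
import numpy as np

def gamma_pass_rate(ref, test, pixel_mm=1.0, dose_tol=0.02, dist_tol_mm=2.0):
    """Percentage of evaluable reference points with gamma <= 1."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    norm = ref.max()
    ys, xs = np.indices(ref.shape)
    passed = total = 0
    for y in range(ref.shape[0]):
        for x in range(ref.shape[1]):
            if ref[y, x] < 0.10 * norm:        # exclude low-signal points
                continue
            total += 1
            dist2 = ((ys - y) ** 2 + (xs - x) ** 2) * pixel_mm ** 2
            dose2 = ((test - ref[y, x]) / (dose_tol * norm)) ** 2
            gamma2 = dist2 / dist_tol_mm ** 2 + dose2
            if gamma2.min() <= 1.0:
                passed += 1
    return 100.0 * passed / total

a = np.array([[0.0, 50.0, 100.0], [0.0, 50.0, 100.0]])
print(gamma_pass_rate(a, a))          # 100.0 (identical maps)
print(gamma_pass_rate(a, a * 1.01))   # 100.0 (1% scaling within 2% tolerance)
```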

  16. SU-F-T-288: Impact of Trajectory Log Files for Clarkson-Based Independent Dose Verification of IMRT and VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, R; Kamima, T; Tachibana, H

    2016-06-15

    Purpose: To investigate the effect of using trajectory log files from the linear accelerator for Clarkson-based independent dose verification of IMRT and VMAT plans. Methods: A CT-based independent dose verification software package (Simple MU Analysis: SMU, Triangle Products, Japan) with a Clarkson-based algorithm was modified to calculate dose using the trajectory log files. Eclipse, with the three techniques of step and shoot (SS), sliding window (SW), and RapidArc (RA), was used as the treatment planning system (TPS). In this study, clinically approved IMRT and VMAT plans for prostate and head and neck (HN) at two institutions were retrospectively analyzed to assess the dose deviation between the DICOM-RT plan (PL) and the trajectory log file (TJ). An additional analysis was performed to evaluate the MLC error detection capability of SMU when the trajectory log files were modified by adding systematic errors (0.2, 0.5, 1.0 mm) and random errors (5, 10, 30 mm) to the actual MLC positions. Results: The dose deviations for prostate and HN at the two sites were 0.0% and 0.0% in SS, 0.1±0.0% and 0.1±0.1% in SW, and 0.6±0.5% and 0.7±0.9% in RA, respectively. The MLC error detection analysis showed that the HN IMRT plans were the most sensitive: a 0.2 mm systematic error produced a 0.7% dose deviation on average. The MLC random errors did not affect the dose error. Conclusion: The use of trajectory log files, which include the actual MLC positions, gantry angles, etc., should be more effective for independent verification. The tolerance level for the secondary check using the trajectory file may be similar to that of the verification using the DICOM-RT plan file. From the view of the resolution of MLC positional error detection, the secondary check could detect MLC position errors corresponding to the treatment sites and techniques. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).

  17. Linking log files with dosimetric accuracy--A multi-institutional study on quality assurance of volumetric modulated arc therapy.

    PubMed

    Pasler, Marlies; Kaas, Jochem; Perik, Thijs; Geuze, Job; Dreindl, Ralf; Künzler, Thomas; Wittkamper, Frits; Georg, Dietmar

    2015-12-01

    To systematically evaluate machine-specific quality assurance (QA) for volumetric modulated arc therapy (VMAT) based on log files by applying a dynamic benchmark plan. A VMAT benchmark plan was created and tested on 18 Elekta linacs (13 MLCi or MLCi2, 5 Agility) at 4 different institutions. Linac log files were analyzed and a delivery robustness index was introduced. For dosimetric measurements an ionization chamber array was used. Relative dose deviations were assessed by mean gamma for each control point and compared to the log file evaluation. Fourteen linacs delivered the VMAT benchmark plan, while 4 linacs failed by consistently terminating the delivery. The mean leaf error (±1 SD) was 0.3±0.2 mm for all linacs. Large MLC maximum errors up to 6.5 mm were observed at reversal positions. The delivery robustness index accounting for MLC position correction (0.8-1.0) correlated with delivery time (80-128 s) and depended on dose rate performance. Dosimetric evaluation indicated in general accurate plan reproducibility, with γ(mean) (±1 SD) = 0.4±0.2 for 1 mm/1%. However, single control point analysis revealed larger deviations, which corresponded well with the log file analysis. The designed benchmark plan helped identify linac-related malfunctions in dynamic mode for VMAT. Log files serve as an important additional QA measure to understand and visualize dynamic linac parameters. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Sawmill: A Logging File System for a High-Performance RAID Disk Array

    DTIC Science & Technology

    1995-01-01

    ...from limiting disk performance, new controller architectures connect the disks directly to the network so that data movement bypasses the file server. These developments raise two questions for file systems: how to get the best performance from a RAID, and how to use such a controller architecture. ...the RAID-II storage system; this architecture provides a fast data path that moves data rapidly among the disks, high-speed controller memory, and the network.

  19. SU-F-T-295: MLCs Performance and Patient-Specific IMRT QA Using Log File Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osman, A; American University of Beirut Medical Center, Beirut; Maalej, N

    2016-06-15

    Purpose: To analyze the performance of the multi-leaf collimators (MLCs) from the log files recorded during intensity modulated radiotherapy (IMRT) treatment, and to construct the relative fluence maps and perform gamma analysis to compare the planned and executed MLC movements. Methods: We developed a program to extract and analyze the data from dynamic log files (dynalog files) generated from sliding window IMRT delivery treatments. The program extracts the planned and executed (actual or delivered) MLC movements, then calculates and compares the relative planned and executed fluences. The fluence maps were used to perform gamma analysis (with 3% dose difference and 3 mm distance to agreement) for 3 IMRT patients. We compared our gamma analysis results with those obtained from the portal dose image prediction (PDIP) algorithm performed using the EPID. Results: For the 3 IMRT patient treatments, the maximum difference between the planned and the executed MLC positions was 1.2 mm. The gamma analysis results of the planned and delivered fluences were in good agreement with the gamma analysis from portal dosimetry. The maximum difference in the number of pixels passing the gamma criteria (3%/3 mm) was 0.19% with respect to the portal dosimetry results. Conclusion: MLC log files can be used to verify the performance of the MLCs. Patient-specific IMRT QA based on MLC movement log files gives similar results to EPID dosimetry. This promising method for patient-specific IMRT QA is fast, does not require dose measurements in a phantom, can be done before the treatment and for every fraction, and significantly reduces the IMRT workload. The author would like to thank King Fahd University of Petroleum and Minerals for the support.

  20. The key image and case log application: new radiology software for teaching file creation and case logging that incorporates elements of a social network.

    PubMed

    Rowe, Steven P; Siddiqui, Adeel; Bonekamp, David

    2014-07-01

    To create novel radiology key image software that is easy to use for novice users, incorporates elements adapted from social networking Web sites, facilitates resident and fellow education, and can serve as the engine for departmental sharing of interesting cases and follow-up studies. Using open-source programming languages and software, radiology key image software (the key image and case log application, KICLA) was developed. This system uses a lightweight interface with the institutional picture archiving and communication system (PACS) and enables the storage of key images, image series, and cine clips. It was designed to operate with minimal disruption to the radiologists' daily workflow. Many features of the user interface have been inspired by social networking Web sites, including image organization into private or public folders, flexible sharing with other users, and integration of departmental teaching files into the system. We also review the performance, usage, and acceptance of this novel system. KICLA was implemented at our institution and achieved widespread popularity among radiologists. A large number of key images have been transmitted to the system since it became available. After this early experience period, the most commonly encountered radiologic modalities are represented. A survey distributed to users revealed that most of the respondents found the system easy to use (89%) and fast at allowing them to record interesting cases (100%). One hundred percent of respondents also stated that they would recommend a system such as KICLA to their colleagues. The system described herein represents a significant upgrade to the Digital Imaging and Communications in Medicine teaching file paradigm, with efforts made to maximize its ease of use and to include characteristics inspired by social networking Web sites that give the system additional functionality, such as individual case logging. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  1. A Clustering Methodology of Web Log Data for Learning Management Systems

    ERIC Educational Resources Information Center

    Valsamidis, Stavros; Kontogiannis, Sotirios; Kazanidis, Ioannis; Theodosiou, Theodosios; Karakos, Alexandros

    2012-01-01

    Learning Management Systems (LMS) collect large amounts of data. Data mining techniques can be applied to analyse their web data log files. The instructors may use this data for assessing and measuring their courses. In this respect, we have proposed a methodology for analysing LMS courses and students' activity. This methodology uses a Markov…

  2. ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog

    NASA Technical Reports Server (NTRS)

    Gray, F. P., Jr. (Editor)

    1979-01-01

    A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access files. A description of the matrices written on these files is contained herein.

  3. Logs Perl Module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, R. K.

    2007-04-04

    A Perl module designed to read and parse the voluminous set of event or accounting log files produced by a Portable Batch System (PBS) server. This module can filter on date-time and/or record type. The data can be returned in a variety of formats.
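The parsing task such a module performs can be sketched briefly. The DOE package is Perl; this hypothetical Python analogue assumes the common PBS accounting-log layout of "date time;record_type;id;key=value ..." and is not that module's actual API.

```python
def parse_pbs_record(line):
    """Split one PBS accounting-log line into a dict of its fields."""
    timestamp, record_type, ident, message = line.split(";", 3)
    # The message portion is space-separated key=value pairs.
    fields = dict(pair.split("=", 1) for pair in message.split() if "=" in pair)
    return {"time": timestamp, "type": record_type, "id": ident, **fields}

line = "04/04/2007 10:15:00;E;42.server;user=alice resources_used.walltime=01:02:03"
rec = parse_pbs_record(line)
print(rec["type"], rec["user"], rec["resources_used.walltime"])
# E alice 01:02:03
```

Filtering on record type ("E" for job end, "S" for job start, and so on) or on the timestamp prefix then mirrors the filtering the module advertises.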

  4. DSSTOX MASTER STRUCTURE-INDEX FILE: SDF FILE AND ...

    EPA Pesticide Factsheets

    The DSSTox Master Structure-Index File serves to consolidate, manage, and ensure quality and uniformity of the chemical and substance information spanning all DSSTox Structure Data Files, including those in development but not yet published separately on this website.

  5. Predicting Correctness of Problem Solving from Low-Level Log Data in Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    Cetintas, Suleyman; Si, Luo; Xin, Yan Ping; Hord, Casey

    2009-01-01

    This paper proposes a learning based method that can automatically determine how likely a student is to give a correct answer to a problem in an intelligent tutoring system. Only log files that record students' actions with the system are used to train the model, therefore the modeling process doesn't require expert knowledge for identifying…

  6. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  7. Flexibility and Performance of Parallel File Systems

    NASA Technical Reports Server (NTRS)

    Kotz, David; Nieuwejaar, Nils

    1996-01-01

As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions makes applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (APIs). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.

  8. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  9. Highway Safety Information System guidebook for the Minnesota state data files. Volume 1 : SAS file formats

    DOT National Transportation Integrated Search

    2001-02-01

    The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...

  10. Evaluated nuclear structure data file

    NASA Astrophysics Data System (ADS)

    Tuli, J. K.

    1996-02-01

    The Evaluated Nuclear Structure Data File (ENSDF) contains the evaluated nuclear properties of all known nuclides, as derived both from nuclear reaction and radioactive decay measurements. All experimental data are evaluated to create the adopted properties for each nuclide. ENSDF, together with other numeric and bibliographic files, can be accessed on-line through the INTERNET or modem, and some of the databases are also available on the World Wide Web. The structure and the scope of ENSDF are presented along with the on-line access system of the National Nuclear Data Center at Brookhaven National Laboratory.

  11. Evaluated nuclear structure data file

    NASA Astrophysics Data System (ADS)

    Tuli, J. K.

The Evaluated Nuclear Structure Data File (ENSDF) contains the evaluated nuclear properties of all known nuclides. These properties are derived both from nuclear reaction and radioactive decay measurements. All experimental data are evaluated to create the adopted properties for each nuclide. ENSDF, together with other numeric and bibliographic files, can be accessed on-line through the INTERNET or modem. Some of the databases are also available on the World Wide Web. The structure and the scope of ENSDF are presented along with the on-line access system of the National Nuclear Data Center at Brookhaven National Laboratory.

  12. Log Truck-Weighing System

    NASA Technical Reports Server (NTRS)

    1977-01-01

    ELDEC Corp., Lynwood, Wash., built a weight-recording system for logging trucks based on electronic technology the company acquired as a subcontractor on space programs such as Apollo and the Saturn launch vehicle. ELDEC employed its space-derived expertise to develop a computerized weight-and-balance system for Lockheed's TriStar jetliner. ELDEC then adapted the airliner system to a similar product for logging trucks. Electronic equipment computes tractor weight, trailer weight and overall gross weight, and this information is presented to the driver by an instrument in the cab. The system costs $2,000 but it pays for itself in a single year. It allows operators to use a truck's hauling capacity more efficiently since the load can be maximized without exceeding legal weight limits for highway travel. Approximately 2,000 logging trucks now use the system.

  13. SU-E-T-325: The New Evaluation Method of the VMAT Plan Delivery Using Varian DynaLog Files and Modulation Complexity Score (MCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tateoka, K; Graduate School of Medicine, Sapporo Medical University, Sapporo, JP; Fujimomo, K

    2014-06-01

Purpose: The aim of this study is to evaluate the use of Varian DynaLog files to verify VMAT plan delivery and the modulation complexity score (MCS) of VMAT. Methods: Delivery accuracy of machine performance was quantified by multileaf collimator (MLC) position errors, gantry angle errors and fluence delivery accuracy for volumetric modulated arc therapy (VMAT). The relationship between machine performance and plan complexity was also investigated using the modulation complexity score (MCS). Planned and actual MLC positions, gantry angles and delivered fractions of monitor units were extracted from Varian DynaLog files. These factors were taken from the record-and-verify system of the MLC control file. Planned and delivered beam data were compared to determine leaf position errors and gantry angle errors. Analysis was also performed on planned and actual fluence maps reconstructed from the DynaLog files. This analysis was performed for all treatment fractions of 5 prostate VMAT plans. The analysis of DynaLog files was carried out by in-house programming in Visual C++. Results: The root mean squares of the leaf position and gantry angle errors were about 0.12 and 0.15, respectively. The gamma of planned and actual fluence maps at the 3%/3 mm criterion was about 99.21. The gamma of the leaf position errors was not directly related to plan complexity as determined by the MCS, whereas the gamma of the gantry angle errors was directly related to plan complexity. Conclusion: This study shows that Varian DynaLog files can be used to diagnose VMAT delivery errors not detectable with phantom-based quality assurance. Furthermore, the MCS of a VMAT plan can be used to evaluate delivery accuracy for patients receiving VMAT. Machine performance was found to be directly related to plan complexity, but this is not the dominant determinant of delivery accuracy.

  14. Geophysical log database for the Floridan aquifer system and southeastern Coastal Plain aquifer system in Florida and parts of Georgia, Alabama, and South Carolina

    USGS Publications Warehouse

    Williams, Lester J.; Raines, Jessica E.; Lanning, Amanda E.

    2013-04-04

A database of borehole geophysical logs and other types of data files was compiled as part of ongoing studies of water availability and assessment of brackish- and saline-water resources. The database contains 4,883 logs from 1,248 wells in Florida, Georgia, Alabama, South Carolina, and from a limited number of offshore wells of the eastern Gulf of Mexico and the Atlantic Ocean. The logs can be accessed through a download directory organized by state and county for onshore wells and in a single directory for the offshore wells. A flat file database is provided that lists the wells, their coordinates, and the file listings.

  15. Teaching an Old Log New Tricks with Machine Learning.

    PubMed

    Schnell, Krista; Puri, Colin; Mahler, Paul; Dukatz, Carl

    2014-03-01

To most people, the log file would not be considered an exciting area in technology today. However, these relatively benign, slowly growing data sources can drive large business transformations when combined with modern-day analytics. Accenture Technology Labs has built a new framework that helps to expand existing vendor solutions to create new methods of gaining insights from these benevolent information springs. This framework provides a systematic and effective machine-learning mechanism to understand, analyze, and visualize heterogeneous log files. These techniques enable an automated approach to analyzing log content in real time, learning relevant behaviors, and creating actionable insights applicable in traditionally reactive situations. Using this approach, companies can now tap into a wealth of knowledge residing in log file data that is currently being collected but underutilized because of its overwhelming variety and volume. By using log files as an important data input into the larger enterprise data supply chain, businesses have the opportunity to enhance their current operational log management solution and generate entirely new business insights: no longer limited to the realm of reactive IT management, but extending from proactive product improvement to defense from attacks. As we will discuss, this solution has immediate relevance in the telecommunications and security industries. However, the most forward-looking companies can take it even further. How? By thinking beyond the log file and applying the same machine-learning framework to other log file use cases (including logistics, social media, and consumer behavior) and any other transactional data source.

  16. A clinically observed discrepancy between image-based and log-based MLC positions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, Brian, E-mail: bpn2p@virginia.edu; Ahmed, Mahmoud; Kathuria, Kunal

    2016-06-15

Purpose: To present a clinical case in which real-time intratreatment imaging identified a multileaf collimator (MLC) leaf that consistently deviated from its programmed and logged position by >1 mm. Methods: An EPID-based exit-fluence dosimetry system designed to prevent gross delivery errors was used to capture cine images during treatment. The author serendipitously identified a suspected MLC leaf displacement that was not otherwise detected. The leaf position as recorded on the EPID images was measured, and log files were analyzed for the treatment in question, the prior day’s treatment, and for daily MLC test patterns acquired on those treatment days. Additional standard test patterns were used to quantify the leaf position. Results: Whereas the log file reported no difference between planned and recorded positions, image-based measurements showed the leaf to be 1.3 ± 0.1 mm medial from the planned position. This offset was confirmed with the test pattern irradiations. Conclusions: It has been clinically observed that log-file-derived leaf positions can differ from the actual positions by >1 mm, and therefore cannot be considered to be the actual leaf positions. This cautions against the use of log-based methods for MLC or patient quality assurance without independent confirmation of log integrity. Frequent verification of MLC positions through independent means is a necessary precondition to trusting log-file records. Intratreatment EPID imaging provides a method to capture departures from planned MLC positions.
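The cross-check this case argues for amounts to treating log-reported leaf positions as claims and flagging any leaf whose independently measured (image-based) position disagrees by more than a tolerance. A minimal sketch, with a hypothetical function name and a flat per-leaf data layout in millimeters:

```python
def find_discrepant_leaves(log_mm, image_mm, tol_mm=1.0):
    """Return (leaf_index, deviation) pairs where the image-based
    measurement disagrees with the log-reported position by > tol_mm.
    Deviation is signed: image position minus logged position."""
    return [
        (i, img - log)
        for i, (log, img) in enumerate(zip(log_mm, image_mm))
        if abs(img - log) > tol_mm
    ]
```

With the 1.3 mm offset described above, such a comparison would flag the affected leaf on every fraction even though the log alone reports zero error.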

  17. SU-F-T-469: A Clinically Observed Discrepancy Between Image-Based and Log- Based MLC Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, B; Ahmed, M; Siebers, J

    2016-06-15

Purpose: To present a clinical case which challenges the base assumption of log-file-based QA, by showing that the actual position of an MLC leaf can suddenly deviate from its programmed and logged position by >1 mm, as observed with real-time imaging. Methods: An EPID-based exit-fluence dosimetry system designed to prevent gross delivery errors was used in cine mode to capture portal images during treatment. Visual monitoring identified an anomalous MLC leaf-pair gap not otherwise detected by the automatic position verification. The position of the errant leaf was measured on EPID images, and log files were analyzed for the treatment in question, the prior day’s treatment, and for daily MLC test patterns acquired on those treatment days. Additional standard test patterns were used to quantify the leaf position. Results: Whereas the log file reported no difference between planned and recorded positions, image-based measurements showed the leaf to be 1.3±0.1 mm medial from the planned position. This offset was confirmed with the test pattern irradiations. Conclusion: It has been clinically observed that log-file-derived leaf positions can differ from their actual positions by >1 mm, and therefore cannot be considered to be the actual leaf positions. This cautions against the use of log-based methods for MLC or patient quality assurance without independent confirmation of log integrity. Frequent verification of MLC positions through independent means is a necessary precondition to trusting log-file records. Intra-treatment EPID imaging provides a method to capture departures from planned MLC positions. Work was supported in part by Varian Medical Systems.

  18. DSSTOX MASTER STRUCTURE-INDEX FILE: SDF FILE AND DOCUMENTATION

    EPA Science Inventory

    The DSSTox Master Structure-Index File serves to consolidate, manage, and ensure quality and uniformity of the chemical and substance information spanning all DSSTox Structure Data Files, including those in development but not yet published separately on this website.

  19. SU-E-T-144: Effective Analysis of VMAT QA Generated Trajectory Log Files for Medical Accelerator Predictive Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, CM; Baydush, AH; Nguyen, C

Purpose: To determine the effectiveness of SPC analysis for a model predictive maintenance process that uses accelerator-generated parameter and performance data contained in trajectory log files. Methods: Each trajectory file is decoded and a total of 131 axis positions are recorded (collimator jaw position, gantry angle, each MLC, etc.). This raw data is processed and either axis positions are extracted at critical points during the delivery or positional change over time is used to determine axis velocity. The focus of our analysis is the accuracy, reproducibility and fidelity of each axis. A reference positional trace of the gantry and each MLC is used as a motion baseline for cross correlation (CC) analysis. A total of 494 parameters (482 MLC related) were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and parameter/system specifications. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on TG-142 and published analysis of VMAT delivery accuracy. Results: All errors introduced were detected. Synthetic positional errors of 2 mm for the collimator jaw and MLC carriage exceeded the chart limits. Gantry speed and each MLC speed are analyzed at two different points in the delivery. Simulated gantry speed error (0.2 deg/sec) and MLC speed error (0.1 cm/sec) exceeded the speed chart limits. A gantry position error of 0.2 deg was detected by the CC maximum value charts. The MLC position error of 0.1 cm was detected by the CC maximum value location charts for every MLC. Conclusion: SPC I/MR evaluation of trajectory log file parameters may be effective in providing an early warning of performance degradation or component failure for medical accelerator systems.
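The I/MR analysis described above can be sketched with the standard SPC constants for individuals charts (2.66 for the I chart and 3.267 for the MR chart, both derived from d2 = 1.128 at subgroup size 2). This illustrative Python computes plain 3σ-style limits from a baseline series and flags later excursions; the hybridization with parameter/system specifications mentioned in the abstract is not reproduced, and all names are illustrative:

```python
def imr_limits(baseline):
    """Individual (I) and Moving-Range (MR) control limits for a series
    of individual measurements, e.g. a logged axis position sampled at
    a fixed control point on each delivery."""
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "I": (mean - 2.66 * mr_bar, mean + 2.66 * mr_bar),  # X-bar ± 2.66 * MR-bar
        "MR": (0.0, 3.267 * mr_bar),                        # MR UCL = 3.267 * MR-bar
    }

def out_of_control(values, limits):
    """Indices of points falling outside the I-chart limits."""
    lo, hi = limits["I"]
    return [i for i, v in enumerate(values) if v < lo or v > hi]
```

In a predictive-maintenance setting the baseline would come from known-good deliveries, and each new trajectory file would contribute one point per monitored parameter.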

  20. Adding Data Management Services to Parallel File Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, Scott

    2015-03-04

The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in (unstructured) file systems can only optimize to the extent that file system interfaces permit, and the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file

  1. SU-E-T-261: Plan Quality Assurance of VMAT Using Fluence Images Reconstituted From Log-Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsuta, Y; Shimizu, E; Matsunaga, K

    2014-06-01

Purpose: A successful VMAT plan delivery includes precise modulation of dose rate, gantry rotation and multi-leaf collimator (MLC) shapes. One of the main problems in plan quality assurance is that dosimetric errors associated with leaf-positional errors are difficult to analyze, because they vary with the MU delivered and the leaf number. In this study, we calculated an integrated fluence error image (IFEI) from log files and evaluated plan quality in the areas scanned by all and by individual MLC leaves. Methods: The log file reported the expected and actual positions for the inner 20 MLC leaves and the dose fraction every 0.25 seconds during prostate VMAT on an Elekta Synergy. These data were imported into in-house software developed to calculate expected and actual fluence images from the difference of opposing leaf trajectories and the dose fraction at each time. The IFEI was obtained by summing the absolute values of the differences between corresponding expected and actual fluence images. Results: In the area scanned by all MLC leaves, the average and root mean square (rms) of the IFEI were 2.5 and 3.6 MU, the fractions of the area with errors below 10, 5 and 3 MU were 98.5, 86.7 and 68.1%, and 95% of the area had an error of less than 7.1 MU. In the areas scanned by individual MLC leaves, the average and rms values were 2.1 – 3.0 and 3.1 – 4.0 MU, the fractions of the area with errors below 10, 5 and 3 MU were 97.6 – 99.5, 81.7 – 89.5 and 51.2 – 72.8%, and 95% of the area had an error of less than 6.6 – 8.2 MU. Conclusion: Analysis of the IFEI reconstituted from log files provided detailed information about the delivery in the areas scanned by all and by individual MLC leaves.
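The IFEI construction described above reduces to a pixel-wise sum over time of |expected − actual| fluence. A minimal pure-Python sketch, with nested lists of per-timepoint frames standing in for the fluence images and all names illustrative:

```python
def integrated_fluence_error(expected_frames, actual_frames):
    """Pixel-wise sum over all time frames of |expected - actual|,
    yielding the integrated fluence error image (values in MU)."""
    rows = len(expected_frames[0])
    cols = len(expected_frames[0][0])
    ifei = [[0.0] * cols for _ in range(rows)]
    for exp, act in zip(expected_frames, actual_frames):
        for r in range(rows):
            for c in range(cols):
                ifei[r][c] += abs(exp[r][c] - act[r][c])
    return ifei

def area_below(ifei, threshold_mu):
    """Fraction of the scanned area with integrated error below threshold."""
    cells = [v for row in ifei for v in row]
    return sum(v < threshold_mu for v in cells) / len(cells)
```

The per-region statistics reported in the Results (mean, rms, area fractions below 10/5/3 MU) are then simple reductions over the IFEI restricted to the pixels a given leaf scans.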

  2. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
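The pipeline the patent describes (obtain files from the parallel computing system, convert them to objects via a log-structured middleware process, hand the objects to an object store) can be caricatured as follows. Every name here is hypothetical and the real PLFS machinery is far richer; this is only a shape sketch of the decoupling idea:

```python
def files_to_objects(checkpoint_files):
    """Convert {path: bytes} checkpoint files into object records, each
    carrying a log-structured index entry (logical offset, length)."""
    objects = []
    offset = 0
    for path, data in sorted(checkpoint_files.items()):
        objects.append({
            "key": path.replace("/", "%2F"),  # flatten path into an object key
            "data": data,
            "index": {"logical_offset": offset, "length": len(data)},
        })
        offset += len(data)
    return objects

class InMemoryObjectStore:
    """Stand-in for a cloud object store's put/get interface."""
    def __init__(self):
        self._bucket = {}
    def put(self, obj):
        self._bucket[obj["key"]] = obj
    def get(self, key):
        return self._bucket[key]
```

The index entries are what let the middleware later reassemble the original byte stream, which is the same offset/length bookkeeping used for sparse-file storage in the related patent above.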

  3. Accessing files in an Internet: The Jade file system

    NASA Technical Reports Server (NTRS)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  4. Accessing files in an internet - The Jade file system

    NASA Technical Reports Server (NTRS)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  5. Stress wave sorting of red maple logs for structural quality

    Treesearch

    Xiping Wang; Robert J. Ross; David W. Green; Brian Brashaw; Karl Englund; Michael Wolcott

    2004-01-01

    Existing log grading procedures in the United States make only visual assessments of log quality. These procedures do not incorporate estimates of the modulus of elasticity (MOE) of logs. It is questionable whether the visual grading procedures currently used for logs adequately assess the potential quality of structural products manufactured from them, especially...

  6. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  7. A Prototype Implementation of a Time Interval File Protection System in Linux

    DTIC Science & Technology

    2006-09-01

when a user logs in, the /etc/passwd file is read by the system to get the user’s home directory. The user’s login shell then changes the directory...and don. • Users can be added with the command: # useradd -m <username> • Set the password by: # passwd <username> • Make a copy of the

  8. The design and implementation of an automated system for logging clinical experiences using an anesthesia information management system.

    PubMed

    Simpao, Allan; Heitz, James W; McNulty, Stephen E; Chekemian, Beth; Brenn, B Randall; Epstein, Richard H

    2011-02-01

Residents in anesthesia training programs throughout the world are required to document their clinical cases to help ensure that they receive adequate training. Current systems involve self-reporting, are subject to delayed updates and misreported data, and do not provide a practicable method of validation. Anesthesia information management systems (AIMS) are being used increasingly in training programs and are a logical source for verifiable documentation. We hypothesized that case logs generated automatically from an AIMS would be sufficiently accurate to replace the current manual process. We based our analysis on the data reporting requirements of the American College of Graduate Medical Education (ACGME). We conducted a systematic review of ACGME requirements and our AIMS record, and made modifications after identifying data element and attribution issues. We studied 2 methods (parsing of free text procedure descriptions and CPT4 procedure code mapping) to automatically determine ACGME case categories and generated AIMS-based case logs and compared these to assignments made by manual inspection of the anesthesia records. We also assessed under- and overreporting of cases entered manually by our residents into the ACGME website. The parsing and mapping methods assigned cases to a majority of the ACGME categories with accuracies of 95% and 97%, respectively, as compared with determinations made by 2 residents and 1 attending who manually reviewed all procedure descriptions. Comparison of AIMS-based case logs with reports from the ACGME Resident Case Log System website showed that >50% of residents either underreported or overreported their total case counts by at least 5%. The AIMS database is a source of contemporaneous documentation of resident experience that can be queried to generate valid, verifiable case logs. The extent of AIMS adoption by academic anesthesia departments should encourage accreditation organizations to support uploading of AIMS-based case
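The two classification methods studied (CPT4 code mapping, with free-text parsing as an alternative) can be sketched as a lookup with a keyword fallback. The category names, CPT codes, and keywords below are hypothetical placeholders, not the actual ACGME taxonomy:

```python
# Hypothetical mappings -- the real ACGME categories and CPT4 codes
# are not reproduced here.
CPT_TO_CATEGORY = {"00840": "intra-abdominal", "00562": "cardiac-with-pump"}
KEYWORDS = {"craniotomy": "intracerebral", "cesarean": "cesarean-delivery"}

def classify_case(cpt_code=None, procedure_text=""):
    """Assign an ACGME-style case category, preferring the CPT4 code
    mapping and falling back to parsing the free-text description."""
    if cpt_code in CPT_TO_CATEGORY:
        return CPT_TO_CATEGORY[cpt_code]
    text = procedure_text.lower()
    for keyword, category in KEYWORDS.items():
        if keyword in text:
            return category
    return "uncategorized"
```

The study's 95% vs. 97% accuracies suggest why the code mapping was slightly more reliable: structured codes avoid the spelling and phrasing variation that keyword parsing must absorb.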

  9. Well logging evaporative thermal protection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamers, M.D.; Martelli, V.P.

    1981-02-03

    An evaporative thermal protection system for use in hostile environment well logging applications, the system including a downhole thermal protection cartridge disposed within a well logging sonde or tool to keep a payload such as sensors and support electronics cool, the cartridge carrying either an active evaporative system for refrigeration or a passive evaporative system, both exhausting to the surface through an armored flexible fluidic communication mechanical cable.

  10. PDBToSDF: Create ligand structure files from PDB file.

    PubMed

    Muppalaneni, Naresh Babu; Rao, Allam Appa

    2011-01-01

Protein Data Bank (PDB) files contain atomic data for the protein and ligand in protein-ligand complexes. A structure data file (SDF) contains data for the atoms, bonds, connectivity and coordinates of a ligand molecule. We describe PDBToSDF, a tool to separate the ligand data from a PDB file for easy calculation of ligand properties such as molecular weight and the numbers of hydrogen bond acceptors and donors.
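The extraction step PDBToSDF performs rests on the PDB fixed-column format: ligand atoms live in HETATM records, with the residue name in columns 18-20, coordinates in columns 31-54, and the element symbol in columns 77-78. A hedged sketch of that first step plus a rough molecular-weight estimate (function names are illustrative; the atomic-weight table is abbreviated):

```python
ATOMIC_WEIGHTS = {"C": 12.011, "N": 14.007, "O": 15.999, "S": 32.06,
                  "P": 30.974, "H": 1.008}

def ligand_atoms(pdb_lines, res_name):
    """Return (element, x, y, z) tuples for HETATM records matching the
    three-letter ligand residue name, per the PDB fixed-column layout."""
    atoms = []
    for line in pdb_lines:
        if line.startswith("HETATM") and line[17:20].strip() == res_name:
            element = line[76:78].strip()
            x, y, z = (float(line[i:i + 8]) for i in (30, 38, 46))
            atoms.append((element, x, y, z))
    return atoms

def molecular_weight(atoms):
    """Approximate molecular weight from element symbols (unknowns count 0)."""
    return sum(ATOMIC_WEIGHTS.get(el, 0.0) for el, *_ in atoms)
```

Writing a true SDF would additionally require bond perception or a CONECT-record pass, which this sketch omits.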

  11. Log-Log Convexity of Type-Token Growth in Zipf's Systems

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Corral, Álvaro

    2015-06-01

    It is traditionally assumed that Zipf's law implies the power-law growth of the number of different elements with the total number of elements in a system—the so-called Heaps' law. We show that a careful definition of Zipf's law leads to the violation of Heaps' law in random systems, with growth curves that have a convex shape in log-log scale. These curves fulfill universal data collapse that only depends on the value of Zipf's exponent. We observe that real books behave very much in the same way as random systems, despite the presence of burstiness in word occurrence. We advance an explanation for this unexpected correspondence.
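The growth curves in question are easy to reproduce numerically: sample tokens with Zipf-distributed rank probabilities and record the number of distinct types seen after each token. A small illustrative simulation (parameters arbitrary; plotting the result on log-log axes would show the convex shape the paper describes):

```python
import random

def type_token_growth(n_tokens, vocab_size, exponent, seed=0):
    """Sample tokens with P(rank r) proportional to r**(-exponent), a
    Zipf-like law, and return the number of distinct types seen after
    each token (the type-token growth curve)."""
    rng = random.Random(seed)
    weights = [r ** -exponent for r in range(1, vocab_size + 1)]
    tokens = rng.choices(range(vocab_size), weights=weights, k=n_tokens)
    seen, growth = set(), []
    for t in tokens:
        seen.add(t)
        growth.append(len(seen))
    return growth
```

Because the vocabulary is finite here, the curve must eventually saturate; the paper's careful point concerns the shape of the pre-saturation regime under a properly defined Zipf's law.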

  12. A File Archival System

    NASA Technical Reports Server (NTRS)

    Fanselow, J. L.; Vavrus, J. L.

    1984-01-01

    ARCH, a file archival system for the DEC VAX, provides for easy offline storage and retrieval of arbitrary files on a DEC VAX system. The system is designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of the same programs and associated files.

  13. Please Move Inactive Files Off the /projects File System | High-Performance Computing | NREL

    Science.gov Websites

    January 11, 2018 - The /projects file system is a shared resource. This year this has created a space crunch - the file system is now about 90% full and we need your help

  14. Small-diameter log evaluation for value-added structural applications

    Treesearch

    Ronald Wolfe; Cassandra Moseley

    2000-01-01

    Three species of small-diameter logs from the Klamath/Siskiyou Mountains and the Cascade Range in southwest Oregon were tested for their potential for value-added structural applications. The logs were tested in bending and compression parallel to the grain. Strength and stiffness values were correlated to possible nondestructive evaluation grading parameters and...

  15. Dataset for forensic analysis of B-tree file system.

    PubMed

    Wani, Mohamad Ahtisham; Bhat, Wasim Ahmad

    2018-06-01

    Since the B-tree file system (Btrfs) is set to become the de facto standard file system on Linux (and Linux-based) operating systems, a Btrfs dataset for forensic analysis is of great interest and immense value to the forensic community. This article presents a novel dataset for forensic analysis of Btrfs that was collected using a proposed data-recovery procedure. The dataset identifies various generalized and common file system layouts and operations, specific node-balancing mechanisms triggered, logical addresses of various data structures, on-disk records, recovered data as directory entries and extent data from leaf and internal nodes, and the percentage of data recovered.

  16. Aero/fluids database system

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Violett, Duane L., Jr.

    1991-01-01

    The AFAS Database System was developed to provide the basic structure of a comprehensive database system for the Marshall Space Flight Center (MSFC) Structures and Dynamics Laboratory Aerophysics Division. The system is intended to handle all of the Aerophysics Division Test Facilities as well as data from other sources. The system was written for the DEC VAX family of computers in FORTRAN-77 and utilizes the VMS indexed file system and screen management routines. Various aspects of the system are covered, including a description of the user interface, lists of all code structure elements, descriptions of the file structures, a description of the security system operation, a detailed description of the data retrieval tasks, a description of the session log, and a description of the archival system.

  17. Dynamic Non-Hierarchical File Systems for Exascale Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Darrell E.; Miller, Ethan L

    search appliances. These search applications are often optimized for a single file system, making it difficult to move files and their metadata between file systems. Users have tried to solve this problem in several ways, including the use of separate databases to index file properties, the encoding of file properties into file names, and separately gathering and managing provenance data, but none of these approaches has worked well, either due to limited usefulness or scalability, or both. Our research addressed several key issues: High-performance, real-time metadata harvesting: extracting important attributes from files dynamically and immediately updating indexes used to improve search; Transparent, automatic, and secure provenance capture: recording the data inputs and processing steps used in the production of each file in the system; Scalable indexing: indexes that are optimized for integration with the file system; Dynamic file system structure: our approach provides dynamic directories similar to those in semantic file systems, but these are the native organization rather than a feature grafted onto a conventional system. In addition to these goals, our research effort will include evaluating the impact of new storage technologies on the file system design and performance. In particular, the indexing and metadata harvesting functions can potentially benefit from the performance improvements promised by new storage class memories.
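    The metadata-harvesting idea described above can be illustrated with a toy attribute index (a sketch only: the attribute choices and the names harvest and AttributeIndex are hypothetical, not part of the project's code, and a real harvester would run inside the file system and index far richer attributes):

```python
import os
from collections import defaultdict

def harvest(path):
    """Extract a few searchable attributes from one file (hypothetical choices)."""
    st = os.stat(path)
    return {
        "ext": os.path.splitext(path)[1].lstrip("."),
        "size_bucket": "large" if st.st_size > 1_000_000 else "small",
        "owner_uid": st.st_uid,
    }

class AttributeIndex:
    """Inverted index mapping (attribute, value) -> set of paths."""
    def __init__(self):
        self.index = defaultdict(set)

    def update(self, path):
        # Called whenever a file is created or modified.
        for key, value in harvest(path).items():
            self.index[(key, value)].add(path)

    def search(self, key, value):
        return self.index.get((key, value), set())
```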

  18. 47 CFR 76.1706 - Signal leakage logs and repair records.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the probable cause of the leakage. The log shall be kept on file for a period of two years and shall... 47 Telecommunication 4 2010-10-01 2010-10-01 false Signal leakage logs and repair records. 76.1706... leakage logs and repair records. Cable operators shall maintain a log showing the date and location of...

  19. 47 CFR 76.1706 - Signal leakage logs and repair records.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the probable cause of the leakage. The log shall be kept on file for a period of two years and shall... 47 Telecommunication 4 2011-10-01 2011-10-01 false Signal leakage logs and repair records. 76.1706... leakage logs and repair records. Cable operators shall maintain a log showing the date and location of...

  20. 17 CFR 274.402 - Form ID, uniform application for access codes to file on EDGAR.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... for access codes to file on EDGAR. 274.402 Section 274.402 Commodity and Securities Exchanges... Forms for Electronic Filing § 274.402 Form ID, uniform application for access codes to file on EDGAR..., filing agent or training agent to log on to the EDGAR system, submit filings, and change its CCC. (d...

  1. 17 CFR 274.402 - Form ID, uniform application for access codes to file on EDGAR.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... for access codes to file on EDGAR. 274.402 Section 274.402 Commodity and Securities Exchanges... Forms for Electronic Filing § 274.402 Form ID, uniform application for access codes to file on EDGAR..., filing agent or training agent to log on to the EDGAR system, submit filings, and change its CCC. (d...

  2. The Added Value of Log File Analyses of the Use of a Personal Health Record for Patients With Type 2 Diabetes Mellitus

    PubMed Central

    Kelders, Saskia M.; Braakman-Jansen, Louise M. A.; van Gemert-Pijnen, Julia E. W. C.

    2014-01-01

    The electronic personal health record (PHR) is a promising technology for improving the quality of chronic disease management. Until now, evaluations of such systems have provided little insight into why a particular outcome occurred. The aim of this study is to gain insight into the navigation process (what functionalities are used, and in what sequence) of e-Vita, a PHR for patients with type 2 diabetes mellitus (T2DM), to increase the efficiency of the system and improve long-term adherence. Log data of the first visits in the first 6 weeks after the release of a renewed version of e-Vita were analyzed to identify the usage patterns that emerge when users explore a new application. After receiving the invitation, 28% of all registered users visited e-Vita. In total, 70 unique usage patterns could be identified. When users visited the education service first, 93% of all users ended their session. Most users visited either 1 or 5 or more services during their first session, but the distribution of the routes was diffuse. In conclusion, log file analyses can provide valuable prompts for improving the system design of a PHR. In this way, both the match between the system and its users and long-term adherence have the potential to increase. PMID:24876574
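    The kind of navigation-pattern extraction used in such log file analyses can be sketched as follows (a hypothetical illustration: the event format and function name are assumptions, and a real analysis would additionally split a user's events into sessions on long time gaps):

```python
from collections import Counter, defaultdict

def first_session_patterns(events):
    """events: (user, timestamp, service) triples. Returns a Counter of each
    user's navigation sequence, taking all of a user's events in time order
    as one session for simplicity."""
    by_user = defaultdict(list)
    for user, ts, service in events:
        by_user[user].append((ts, service))
    patterns = Counter()
    for visits in by_user.values():
        visits.sort()
        patterns[tuple(service for _, service in visits)] += 1
    return patterns
```

Counting how many users share each sequence is what surfaces findings like "users who visit the education service first tend to end their session".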

  3. The Jade File System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rao, Herman Chung-Hwa

    1991-01-01

    File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its

  4. Detection of Anomalous Insiders in Collaborative Environments via Relational Analysis of Access Logs

    PubMed Central

    Chen, You; Malin, Bradley

    2014-01-01

    Collaborative information systems (CIS) are deployed within a diverse array of environments, ranging from the Internet to intelligence agencies to healthcare. It is increasingly the case that such systems are applied to manage sensitive information, making them targets for malicious insiders. While sophisticated security mechanisms have been developed to detect insider threats in various file systems, they are neither designed to model nor to monitor collaborative environments in which users function in dynamic teams with complex behavior. In this paper, we introduce a community-based anomaly detection system (CADS), an unsupervised learning framework to detect insider threats based on information recorded in the access logs of collaborative environments. CADS is based on the observation that typical users tend to form community structures, such that users with low affinity to such communities are indicative of anomalous and potentially illicit behavior. The model consists of two primary components: relational pattern extraction and anomaly detection. For relational pattern extraction, CADS infers community structures from CIS access logs, and subsequently derives communities, which serve as the CADS pattern core. CADS then uses a formal statistical model to measure the deviation of users from the inferred communities to predict which users are anomalies. To empirically evaluate the threat detection model, we perform an analysis with six months of access logs from a real electronic health record system in a large medical center, as well as a publicly-available dataset for replication purposes. The results illustrate that CADS can distinguish simulated anomalous users in the context of real user behavior with a high degree of certainty and with significant performance gains in comparison to several competing anomaly detection models. PMID:25485309
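    The core intuition, scoring users by their lack of affinity to any community of similar users, can be sketched with a nearest-neighbour set-similarity measure (a simplified stand-in for the paper's formal statistical model; all names and the Jaccard choice here are hypothetical):

```python
from collections import defaultdict

def user_vectors(access_log):
    """access_log: iterable of (user, record) pairs -> user -> set of records."""
    vecs = defaultdict(set)
    for user, record in access_log:
        vecs[user].add(record)
    return vecs

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def anomaly_scores(access_log, k=2):
    """Score each user by 1 minus the mean similarity to their k most
    similar peers; users with little affinity to any community score high."""
    vecs = user_vectors(access_log)
    scores = {}
    for u, vu in vecs.items():
        sims = sorted((jaccard(vu, vv) for v, vv in vecs.items() if v != u),
                      reverse=True)
        top = sims[:k] or [0.0]
        scores[u] = 1.0 - sum(top) / len(top)
    return scores
```

A user whose accessed records overlap with no other user's scores 1.0, while members of a tight-knit team score near 0.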

  5. 17 CFR 239.63 - Form ID, uniform application for access codes to file on EDGAR.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... for access codes to file on EDGAR. 239.63 Section 239.63 Commodity and Securities Exchanges SECURITIES... Statements § 239.63 Form ID, uniform application for access codes to file on EDGAR. Form ID must be filed by... log on to the EDGAR system, submit filings, and change its CCC. (d) Password Modification...

  6. 17 CFR 239.63 - Form ID, uniform application for access codes to file on EDGAR.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... for access codes to file on EDGAR. 239.63 Section 239.63 Commodity and Securities Exchanges SECURITIES... Statements § 239.63 Form ID, uniform application for access codes to file on EDGAR. Form ID must be filed by... log on to the EDGAR system, submit filings, and change its CCC. (d) Password Modification...

  7. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; O'Keefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility so that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
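    The device-maintained locking that GFS relies on for atomic read-modify-write can be mimicked in a single process (a toy sketch only; the real design puts lock primitives on the storage devices themselves, accessed over the storage network, not in-process locks):

```python
import threading

class SharedDevice:
    """Toy stand-in for a storage device that exports blocks plus a lock
    primitive, so that clients can perform atomic read-modify-write."""
    def __init__(self):
        self.blocks = {}
        self._locks = {}                     # block id -> device-held lock
        self._guard = threading.Lock()

    def lock(self, block_id):
        with self._guard:
            lk = self._locks.setdefault(block_id, threading.Lock())
        lk.acquire()

    def unlock(self, block_id):
        self._locks[block_id].release()

def atomic_increment(dev, block_id):
    """Read-modify-write a counter block under the device-held lock."""
    dev.lock(block_id)
    try:
        dev.blocks[block_id] = dev.blocks.get(block_id, 0) + 1
    finally:
        dev.unlock(block_id)
```

Because the lock lives with the data rather than on a central server, any number of clients can coordinate without a file-server bottleneck, which is the architectural point of the design.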

  8. Singular and interactive effects of blowdown, salvage logging, and wildfire in sub-boreal pine systems

    USGS Publications Warehouse

    D'Amato, A.W.; Fraver, S.; Palik, B.J.; Bradford, J.B.; Patty, L.

    2011-01-01

    The role of disturbance in structuring vegetation is widely recognized; however, we are only beginning to understand the effects of multiple interacting disturbances on ecosystem recovery and development. Of particular interest is the impact of post-disturbance management interventions, particularly in light of the global controversy surrounding the effects of salvage logging on forest ecosystem recovery. Studies of salvage logging impacts have focused on the effects of post-disturbance salvage logging within the context of a single natural disturbance event. There have been no formal evaluations of how these effects may differ when followed in short sequence by a second, high severity natural disturbance. To evaluate the impact of this management practice within the context of multiple disturbances, we examined the structural and woody plant community responses of sub-boreal Pinus banksiana systems to a rapid sequence of disturbances. Specifically, we compared responses to Blowdown (B), Fire (F), Blowdown-Fire, and Blowdown-Salvage-Fire (BSF) and compared these to undisturbed control (C) stands. Comparisons between BF and BSF indicated that the primary effect of salvage logging was a decrease in the abundance of structural legacies, such as downed woody debris and snags. Both of these compound disturbance sequences (BF and BSF), resulted in similar woody plant communities, largely dominated by Populus tremuloides; however, there was greater homogeneity in community composition in salvage logged areas. Areas experiencing solely fire (F stands) were dominated by P. banksiana regeneration, and blowdown areas (B stands) were largely characterized by regeneration from shade tolerant conifer species. Our results suggest that salvage logging impacts on woody plant communities are diminished when followed by a second high severity disturbance; however, impacts on structural legacies persist. 
Provisions for the retention of snags, downed logs, and surviving trees as part

  9. Technoeconomic analysis of conventional logging systems operating from stump to landing

    Treesearch

    Raymond L. Sarles; William G. Luppold

    1986-01-01

    Analyzes technical and economic factors for six conventional logging systems suitable for operation in eastern forests. Discusses financial risks and business implications for loggers investing in high-production, state-of-the-art logging systems. Provides logging contractors with information useful as a preliminary guide for selection of equipment and systems....

  10. Deceit: A flexible distributed file system

    NASA Technical Reports Server (NTRS)

    Siegel, Alex; Birman, Kenneth; Marzullo, Keith

    1989-01-01

    Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.

  11. Use of treatment log files in spot scanning proton therapy as part of patient-specific quality assurance

    PubMed Central

    Li, Heng; Sahoo, Narayan; Poenisch, Falk; Suzuki, Kazumichi; Li, Yupeng; Li, Xiaoqiang; Zhang, Xiaodong; Lee, Andrew K.; Gillin, Michael T.; Zhu, X. Ronald

    2013-01-01

    to the patient based on the plan and recorded data was within 2%. Conclusions: The authors have shown that the treatment log file in a spot scanning proton beam delivery system is precise enough to serve as a quality assurance tool to monitor variation in spot position and MU value, as well as the delivered dose uncertainty from the treatment delivery system. The analysis tool developed here could be useful for assessing spot position uncertainty and thus dose uncertainty for any patient receiving spot scanning proton beam therapy. PMID:23387726

  12. Development of Cross-Platform Software for Well Logging Data Visualization

    NASA Astrophysics Data System (ADS)

    Akhmadulin, R. K.; Miraev, A. I.

    2017-07-01

    Well logging data processing is one of the main sources of information in oil and gas field analysis and is of great importance in the process of field development and operation. Therefore, it is important to have software that accurately and clearly provides the user with processed data in the form of well logs. In this work, a software product has been developed that not only has the basic functionality for this task (loading data from .las files, displaying well log curves, etc.) but can also be run on different operating systems and devices. The article performs a subject field analysis and task formulation, and considers the software design stage. At the end of the work the resulting software product's interface is described.
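    Loading curve data from a .las file, the basic functionality mentioned above, can be sketched with a minimal LAS 2.0 parser (an illustrative sketch, not the article's code; production tools must also handle wrapped data lines, null values, and other section variants):

```python
def read_las_curves(text):
    """Parse curve mnemonics (~C section) and numeric data rows (~A section)
    from LAS 2.0 text."""
    curves, rows, section = [], [], None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):      # skip blanks and comments
            continue
        if line.startswith("~"):                  # section header, e.g. ~Curve
            section = line[1].upper()
            continue
        if section == "C":
            # "DEPT.M : depth" -> mnemonic is the part before the first dot
            curves.append(line.split(".", 1)[0].strip())
        elif section == "A":
            rows.append([float(v) for v in line.split()])
    return curves, rows
```

Each returned row lines up positionally with the curve mnemonics, which is all a plotting layer needs to draw the well log curves.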

  13. DIY Soundcard Based Temperature Logging System. Part II: Applications

    ERIC Educational Resources Information Center

    Nunn, John

    2016-01-01

    This paper demonstrates some simple applications of how temperature logging systems may be used to monitor simple heat experiments, and how the data obtained can be analysed to get some additional insight into the physical processes. [For "DIY Soundcard Based Temperature Logging System. Part I: Design," see EJ1114124.]

  14. Effects of scale and logging on landscape structure in a forest mosaic.

    PubMed

    Leimgruber, P; McShea, W J; Schnell, G D

    2002-03-01

    Landscape structure in a forest mosaic changes with spatial scale (i.e. spatial extent) and thresholds may occur where structure changes markedly. Forest management alters landscape structure and may affect the intensity and location of thresholds. Our purpose was to examine landscape structure at different scales to determine thresholds where landscape structure changes markedly in managed forest mosaics of the Appalachian Mountains in the eastern United States. We also investigated how logging influences landscape structure and whether these management activities change threshold values. Using threshold and autocorrelation analyses, we found that thresholds in landscape indices exist at 400, 500, and 800 m intervals from the outer edge of management units in our study region. For landscape indices that consider all landcover categories, such as dominance and contagion, landscape structure and thresholds did not change after logging occurred. Measurements for these overall landscape indices were strongly influenced by midsuccessional deciduous forest, the most common landcover category in the landscape. When restricting analyses for mean patch size and percent cover to individual forest types, thresholds for early-successional forests changed after logging. However, logging changed the landscape structure at small spatial scale, but did not alter the structure of the entire forest mosaic. Previous forest management may already have increased the heterogeneity of the landscape beyond the point where additional small cuts alter the overall structure of the forest. Because measurements for landscape indices yield very different results at different spatial scales, it is important first to identify thresholds in order to determine the appropriate scales for landscape ecological studies. We found that threshold and autocorrelation analyses were simple but powerful tools for the detection of appropriate scales in the managed forest mosaic under study.

  15. Selective logging: does the imprint remain on tree structure and composition after 45 years?

    PubMed

    Osazuwa-Peters, Oyomoare L; Chapman, Colin A; Zanne, Amy E

    2015-01-01

    Selective logging of tropical forests is increasing in extent and intensity. The duration over which impacts of selective logging persist, however, remains an unresolved question, particularly for African forests. Here, we investigate the extent to which a past selective logging event continues to leave its imprint on different components of an East African forest 45 years later. We inventoried 2358 stems ≥10 cm in diameter in 26 plots (200 m × 10 m) within a 5.2 ha area in Kibale National Park, Uganda, in logged and unlogged forest. In these surveys, we characterized the forest light environment, taxonomic composition, functional trait composition using three traits (wood density, maximum height and maximum diameter) and forest structure based on three measures (stem density, total basal area and total above-ground biomass). In comparison to unlogged forests, selectively logged forest plots in Kibale National Park on average had higher light levels, different structure characterized by lower stem density, lower total basal area and lower above-ground biomass, and a distinct taxonomic composition driven primarily by changes in the relative abundance of species. Conversely, selectively logged forest plots were like unlogged plots in functional composition, having similar community-weighted mean values for wood density, maximum height and maximum diameter. This similarity in functional composition irrespective of logging history may be due to functional recovery of logged forest or background changes in functional attributes of unlogged forest. Despite the passage of 45 years, the legacy of selective logging on the tree community in Kibale National Park is still evident, as indicated by distinct taxonomic and structural composition and reduced carbon storage in logged forest compared with unlogged forest. The effects of selective logging are exerted via influences on tree demography rather than functional trait composition.

  16. Selective logging: does the imprint remain on tree structure and composition after 45 years?

    PubMed Central

    Osazuwa-Peters, Oyomoare L.; Chapman, Colin A.; Zanne, Amy E.

    2015-01-01

    Selective logging of tropical forests is increasing in extent and intensity. The duration over which impacts of selective logging persist, however, remains an unresolved question, particularly for African forests. Here, we investigate the extent to which a past selective logging event continues to leave its imprint on different components of an East African forest 45 years later. We inventoried 2358 stems ≥10 cm in diameter in 26 plots (200 m × 10 m) within a 5.2 ha area in Kibale National Park, Uganda, in logged and unlogged forest. In these surveys, we characterized the forest light environment, taxonomic composition, functional trait composition using three traits (wood density, maximum height and maximum diameter) and forest structure based on three measures (stem density, total basal area and total above-ground biomass). In comparison to unlogged forests, selectively logged forest plots in Kibale National Park on average had higher light levels, different structure characterized by lower stem density, lower total basal area and lower above-ground biomass, and a distinct taxonomic composition driven primarily by changes in the relative abundance of species. Conversely, selectively logged forest plots were like unlogged plots in functional composition, having similar community-weighted mean values for wood density, maximum height and maximum diameter. This similarity in functional composition irrespective of logging history may be due to functional recovery of logged forest or background changes in functional attributes of unlogged forest. Despite the passage of 45 years, the legacy of selective logging on the tree community in Kibale National Park is still evident, as indicated by distinct taxonomic and structural composition and reduced carbon storage in logged forest compared with unlogged forest. The effects of selective logging are exerted via influences on tree demography rather than functional trait composition. PMID:27293697

  17. Virtual file system for PSDS

    NASA Technical Reports Server (NTRS)

    Runnels, Tyson D.

    1993-01-01

    This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper, protected from disaster, and accumulative to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file migration system using optical disk cartridges. Files are migrated from high-performance media to lower performance optical media based on a least-frequency-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.
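    A least-frequently-used migration policy like the one described can be sketched in a few lines (hypothetical names and units; the real system tracked cartridge statistics in a database and migrated files between high-performance media and removable optical cartridges):

```python
class MigratingStore:
    """Least-frequently-used migration between a fast tier and an optical
    tier: when the fast tier exceeds capacity, the files with the fewest
    recorded accesses are moved out (sizes in arbitrary units)."""
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = {}       # name -> size, high-performance media
        self.optical = {}    # name -> size, optical cartridges
        self.hits = {}       # name -> access count

    def access(self, name, size):
        if name in self.optical:                  # recall from optical tier
            self.fast[name] = self.optical.pop(name)
        self.fast.setdefault(name, size)
        self.hits[name] = self.hits.get(name, 0) + 1
        self._migrate()

    def _migrate(self):
        # Evict least-frequently-used files until the fast tier fits,
        # never emptying it entirely.
        while sum(self.fast.values()) > self.fast_capacity and len(self.fast) > 1:
            victim = min(self.fast, key=lambda n: self.hits[n])
            self.optical[victim] = self.fast.pop(victim)
```

The two tiers together present one namespace to callers, which is the "virtual file system" transparency the case study describes.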

  18. A Multi-temporal Analysis of Logging Impacts on Tropical Forest Structure Using Airborne Lidar Data

    NASA Astrophysics Data System (ADS)

    Keller, M. M.; Pinagé, E. R.; Duffy, P.; Longo, M.; dos-Santos, M. N.; Leitold, V.; Morton, D. C.

    2017-12-01

    The long-term impacts of selective logging on carbon cycling and ecosystem function in tropical forests are still uncertain. Despite improvements in selective logging detection using satellite data, quantifying changes in forest structure from logging and recovery following logging is difficult using orbital data. We analyzed the dynamics of forest structure comparing logged and unlogged forests in the Eastern Brazilian Amazon (Paragominas Municipality, Pará State) using small footprint discrete return airborne lidar data acquired in 2012 and 2014. Logging operations were conducted at the 1200 ha study site from 2006 through 2013 using reduced impact logging techniques, management practices that minimize canopy and ground damage compared to more common conventional logging. Nevertheless, logging still reduced aboveground biomass by 10% to 20% in logged areas compared to intact forests. We aggregated lidar point-cloud data at spatial scales ranging from 50 m to 250 m and developed a binomial classification model based on the height distribution of lidar returns in 2012 and validated the model against the 2014 lidar acquisition. We accurately classified intact and logged forest classes compared with field data. Classification performance improved as spatial resolution increased (AUC = 0.974 at 250 m). We analyzed the differences in canopy gaps, understory damage (based on a relative density model), and biomass (estimated from total canopy height) of intact and logged classes. As expected, logging greatly increased both canopy gap formation and understory damage. However, while the area identified as canopy gap persisted for at least 8 years (from the oldest logging treatments in 2006 to the most recent lidar acquisition in 2014), the effects of ground damage were mostly erased by vigorous understory regrowth after about 5 years. The rate of new gap formation was 6 to 7 times greater in recently logged forests compared to undisturbed forests. New gaps opened at a

  19. Test-bench system for a borehole azimuthal acoustic reflection imaging logging tool

    NASA Astrophysics Data System (ADS)

    Liu, Xianping; Ju, Xiaodong; Qiao, Wenxiao; Lu, Junqiang; Men, Baiyong; Liu, Dong

    2016-06-01

    The borehole azimuthal acoustic reflection imaging logging tool (BAAR) is a new generation of imaging logging tool that can investigate strata in a relatively large range of space around the borehole. The BAAR is designed based on the idea of modularization with a very complex structure, so a dedicated test-bench system became urgently needed to debug each module of the BAAR. With the help of the test-bench system introduced in this paper, test and calibration of BAAR can be easily achieved. The test-bench system is designed based on the client/server model. The hardware system mainly consists of a host computer, an embedded controlling board, a bus interface board, a data acquisition board and a telemetry communication board. The host computer serves as the human-machine interface and processes the uploaded data. The software running on the host computer is designed based on VC++. The embedded controlling board uses an ARM7 as the microcontroller and communicates with the host computer via Ethernet. The software for the embedded controlling board is developed based on the operating system uClinux. The bus interface board, data acquisition board and telemetry communication board are designed based on a field programmable gate array (FPGA) and provide test interfaces for the logging tool. To examine the feasibility of the test-bench system, it was set up to perform a test on BAAR. Analysis of the test results revealed a faulty channel in the electronic receiving cabin. It is suggested that the test-bench system can be used to quickly determine the working condition of sub-modules of BAAR and that it is of great significance in improving production efficiency and accelerating industrial production of the logging tool.

  20. A Filing System for Medical Literature

    PubMed Central

    Cumming, Millie

    1988-01-01

    The author reviews the types of systems available for personal literature files and makes specific recommendations for filing systems for family physicians. A personal filing system can be an integral part of family practice, and need not require time out of proportion to the worth of the system. Because it is a personal system, different types will suit different users; some systems, however, are more reliable than others for use in family practice. (Can Fam Physician 1988; 34:425-433.) PMID:21253062

  1. Design and Evaluation of Log-To-Dimension Manufacturing Systems Using System Simulation

    Treesearch

    Wenjie Lin; D. Earl Kline; Philip A. Araman; Janice K. Wiedenbeck

    1995-01-01

    In a recent study of alternative dimension manufacturing systems that produce green hardwood dimension directly from logs, it was observed that for Grade 2 and 3 red oak logs, up to 78 and 76 percent, respectively, of the log scale volume could be converted into clear dimension parts. These potentially high yields suggest that this processing system can be a promising technique for...

  2. Landscape-scale changes in forest canopy structure across a partially logged tropical peat swamp

    NASA Astrophysics Data System (ADS)

    Wedeux, B. M. M.; Coomes, D. A.

    2015-11-01

    Forest canopy structure is strongly influenced by environmental factors and disturbance, and in turn influences key ecosystem processes including productivity, evapotranspiration and habitat availability. In tropical forests increasingly modified by human activities, the interplay between environmental factors and disturbance legacies on forest canopy structure across landscapes is practically unexplored. We used airborne laser scanning (ALS) data to measure the canopy of old-growth and selectively logged peat swamp forest across a peat dome in Central Kalimantan, Indonesia, and quantified how canopy structure metrics varied with peat depth and under logging. Several million canopy gaps in different height cross-sections of the canopy were measured in 100 plots of 1 km2 spanning the peat dome, allowing us to describe canopy structure with seven metrics. Old-growth forest became shorter and had simpler vertical canopy profiles on deeper peat, consistent with previous work linking deep peat to stunted tree growth. Gap size frequency distributions (GSFDs) indicated fewer and smaller canopy gaps on the deeper peat (i.e. the scaling exponent of Pareto functions increased from 1.76 to 3.76 with peat depth). Areas subjected to concessionary logging until 2000, and illegal logging since then, had the same canopy top height as old-growth forest, indicating the persistence of some large trees, but mean canopy height was significantly reduced. With logging, the total area of canopy gaps increased and the GSFD scaling exponent was reduced. Logging effects were most evident on the deepest peat, where nutrient depletion and waterlogged conditions restrain tree growth and recovery. A tight relationship exists between canopy structure and peat depth gradient within the old-growth tropical peat swamp forest. This relationship breaks down after selective logging, with canopy structural recovery, as observed by ALS, modulated by environmental conditions. These findings improve our
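
The scaling exponents reported above (1.76 to 3.76) describe how steeply the gap size frequency distribution falls off. As a minimal illustration, a Pareto (power-law) exponent of this kind can be estimated from a sample of gap areas with the standard maximum-likelihood estimator; the function name and the choice of a continuous Pareto model are assumptions for the sketch, not details from the paper.

```python
import math

def pareto_scaling_exponent(gap_areas, a_min):
    """Maximum-likelihood estimate of a Pareto (power-law) scaling
    exponent for a gap-size frequency distribution.

    Uses the standard continuous-Pareto MLE:
        lambda = 1 + n / sum(ln(a_i / a_min))
    where a_min is the smallest gap area included in the fit.
    """
    tail = [a for a in gap_areas if a >= a_min]
    if not tail:
        raise ValueError("no gaps at or above a_min")
    log_sum = sum(math.log(a / a_min) for a in tail)
    return 1.0 + len(tail) / log_sum
```

A larger estimated exponent means small gaps dominate and large gaps are rare, matching the paper's finding of fewer, smaller gaps on deeper peat.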

  3. DMFS: A Data Migration File System for NetBSD

    NASA Technical Reports Server (NTRS)

    Studenmund, William

    1999-01-01

    I have recently developed dmfs, a Data Migration File System, for NetBSD. This file system is based on the overlay file system, which is discussed in a separate paper, and provides kernel support for the data migration system being developed by my research group here at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal meta data in a flat file, which resides on a separate file system. Our data migration system provides archiving and file migration services. System utilities scan the dmfs file system for recently modified files, and archive them to two separate tape stores. Once a file has been doubly archived, files larger than a specified size will be truncated to that size, potentially freeing up large amounts of the underlying file store. Some sites will choose to retain none of the file (deleting its contents entirely from the file system) while others may choose to retain a portion, for instance a preamble describing the remainder of the file. The dmfs layer coordinates access to the file, retaining user-perceived access and modification times, file size, and restricting access to partially migrated files to the portion actually resident. When a user process attempts to read from the non-resident portion of a file, it is blocked and the dmfs layer sends a request to a system daemon to restore the file. As more of the file becomes resident, the user process is permitted to begin accessing the now-resident portions of the file. For simplicity, our data migration system divides a file into two portions, a resident portion followed by an optional non-resident portion. Also, a file is in one of three states: fully resident, fully resident and archived, and (partially) non-resident and archived. For a file which is only partially resident, any attempt to write or truncate the file, or to read a non-resident portion, will trigger a file restoration
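
The three file states and the restore-on-read rule described above can be sketched as a small state machine. This is an illustrative model only, assuming a restore callback standing in for the system daemon; the names and structure are hypothetical, not dmfs internals.

```python
# Hypothetical sketch of the three dmfs file states and the rule that
# a read touching the non-resident tail triggers a restore request.
RESIDENT, RESIDENT_ARCHIVED, PARTIAL_ARCHIVED = range(3)

class MigratedFile:
    def __init__(self, size, resident_bytes, state):
        self.size = size                    # user-perceived file size
        self.resident_bytes = resident_bytes  # resident leading portion
        self.state = state

    def read(self, offset, length, restore):
        """Serve a read; when it touches the non-resident tail, ask the
        daemon (modelled by restore()) to bring the file back first."""
        if (self.state == PARTIAL_ARCHIVED
                and offset + length > self.resident_bytes):
            restore(self)                   # daemon restores the file
            self.resident_bytes = self.size
            self.state = RESIDENT_ARCHIVED
        return ("data", offset, length)
```

Reads confined to the resident preamble proceed without any restoration, which is exactly why sites may choose to retain a descriptive preamble.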

  4. Landscape-scale changes in forest canopy structure across a partially logged tropical peat swamp

    NASA Astrophysics Data System (ADS)

    Wedeux, B. M. M.; Coomes, D. A.

    2015-07-01

    Forest canopy structure is strongly influenced by environmental factors and disturbance, and in turn influences key ecosystem processes including productivity, evapotranspiration and habitat availability. In tropical forests increasingly modified by human activities, the interplay between environmental factors and disturbance legacies on forest canopy structure across landscapes is practically unexplored. We used high-fidelity airborne laser scanning (ALS) data to measure the canopy of old-growth and selectively logged peat swamp forest across a peat dome in Central Kalimantan, Indonesia, and quantified how canopy structure metrics varied with peat depth and under logging. Several million canopy gaps in different height cross-sections of the canopy were measured in 100 plots of 1 km2 spanning the peat dome, allowing us to describe canopy structure with seven metrics. Old-growth forest became shorter and had simpler vertical canopy profiles on deeper peat, consistent with previous work linking deep peat to stunted tree growth. Gap size frequency distributions (GSFDs) indicated fewer and smaller canopy gaps on the deeper peat (i.e. the scaling exponent of Pareto functions increased from 1.76 to 3.76 with peat depth). Areas subjected to concessionary logging until 2000, and informal logging since then, had the same canopy top height as old-growth forest, indicating the persistence of some large trees, but mean canopy height was significantly reduced; the total area of canopy gaps increased and the GSFD scaling exponent was reduced. Logging effects were most evident on the deepest peat, where nutrient depletion and waterlogged conditions restrain tree growth and recovery. A tight relationship exists between canopy structure and the peat depth gradient within the old-growth tropical peat swamp. This relationship breaks down after selective logging, with canopy structural recovery being modulated by environmental conditions.

  5. MAIL LOG, program theory, volume 2

    NASA Technical Reports Server (NTRS)

    Harris, D. K.

    1979-01-01

    Information relevant to the MAIL LOG program theory is documented. The L-files for mail correspondence, design information release/report, and the drawing/engineering order are given. In addition, sources for miscellaneous external routines and special support routines are documented along with a glossary of terms.

  6. Apically extruded dentin debris by reciprocating single-file and multi-file rotary system.

    PubMed

    De-Deus, Gustavo; Neves, Aline; Silva, Emmanuel João; Mendonça, Thais Accorsi; Lourenço, Caroline; Calixto, Camila; Lima, Edson Jorge Moreira

    2015-03-01

    This study aims to evaluate the apical extrusion of debris by two reciprocating single-file systems: WaveOne and Reciproc. A conventional multi-file rotary system was used as a reference for comparison. The hypotheses tested were (i) that the reciprocating single-file systems extrude more debris than the conventional multi-file rotary system and (ii) that the two reciprocating single-file systems extrude similar amounts of dentin debris. After applying solid selection criteria, 80 mesial roots of lower molars were included in the present study. The use of four different instrumentation techniques resulted in four groups (n = 20): G1 (hand-file technique), G2 (ProTaper), G3 (WaveOne), and G4 (Reciproc). The apparatus used to collect apically extruded debris was a typical double-chamber collector. Statistical analysis was performed for multiple comparisons. No significant difference was found in the amount of debris extruded between the two reciprocating systems. In contrast, the conventional multi-file rotary system group extruded significantly more debris than both reciprocating groups. The hand instrumentation group extruded significantly more debris than all other groups. The present results yielded favorable input for both reciprocating single-file systems, inasmuch as they showed improved control of apically extruded debris. Apical extrusion of debris has been studied extensively because of its clinical relevance, particularly since it may cause flare-ups originated by the introduction of bacteria, pulpal tissue, and irrigating solutions into the periapical tissues.

  7. 17 CFR 259.602 - Form ID, uniform application for access codes to file on EDGAR.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...)—allows a filer, filing agent or training agent to log on to the EDGAR system, submit filings, and change... agent to change its Password. [69 FR 22710, Apr. 26, 2004] Editorial Note: For Federal Register... section of the printed volume and on GPO Access. ...

  8. Improving Website Hyperlink Structure Using Server Logs

    PubMed Central

    Paranjape, Ashwin; West, Robert; Zia, Leila; Leskovec, Jure

    2016-01-01

    Good websites should be easy to navigate via hyperlinks, yet maintaining a high-quality link structure is difficult. Identifying pairs of pages that should be linked may be hard for human editors, especially if the site is large and changes frequently. Further, given a set of useful link candidates, the task of incorporating them into the site can be expensive, since it typically involves humans editing pages. In the light of these challenges, it is desirable to develop data-driven methods for automating the link placement task. Here we develop an approach for automatically finding useful hyperlinks to add to a website. We show that passively collected server logs, beyond telling us which existing links are useful, also contain implicit signals indicating which nonexistent links would be useful if they were to be introduced. We leverage these signals to model the future usefulness of yet nonexistent links. Based on our model, we define the problem of link placement under budget constraints and propose an efficient algorithm for solving it. We demonstrate the effectiveness of our approach by evaluating it on Wikipedia, a large website for which we have access to both server logs (used for finding useful new links) and the complete revision history (containing a ground truth of new links). As our method is based exclusively on standard server logs, it may also be applied to any other website, as we show with the example of the biomedical research site Simtk. PMID:28345077
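
The budgeted link-placement problem described above can be illustrated with a simple greedy selection: given modelled usefulness scores for candidate links and a cap on new links per source page, take the highest-value candidates that fit the budget. This is a minimal sketch of the budget idea only; the paper's actual objective and algorithm (which account for interactions among links) are not reproduced here, and all names are hypothetical.

```python
def place_links(candidates, page_budget):
    """Greedy link placement under a per-page budget.

    candidates: list of (source_page, target_page, estimated_value)
    tuples, where estimated_value is a modelled usefulness score
    (e.g. predicted clickthrough). page_budget: maximum number of
    new links allowed per source page.
    Returns the selected links, highest value first.
    """
    chosen = []
    used = {}  # links already placed per source page
    for src, dst, value in sorted(candidates, key=lambda c: -c[2]):
        if used.get(src, 0) < page_budget:
            chosen.append((src, dst, value))
            used[src] = used.get(src, 0) + 1
    return chosen
```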

  9. SedMob: A mobile application for creating sedimentary logs in the field

    NASA Astrophysics Data System (ADS)

    Wolniewicz, Pawel

    2014-05-01

    SedMob is an open-source, mobile software package for creating sedimentary logs, targeted for use in tablets and smartphones. The user can create an unlimited number of logs, save data from each bed in the log as well as export and synchronize the data with a remote server. SedMob is designed as a mobile interface to SedLog: a free multiplatform package for drawing graphic logs that runs on PC computers. Data entered into SedMob are saved in the CSV file format, fully compatible with SedLog.
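
Per-bed export to CSV, as described above, might look like the following sketch. The column names here are hypothetical placeholders; the actual SedLog-compatible column layout is not specified in the abstract.

```python
import csv
import io

# Hypothetical bed fields -- the real SedLog column layout may differ.
FIELDS = ["bed_number", "thickness_m", "lithology", "grain_size", "notes"]

def beds_to_csv(beds):
    """Serialize a list of bed dicts to CSV text, one row per bed."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for bed in beds:
        writer.writerow(bed)
    return buf.getvalue()
```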

  10. 17 CFR 259.602 - Form ID, uniform application for access codes to file on EDGAR.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...)—allows a filer, filing agent or training agent to log on to the EDGAR system, submit filings, and change... agent to change its Password. [69 FR 22710, Apr. 26, 2004] Editorial Note: For Federal Register... section of the printed volume and at www.fdsys.gov. ...

  11. Comparison between moving and stationary transmitter systems in induction logging

    NASA Astrophysics Data System (ADS)

    Poddar, M.; Caleb Dhanasekaran, P.; Prabhakar Rao, K.

    1985-09-01

    In a general treatment of the theory of induction logging, an exact integral representation has been obtained for the mutual impedance between a vertical dipole transmitter and a coaxial dipole receiver in a three layered earth. Based on this representation, a computer model has been devised using the traditional Slingram system of induction logging and the comparatively new Turam system, ignoring borehole effects. The model results indicate that due to its much larger response, the Turam system is in general preferable to the Slingram in mineral and groundwater investigations where formation conductivity much less than 1 S/m is generally encountered. However, if the surrounding media are conductive (more than 0.1 S/m), the Turam system suffers from large amplitude attenuation and phase rotation of the primary field caused by the conductive surrounding, and is less useful than the Slingram system which does not so suffer, unless the target bed is shallow. Because it is a more complex function of system parameters than the corresponding Slingram log, a Turam log can be conveniently interpreted only by the modern inverse method using a fast algorithm for the forward solution and a high speed digital computer.

  12. Small file aggregation in a parallel computing system

    DOEpatents

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
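
The offset-and-length metadata scheme described above can be sketched in a few lines: pack each small file into one blob, record where it landed, and use the record to unpack it later. This is an in-memory illustration of the idea, not the patented implementation.

```python
def aggregate(files):
    """Pack many small files into one aggregated blob plus metadata.

    files: dict mapping file name -> bytes.
    Returns (blob, metadata), where metadata maps each name to its
    (offset, length) within the blob, mirroring the index described above.
    """
    blob = bytearray()
    metadata = {}
    for name, data in files.items():
        metadata[name] = (len(blob), len(data))
        blob.extend(data)
    return bytes(blob), metadata

def unpack(blob, metadata, name):
    """Recover one original file from the aggregated blob."""
    offset, length = metadata[name]
    return blob[offset:offset + length]
```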

  13. Improving File System Performance by Striping

    NASA Technical Reports Server (NTRS)

    Lam, Terance L.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This document discusses the performance and advantages of striped file systems on the SGI AD workstations. Performance of several striped file system configurations are compared and guidelines for optimal striping are recommended.

  14. SU-F-T-230: A Simple Method to Assess Accuracy of Dynamic Wave Arc Irradiation Using An Electronic Portal Imaging Device and Log Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirashima, H; Miyabe, Y; Yokota, K

    2016-06-15

    Purpose: The Dynamic Wave Arc (DWA) technique, where the multi-leaf collimator (MLC) and gantry/ring move simultaneously in a predefined non-coplanar trajectory, has been developed on the Vero4DRT. The aim of this study is to develop a simple method for quality assurance of DWA delivery using electronic portal imaging device (EPID) measurements and log file analysis. Methods: The Vero4DRT has an EPID on the beam axis, the resolution of which is 0.18 mm/pixel at the isocenter plane. EPID images were acquired automatically. To verify the detection accuracy of the MLC position from EPID images, the MLC position with intentional errors was assessed. Tests were designed considering three factors: (1) accuracy of the MLC position, (2) dose output consistency with variable dose rate (160–400 MU/min), gantry speed (2.4–6°/s), and ring speed (0.5–2.5°/s), and (3) MLC speed (1.6–4.2 cm/s). All the patterns were delivered to the EPID and compared with those obtained with a stationary radiation beam at a 0° gantry angle. The irradiation log, including the MLC position and gantry/ring angle, was recorded simultaneously. To perform independent checks of machine accuracy, the MLC position and gantry/ring angle were assessed using log files. Results: A 0.1 mm intentional error can be detected by the EPID, which is smaller than the EPID pixel size. The dose outputs under different conditions of dose rate, gantry/ring speed, and MLC speed showed good agreement, with a root mean square (RMS) error of 0.76%. The RMS errors between the detected and recorded data were 0.1 mm for the MLC position, 0.12° for the gantry angle, and 0.07° for the ring angle. Conclusion: The MLC position and dose outputs under variable conditions during DWA irradiation can be easily verified using EPID measurements and log file analysis. The proposed method is useful for routine verification. This research is (partially) supported by the Practical Research for Innovative

  15. Storing files in a parallel computing system using list-based index to identify replica files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy

    Improved techniques are provided for storing files in a parallel computing system using a list-based index to identify file replicas. A file and at least one replica of the file are stored in one or more storage nodes of the parallel computing system. An index for the file comprises at least one list comprising a pointer to a storage location of the file and a storage location of the at least one replica of the file. The file comprises one or more of a complete file and one or more sub-files. The index may also comprise a checksum value for one or more of the file and the replica(s) of the file. The checksum value can be evaluated to validate the file and/or the file replica(s). A query can be processed using the list.
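
The list-based index with an attached checksum can be illustrated as follows. This is a minimal sketch using CRC32 as a stand-in checksum; the storage-location strings and function names are hypothetical.

```python
import zlib

def build_index(primary_location, replica_locations, data):
    """Build a list-based index entry for a file and its replicas.

    The entry holds pointers (storage locations) plus a checksum that
    can later be evaluated to validate the file or any replica.
    """
    return {
        "locations": [primary_location] + list(replica_locations),
        "checksum": zlib.crc32(data),
    }

def validate(entry, data):
    """Re-compute the checksum for data read back from any location."""
    return zlib.crc32(data) == entry["checksum"]
```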

  16. 17 CFR 249.446 - Form ID, uniform application for access codes to file on EDGAR.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... log on to the EDGAR system, submit filings, and change its CCC. (d) Password Modification Authorization Code (PMAC)—allows a filer, filing agent or training agent to change its Password. [69 FR 22710... Sections Affected, which appears in the Finding Aids section of the printed volume and on GPO Access. ...

  17. Silvabase: A flexible data file management system

    NASA Technical Reports Server (NTRS)

    Lambing, Steven J.; Reynolds, Sandra J.

    1991-01-01

    The need for a more flexible and efficient data file management system for mission planning in the Mission Operations Laboratory (EO) at MSFC has spawned the development of Silvabase. Silvabase is a new data file structure based on a B+ tree data structure. This data organization allows for efficient forward and backward sequential reads, random searches, and appends to existing data. It also provides random insertions and deletions with reasonable efficiency, utilizes storage space well without sacrificing speed, and performs these functions on large volumes of data. Mission planners required that some data be keyed and manipulated in ways not found in a commercial product. Mission planning software is currently being converted to use Silvabase in the Spacelab and Space Station Mission Planning Systems. Silvabase runs on Digital Equipment Corporation's popular VAX/VMS computers in VAX Fortran. Silvabase has unique features involving time histories and intervals such as arise in operations research. Because of its flexibility and unique capabilities, Silvabase could be used in almost any government or commercial application that requires efficient reads, searches, and appends on medium to large amounts of almost any kind of data.

  18. DMFS: A Data Migration File System for NetBSD

    NASA Technical Reports Server (NTRS)

    Studenmund, William

    2000-01-01

    I have recently developed DMFS, a Data Migration File System, for NetBSD. This file system provides kernel support for the data migration system being developed by my research group at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. This paper will first describe our data migration system to provide a context for DMFS, then it will describe DMFS. It also will describe the changes to NetBSD needed to make DMFS work. Then it will give an overview of the file archival and restoration procedures, and describe how some typical user actions are modified by DMFS. Lastly, the paper will present simple performance measurements which indicate that there is little performance loss due to the use of the DMFS layer.

  19. MAIL LOG, program theory, volume 1. [Scout project automatic data system

    NASA Technical Reports Server (NTRS)

    Harris, D. K.

    1979-01-01

    The program theory used to obtain the software package, MAIL LOG, developed for the Scout Project Automatic Data System, SPADS, is described. The program is written in FORTRAN for the PRIME 300 computer system. The MAIL LOG data base consists of three main subfiles: (1) incoming and outgoing mail correspondence; (2) design information releases and reports; and (3) drawings and engineering orders. All subroutine descriptions, flowcharts, and MAIL LOG outputs are given and the data base design is described.

  20. Design and Implementation of a Metadata-rich File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.

  1. 17 CFR 249.446 - Form ID, uniform application for access codes to file on EDGAR.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... log on to the EDGAR system, submit filings, and change its CCC. (d) Password Modification Authorization Code (PMAC)—allows a filer, filing agent or training agent to change its Password. [69 FR 22710... Sections Affected, which appears in the Finding Aids section of the printed volume and at www.fdsys.gov. ...

  2. Collective operations in a file system based execution model

    DOEpatents

    Shinde, Pravin; Van Hensbergen, Eric

    2013-02-12

    A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.
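
The header-control block written by the master application specifies a file-system name, message type, message size, destination, and operation. As a minimal sketch, such a block could be serialized with a fixed binary layout; the field widths and encoding here are hypothetical, since the patent abstract does not specify the actual layout.

```python
import struct

# Hypothetical fixed layout: 32-byte file-system name, two unsigned ints
# (message type, message size), 16-byte destination, 1-byte op code.
HEADER = struct.Struct("!32sII16sB")

def pack_header(fs_name, msg_type, msg_size, destination, op):
    """Serialize a header-control block into network byte order."""
    return HEADER.pack(fs_name.encode().ljust(32, b"\0"),
                       msg_type, msg_size,
                       destination.encode().ljust(16, b"\0"), op)

def unpack_header(raw):
    """Decode a header-control block back into its fields."""
    name, msg_type, msg_size, dest, op = HEADER.unpack(raw)
    return (name.rstrip(b"\0").decode(), msg_type, msg_size,
            dest.rstrip(b"\0").decode(), op)
```

Participating applications could read the same block from the shared synthetic file to learn which multi-pipe operation to perform.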

  3. Collective operations in a file system based execution model

    DOEpatents

    Shinde, Pravin; Van Hensbergen, Eric

    2013-02-19

    A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.

  4. Performance of the Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  5. RAMA: A file system for massively parallel computers

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

    This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.

  6. Methods and apparatus for capture and storage of semantic information with sub-files in a parallel computing system

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-02-03

    Techniques are provided for storing files in a parallel computing system using sub-files with semantically meaningful boundaries. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a plurality of sub-files. The method comprises the steps of obtaining a user specification of semantic information related to the file; providing the semantic information as a data structure description to a data formatting library write function; and storing the semantic information related to the file with one or more of the sub-files in one or more storage nodes of the parallel computing system. The semantic information provides a description of data in the file. The sub-files can be replicated based on semantically meaningful boundaries.

  7. 12 CFR Appendix E to Part 360 - Hold File Structure

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Hold File Structure E Appendix E to Part 360 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. E Appendix E to Part 360—Hold File Structure This is the...

  8. CaseLog: semantic network interface to a student computer-based patient record system.

    PubMed Central

    Cimino, C.; Goldman, E. K.; Curtis, J. A.; Reichgott, M. J.

    1993-01-01

    We have developed a computer program called CaseLog, which serves as an exemplary computer-based patient record (CPR) system. The program introduces students to issues unique to patient record systems, including record security, unique patient identifiers, and the use of controlled vocabularies. A particularly challenging aspect of the development of this program was supporting student entry of controlled vocabulary terms. There were four goals we wished to achieve: students should be able to find the terms they are looking for; once a term has been found, it should be easy to find contextually related terms; it should be easy to determine that a sought-for term is not in the vocabulary; and the structure of the vocabulary should be dynamically altered by contextual information to allow its use for a variety of purposes. We chose a semantic network for our vocabulary structure. Within the processing power of the equipment we were working with, we achieved our goals. This paper will describe the development of the vocabulary, the design of the CaseLog program, and the feedback from student users of the program. PMID:8130581

  9. Virtual file system on NoSQL for processing high volumes of HL7 messages.

    PubMed

    Kimura, Eizen; Ishihara, Ken

    2015-01-01

    The Standardized Structured Medical Information Exchange (SS-MIX) is intended to be the standard repository for HL7 messages that depend on a local file system. However, its scalability is limited. We implemented a virtual file system using NoSQL to incorporate modern computing technology into SS-MIX and allow the system to integrate local patient IDs from different healthcare systems into a universal system. We discuss its implementation using the database MongoDB and describe its performance in a case study.

  10. Parental perceptions of the learner driver log book system in two Australian states.

    PubMed

    Bates, Lyndel; Watson, Barry; King, Mark Johann

    2014-01-01

    Though many jurisdictions internationally now require learner drivers to complete a specified number of hours of supervised driving practice before being able to drive unaccompanied, very few require learner drivers to complete a log book to record this practice and then present it to the licensing authority. Learner drivers in most Australian jurisdictions must complete a log book that records their practice, thereby confirming to the licensing authority that they have met the mandated hours of practice requirement. These log books facilitate the management and enforcement of minimum supervised hours of driving requirements. Parents of learner drivers in 2 Australian states, Queensland and New South Wales, completed an online survey assessing a range of factors, including their perceptions of the accuracy of their child's learner log book and the effectiveness of the log book system. The study indicates that the large majority of parents believe that their child's learner log book is accurate. However, they generally report that the log book system is only moderately effective as a system to measure the number of hours of supervised practice a learner driver has completed. The results of this study suggest the presence of a paradox, with many parents possibly believing that others are not as diligent in the use of log books as they are or that the system is too open to misuse. Given that many parents report that their child's log book is accurate, this study has important implications for the development and ongoing monitoring of hours of practice requirements in graduated driver licensing systems.

  11. Forest structure following tornado damage and salvage logging in northern Maine, USA

    Treesearch

    Shawn Fraver; Kevin J. Dodds; Laura S. Kenefic; Rick Morrill; Robert S. Seymour; Eben Sypitkowski

    2017-01-01

    Understanding forest structural changes resulting from postdisturbance management practices such as salvage logging is critical for predicting forest recovery and developing appropriate management strategies. In 2013, a tornado and subsequent salvage operations in northern Maine, USA, created three conditions (i.e., treatments) with contrasting forest structure:...

  12. Optimizing Input/Output Using Adaptive File System Policies

    NASA Technical Reports Server (NTRS)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
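
    The classification step described above can be sketched as a toy policy selector: given a window of recent file offsets, classify the access pattern and pick a prefetch policy. The heuristic and policy names below are illustrative assumptions, not the paper's actual framework:

```python
def classify_accesses(offsets, block=4096):
    """Classify a window of byte offsets as 'sequential', 'strided', or 'random'."""
    if len(offsets) < 3:
        return "random"
    deltas = [b - a for a, b in zip(offsets, offsets[1:])]
    if all(d == block for d in deltas):
        return "sequential"          # consecutive blocks
    if len(set(deltas)) == 1:
        return "strided"             # constant non-unit stride
    return "random"

def choose_policy(pattern):
    """Map a detected pattern to a (hypothetical) caching/prefetch policy."""
    return {
        "sequential": "readahead-aggressive",
        "strided": "readahead-strided",
        "random": "cache-lru-no-prefetch",
    }[pattern]

seq = [i * 4096 for i in range(8)]
print(choose_policy(classify_accesses(seq)))   # readahead-aggressive
```

In the paper's framework, performance sensors would then tune the chosen policy's parameters (e.g. readahead depth) from feedback; here the mapping is static for brevity.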

  13. 46 CFR Appendix A to Part 530 - Instructions for the Filing of Service Contracts

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... file service contracts. BTCL will direct OIRM to provide approved filers with a log-on ID and password. Filers who wish a third party (publisher) to file their service contracts must so indicate on Form FMC-83... home page, http://www.fmc.gov. A. Registration, Log-on ID and Password To register for filing, a...

  14. 46 CFR Appendix A to Part 530 - Instructions for the Filing of Service Contracts

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... file service contracts. BTCL will direct OIRM to provide approved filers with a log-on ID and password. Filers who wish a third party (publisher) to file their service contracts must so indicate on Form FMC-83... home page, http://www.fmc.gov. A. Registration, Log-on ID and Password To register for filing, a...

  15. 6. Log calving barn. Interior view showing log post-and-beam support ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. Log calving barn. Interior view showing log post-and-beam support system and animal stalls. - William & Lucina Bowe Ranch, Log Calving Barn, 230 feet south-southwest of House, Melrose, Silver Bow County, MT

  16. A simulation-based approach for evaluating logging residue handling systems.

    Treesearch

    B. Bruce Bare; Benjamin A. Jayne; Brian F. Anholt

    1976-01-01

    Describes a computer simulation model for evaluating logging residue handling systems. The flow of resources is traced through a prespecified combination of operations including yarding, chipping, sorting, loading, transporting, and unloading. The model was used to evaluate the feasibility of converting logging residues to chips that could be used, for example, to...

  17. Expansion of the roadway reference log : KYSPR-99-201.

    DOT National Transportation Integrated Search

    2000-05-01

    The objectives of this study were to: 1) expand the current route log to include milepoints for all intersections on state maintained roads and 2) recommend a procedure for establishing milepoints and maintaining the file with up-to-date information....

  18. Pastime--A System for File Compression.

    ERIC Educational Resources Information Center

    Hultgren, Jan; Larsson, Rolf

    An interactive search and editing system, 3RIP, is being developed at the library of the Royal Institute of Technology in Stockholm for large files of textual and numeric data. A substantial part (on the order of 10^9 characters) of the primary file of the search system will consist of bibliographic references from a wide range of sources. If the…

  19. 78 FR 21930 - Aquenergy Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-12

    ... Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application Document, and Approving Use of the Traditional Licensing Process a. Type of Filing: Notice of Intent to File License...: November 11, 2012. d. Submitted by: Aquenergy Systems, Inc., a fully owned subsidiary of Enel Green Power...

  20. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. Mcp and msum provide significant performance improvements over standard cp and md5sum using multiple types of parallelism and other optimizations. The total speed-ups from all improvements are significant. Mcp improves cp performance over 27x, msum improves md5sum performance almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so are easily used and are available for download as open source software at http://mutil.sourceforge.net.
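
    The hash-tree idea above, hashing chunks in parallel and then hashing the ordered chunk digests, can be sketched as follows. This illustrates the technique only; it is not msum's actual algorithm or output format:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tree_hash(data: bytes, chunk_size: int = 1 << 16) -> str:
    """Hash fixed-size chunks in parallel, then hash the ordered digests.
    Note: the result differs from a plain md5 of the whole input; like
    msum's hash trees, it trades format compatibility for parallelism."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        digests = list(pool.map(lambda c: hashlib.md5(c).digest(), chunks))
    root = hashlib.md5()
    for d in digests:            # order is preserved by pool.map
        root.update(d)
    return root.hexdigest()

data = bytes(range(256)) * 1024  # 256 KiB of sample data
print(tree_hash(data) == tree_hash(data))   # True: deterministic
```

Because each chunk digest depends only on its own bytes, the chunks can be spread across threads (or, in mcp/msum, across nodes) while the final root digest remains deterministic.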

  1. Non-volatile main memory management methods based on a file system.

    PubMed

    Oikawa, Shuichi

    2014-01-01

    There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are examples. Such NV memory can be used as storage because it retains data without a power supply, and as main memory because its performance matches that of DRAM. A number of studies have investigated its use for main memory and for storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration makes NV memory simultaneously utilizable as both main memory and storage. The presented methods use a file system as their basis for NV memory management. We implemented the proposed methods in the Linux kernel, and performed the evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects of the longer access latency of NV memory by cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.

  2. An Ephemeral Burst-Buffer File System for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Moody, Adam; Yu, Weikuan

    BurstFS is a distributed file system for node-local burst buffers on high performance computing systems. BurstFS presents a shared file system space across the burst buffers, so that applications that use shared files can access the highly scalable burst buffers without modification.

  3. From Log Files to Assessment Metrics: Measuring Students' Science Inquiry Skills Using Educational Data Mining

    ERIC Educational Resources Information Center

    Gobert, Janice D.; Sao Pedro, Michael; Raziuddin, Juelaila; Baker, Ryan S.

    2013-01-01

    We present a method for assessing science inquiry performance, specifically for the inquiry skill of designing and conducting experiments, using educational data mining on students' log data from online microworlds in the Inq-ITS system (Inquiry Intelligent Tutoring System; www.inq-its.org). In our approach, we use a 2-step process: First we use…

  4. An integrated 3D log processing optimization system for small sawmills in central Appalachia

    Treesearch

    Wenshu Lin; Jingxin Wang

    2013-01-01

    An integrated 3D log processing optimization system was developed to perform 3D log generation, opening face determination, headrig log sawing simulation, fl itch edging and trimming simulation, cant resawing, and lumber grading. A circular cross-section model, together with 3D modeling techniques, was used to reconstruct 3D virtual logs. Internal log defects (knots)...

  5. pcircle - A Suite of Scalable Parallel File System Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WANG, FEIYI

    2015-10-01

    Most file system tools are written for conventional local file systems; they are serial and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on ubiquitous MPI in cluster computing environments and a "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
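
    The work-stealing pattern mentioned above can be sketched with threads standing in for MPI ranks; this is an illustrative toy, not pcircle's MPI implementation:

```python
import threading
from collections import deque

def run_work_stealing(tasks, n_workers=4):
    """Toy work-stealing pool: each worker owns a deque and pops from one
    end; idle workers steal from the opposite end of a victim's deque
    (pcircle applies this idea across MPI ranks rather than threads)."""
    queues = [deque() for _ in range(n_workers)]
    for i, t in enumerate(tasks):
        queues[i % n_workers].append(t)
    results, lock = [], threading.Lock()

    def worker(wid):
        while True:
            try:
                task = queues[wid].popleft()        # own work: front
            except IndexError:
                task = None
                for victim in range(n_workers):     # steal from the back
                    if victim != wid:
                        try:
                            task = queues[victim].pop()
                            break
                        except IndexError:
                            continue
                if task is None:
                    return                          # nothing left anywhere
            with lock:
                results.append(task())

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

out = run_work_stealing([lambda i=i: i * i for i in range(10)])
print(sorted(out))   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Stealing from the end opposite the owner's reduces contention: owner and thief rarely touch the same element, which is the property that lets the pattern scale.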

  6. SU-C-BRD-03: Analysis of Accelerator Generated Text Logs for Preemptive Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, CM; Baydush, AH; Nguyen, C

    2014-06-15

    Purpose: To develop a model to analyze medical accelerator generated parameter and performance data that will provide an early warning of performance degradation and impending component failure. Methods: A robust 6 MV VMAT quality assurance treatment delivery was used to test the constancy of accelerator performance. The generated text log files were decoded and analyzed using statistical process control (SPC) methodology. The text file data is a single snapshot of energy-specific and overall system parameters. A total of 36 system parameters were monitored, including RF generation, electron gun control, energy control, beam uniformity control, DC voltage generation, and cooling systems. The parameters were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and the parameter/system specification. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on: the value of 1 standard deviation from the mean operating parameter of 483 TB systems, a small fraction (≤ 5%) of the operating range, or a fraction of the minor fault deviation. Results: There were 34 parameters in which synthetic errors were introduced. There were 2 parameters (radial position steering coil, and positive 24V DC) in which the errors did not exceed the limit of the I/MR chart. The I chart limit was exceeded for all of the remaining parameters (94.2%). The MR chart limit was exceeded in 29 of the 32 parameters (85.3%) in which the I chart limit was exceeded. Conclusion: Statistical process control I/MR evaluation of text log file parameters may be effective in providing an early warning of performance degradation or component failure for digital medical accelerator systems. Research supported by Varian Medical Systems, Inc.
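
    The I/MR chart limits referred to above follow standard SPC practice: the Individuals chart uses the mean ± 2.66 × (mean moving range), and the Moving Range chart's upper limit is 3.267 × (mean moving range), where 2.66 = 3/d2 and 3.267 = D4 for subgroups of size 2. A minimal sketch with illustrative readings (not the paper's data):

```python
def imr_limits(values):
    """Control limits for Individual (I) and Moving Range (MR) charts,
    using the standard constants 2.66 (= 3/d2) and 3.267 (= D4) for n=2.
    Returns (lower, center, upper) for each chart."""
    mean = sum(values) / len(values)
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(mrs) / len(mrs)
    return {
        "I": (mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar),
        "MR": (0.0, mr_bar, 3.267 * mr_bar),
    }

# e.g. successive readings of one accelerator parameter (made-up numbers)
readings = [24.1, 24.0, 24.2, 23.9, 24.1, 24.0, 24.3, 24.1]
limits = imr_limits(readings)
print("I limits:", limits["I"])
print("MR limits:", limits["MR"])
```

A new reading outside the I limits, or a jump whose moving range exceeds the MR upper limit, would flag the kind of parameter drift the paper uses for preemptive maintenance.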

  7. Final Report for File System Support for Burst Buffers on HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, W.; Mohror, K.

    Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations carries great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV: a Key-Value Store for Metadata Management of Distributed Burst Buffers, a user-level file system with multiple backends, and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.

  8. Leak checker data logging system

    DOEpatents

    Gannon, J.C.; Payne, J.J.

    1996-09-03

    A portable, high speed, computer-based data logging system for field testing systems or components located some distance apart employs a plurality of spaced mass spectrometers and is particularly adapted for monitoring the vacuum integrity of a long string of superconducting magnets such as used in high energy particle accelerators. The system provides precise tracking of a gas such as helium through the magnet string when the helium is released into the vacuum by monitoring the spaced mass spectrometers, allowing for control, display and storage of various parameters involved with leak detection and localization. A system user can observe the flow of helium through the magnet string on a real-time basis from the exact moment of opening of the helium input valve. Graph readings can be normalized to compensate for magnet sections that deplete vacuum faster than other sections between testing to permit repetitive testing of vacuum integrity in reduced time. 18 figs.

  9. Leak checker data logging system

    DOEpatents

    Gannon, Jeffrey C.; Payne, John J.

    1996-01-01

    A portable, high speed, computer-based data logging system for field testing systems or components located some distance apart employs a plurality of spaced mass spectrometers and is particularly adapted for monitoring the vacuum integrity of a long string of superconducting magnets such as used in high energy particle accelerators. The system provides precise tracking of a gas such as helium through the magnet string when the helium is released into the vacuum by monitoring the spaced mass spectrometers, allowing for control, display and storage of various parameters involved with leak detection and localization. A system user can observe the flow of helium through the magnet string on a real-time basis from the exact moment of opening of the helium input valve. Graph readings can be normalized to compensate for magnet sections that deplete vacuum faster than other sections between testing to permit repetitive testing of vacuum integrity in reduced time.

  10. Ontology based log content extraction engine for a posteriori security control.

    PubMed

    Azkia, Hanieh; Cuppens-Boulahia, Nora; Cuppens, Frédéric; Coatrieux, Gouenou

    2012-01-01

    In a posteriori access control, users are accountable for actions they performed and must provide evidence, when required by some legal authorities for instance, to prove that these actions were legitimate. Generally, log files contain the needed data to achieve this goal. This logged data can be recorded in several formats; we consider here IHE-ATNA (Integrating the healthcare enterprise-Audit Trail and Node Authentication) as log format. The difficulty lies in extracting useful information regardless of the log format. A posteriori access control frameworks often include a log filtering engine that provides this extraction function. In this paper we define and enforce this function by building an IHE-ATNA based ontology model, which we query using SPARQL, and show how the a posteriori security controls are made effective and easier based on this function.
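
    The log content extraction described above can be illustrated with a SPARQL sketch against such an ontology; the atna: property names below are hypothetical stand-ins, not the actual IHE-ATNA ontology terms:

```sparql
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>
PREFIX atna: <http://example.org/ihe-atna#>   # hypothetical namespace

# Retrieve all audit events performed by one user since a given date,
# regardless of how the underlying log was originally formatted.
SELECT ?event ?time ?action
WHERE {
  ?event a atna:AuditMessage ;
         atna:activeParticipantUserID "dr.smith" ;
         atna:eventDateTime ?time ;
         atna:eventActionCode ?action .
  FILTER (?time >= "2012-01-01T00:00:00"^^xsd:dateTime)
}
ORDER BY ?time
```

Because the query targets ontology classes and properties rather than raw log lines, the same extraction works over any log source that has been mapped into the IHE-ATNA model.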

  11. Linking log quality with product performance

    Treesearch

    D. W. Green; Robert Ross

    1997-01-01

    In the United States, log grading procedures use visual assessment of defects, in relation to the log scaling diameter, to estimate the yield of lumber that maybe expected from the log. This procedure was satisfactory when structural grades were based only on defect size and location. In recent years, however, structural products have increasingly been graded using a...

  12. 12 CFR Appendix C to Part 360 - Deposit File Structure

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... structure for the data file to provide deposit data to the FDIC. If data or information are not maintained... covered institution's understanding of its customers and the data maintained around deposit accounts... complete its insurance determination process, it may add this information to the end of this data file...

  13. Distributed PACS using distributed file system with hierarchical meta data servers.

    PubMed

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs that exist in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. Because the entire file need not be accessed for every operation, tasks such as finding files and changing titles can be performed at high speed. At the same time, because a distributed file system is used, access to image files is also fast and highly fault tolerant. A further significant point of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralized, it is the weak point of the system. To address this defect, hierarchical metadata servers are introduced, which both increases fault tolerance and improves the scalability of file access. To evaluate the proposed system, a prototype using Gfarm was implemented, and file search times under Gfarm and NFS were compared.
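
    The metadata/image split described above can be sketched in miniature; the class, field names, and in-memory stores below are illustrative assumptions, not the paper's implementation:

```python
import hashlib

class TinyPACS:
    """Toy sketch of the paper's split: small metadata records live in an
    index (the 'metadata server'); bulky image bytes go to a blob store
    keyed by content hash (standing in for the distributed file system)."""
    def __init__(self):
        self.meta = {}    # study_id -> metadata dict
        self.blobs = {}   # content hash -> image bytes

    def store(self, study_id, metadata, image_bytes):
        key = hashlib.sha1(image_bytes).hexdigest()
        self.meta[study_id] = {**metadata, "blob": key}
        self.blobs[key] = image_bytes

    def find(self, **criteria):
        """Search touches only the small metadata index, never the images."""
        return [sid for sid, m in self.meta.items()
                if all(m.get(k) == v for k, v in criteria.items())]

    def retitle(self, study_id, title):
        self.meta[study_id]["title"] = title   # no image rewrite needed

pacs = TinyPACS()
pacs.store("S1", {"modality": "CT", "title": "chest"}, b"\x00" * 512)
pacs.store("S2", {"modality": "MR", "title": "knee"}, b"\x01" * 512)
print(pacs.find(modality="CT"))   # ['S1']
```

Searches and title changes never touch the blob store, which is why the paper's design makes those operations fast and why integrating several PACSs only requires merging the metadata indexes.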

  14. Description and Use of the Data Files on Military Careers. Information System for Vocational Decisions.

    ERIC Educational Resources Information Center

    Yee, Patricia; Seltzer, Joanna

    This paper summarizes the contents, structure and possible uses of the Information System for Vocational Decisions (ISVD) data file on military jobs in the 3 major services. In all, 170 specific career fields for enlisted men and 34 for officers are included in the data file, which also provides for converting the inquirer's personal…

  15. Fort Bliss Geothermal Area Data: Temperature profile, logs, schematic model and cross section

    DOE Data Explorer

    Adam Brandt

    2015-11-15

    This dataset contains a variety of data about the Fort Bliss geothermal area, part of the southern portion of the Tularosa Basin, New Mexico. The dataset contains schematic models for the McGregor Geothermal System and Century OH logs, a full temperature profile, and complete logs from well RMI 56-5, including resistivity and porosity data, drill logs with drill rate, depth, lithology, mineralogy, fractures, temperature, pit total, gases, and descriptions among other measurements, as well as CDL, CNL, DIL, GR Caliper, and Temperature files. A shallow (2 meter depth) temperature survey of the Fort Bliss geothermal area with 63 data points is also included. Two cross sections through the Fort Bliss area, also included, show well position and depth. The surface map included shows faults and well spatial distribution. Also included are inferred and observed fault distributions from gravity surveys around the Fort Bliss geothermal area.

  16. NCEP BUFR File Structure

    Science.gov Websites

    ... These tables may be defined within a separate ASCII text file (see Description and Format of BUFR Tables). ... At run time, the BUFR tables are usually read from an external ASCII text file (although it is also possible ...) ... reports. ... the ASCII text file (called /nwprod/fix/bufrtab.002 on the NCEP CCS machines) ...

  17. Designing efficient logging systems for northern hardwoods using equipment production capabilities and costs.

    Treesearch

    R.B. Gardner

    1966-01-01

    Describes a typical logging system used in the Lake and Northeastern States, discusses each step in the operation, and presents a simple method for designing an efficient logging system for such an operation. Points out that a system should always be built around the key piece of equipment, which is usually the skidder. Specific equipment types and their production...

  18. Motives of Log Schemes

    NASA Astrophysics Data System (ADS)

    Howell, Nicholas L.

    This thesis introduces two notions of motive associated to a log scheme. We introduce a category of log motives à la Voevodsky, and prove that the embedding of Voevodsky motives is an equivalence, in particular proving that any homotopy-invariant cohomology theory of schemes extends uniquely to log schemes. In the case of a log smooth degeneration, we give an explicit construction of the motivic Albanese of the degeneration, and show that the Hodge realization of this construction gives the Albanese of the limit Hodge structure.

  19. Tuning HDF5 subfiling performance on parallel file systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single-shared-file approach that instigates the lock contention problems on parallel file systems and having one file per process, which results in generating a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature with parallel file systems of the Cray XC40 system at NERSC (Cori), which include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a 1.2X to 6X performance advantage with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets used to store files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of using the subfiling feature.
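
    The core subfiling compromise, grouping ranks so each group shares one file, can be sketched as a mapping function; the blocked round-robin grouping below is illustrative, not necessarily HDF5's actual rank-to-subfile assignment:

```python
def subfile_assignment(rank: int, n_ranks: int, n_subfiles: int) -> int:
    """Map an MPI rank to a subfile index: consecutive ranks share a
    subfile, so n_subfiles files replace one fully shared file (and the
    per-file writer count, hence lock contention, drops accordingly)."""
    ranks_per_file = (n_ranks + n_subfiles - 1) // n_subfiles  # ceil division
    return rank // ranks_per_file

# 8 ranks writing to 4 subfiles: two ranks per subfile
print([subfile_assignment(r, 8, 4) for r in range(8)])  # [0, 0, 1, 1, 2, 2, 3, 3]
```

Tuning n_subfiles moves the system along the spectrum the abstract describes: n_subfiles = 1 is the fully shared file, n_subfiles = n_ranks is file-per-process.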

  20. Magnetic susceptibility well-logging unit with single power supply thermoregulation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seeley, R. L.

    1985-11-05

    The magnetic susceptibility well-logging unit with single power supply thermoregulation system provides power from a single surface power supply over a well-logging cable to an integrated circuit voltage regulator system downhole. This voltage regulator system supplies regulated voltages to a temperature control system and also to a Maxwell bridge sensing unit which includes the solenoid of a magnetic susceptibility probe. The temperature control system is provided with power from the voltage regulator system and operates to permit one of several predetermined temperatures to be chosen, and then operates to maintain the solenoid of a magnetic susceptibility probe at this chosen temperature. The temperature control system responds to a temperature sensor mounted upon the probe solenoid to cause resistance heaters concentrically spaced from the probe solenoid to maintain the chosen temperature. A second temperature sensor on the probe solenoid provides a temperature signal to a temperature transmitting unit, which initially converts the sensed temperature to a representative voltage. This voltage is then converted to a representative current signal which is transmitted by current telemetry over the well logging cable to a surface electronic unit which then reconverts the current signal to a voltage signal.

  1. Transaction Logging.

    ERIC Educational Resources Information Center

    Jones, S.; And Others

    1997-01-01

    Discusses the use of transaction logging in Okapi-related projects to allow search algorithms and user interfaces to be investigated, evaluated, and compared. A series of examples is presented, illustrating logging software for character-based and graphical user interface systems, and demonstrating the usefulness of relational database management…

  2. Experimental Directory Structure (Exdir): An Alternative to HDF5 Without Introducing a New File Format

    PubMed Central

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E.; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders

    2018-01-01

    Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from

  3. Experimental Directory Structure (Exdir): An Alternative to HDF5 Without Introducing a New File Format.

    PubMed

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders

    2018-01-01

    Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from
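
    An Exdir-style layout can be sketched with the standard library alone; real Exdir stores metadata as YAML and datasets as NumPy .npy files, for which JSON and raw bytes stand in here, and the group and attribute names are invented for illustration:

```python
import json
import tempfile
from pathlib import Path

def write_exdir_like(root: Path) -> Path:
    """Sketch of an Exdir-style layout: directories form the hierarchy,
    metadata lives in a human-readable text file, and datasets are plain
    binary files, so ordinary tools (and version control) see everything."""
    grp = root / "session1" / "lfp"          # nested groups = directories
    grp.mkdir(parents=True)
    (grp / "attributes.json").write_text(
        json.dumps({"sampling_rate_hz": 1000, "units": "uV"}, indent=2))
    (grp / "data.bin").write_bytes(bytes(range(16)))   # stand-in dataset
    return grp

root = Path(tempfile.mkdtemp())
grp = write_exdir_like(root)
print(sorted(p.name for p in grp.iterdir()))   # ['attributes.json', 'data.bin']
```

Unlike a single HDF5 container, each group and dataset here is an ordinary file system object, which is what makes the layout human-readable and friendly to version control.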

  4. Console Log Keeping Made Easier - Tools and Techniques for Improving Quality of Flight Controller Activity Logs

    NASA Technical Reports Server (NTRS)

    Scott, David W.; Underwood, Debrah (Technical Monitor)

    2002-01-01

    At the Marshall Space Flight Center's (MSFC) Payload Operations Integration Center (POIC) for International Space Station (ISS), each flight controller maintains detailed logs of activities and communications at their console position. These logs are critical for accurately controlling flight in real-time as well as providing a historical record and troubleshooting tool. This paper describes logging methods and electronic formats used at the POIC and provides food for thought on their strengths and limitations, plus proposes some innovative extensions. It also describes an inexpensive PC-based scheme for capturing and/or transcribing audio clips from communications consoles. Flight control activity (e.g. interpreting computer displays, entering data/issuing electronic commands, and communicating with others) can become extremely intense. It's essential to document it well, but the effort to do so may conflict with actual activity. This can be more than just annoying, as what's in the logs (or just as importantly not in them) often feeds back directly into the quality of future operations, whether short-term or long-term. In earlier programs, such as Spacelab, log keeping was done on paper, often using position-specific shorthand, and the other reader was at the mercy of the writer's penmanship. Today, user-friendly software solves the legibility problem and can automate date/time entry, but some content may take longer to finish due to individual typing speed and less use of symbols. File layout can be used to great advantage in making types of information easy to find, and creating searchable master logs for a given position is very easy and a real lifesaver in reconstructing events or researching a given topic. We'll examine log formats from several console position, and the types of information that are included and (just as importantly) excluded. We'll also look at when a summary or synopsis is effective, and when extensive detail is needed.

  5. Reliable file sharing in distributed operating system using web RTC

    NASA Astrophysics Data System (ADS)

    Dukiya, Rajesh

    2017-12-01

    Since, the evolution of distributed operating system, distributed file system is come out to be important part in operating system. P2P is a reliable way in Distributed Operating System for file sharing. It was introduced in 1999, later it became a high research interest topic. Peer to Peer network is a type of network, where peers share network workload and other load related tasks. A P2P network can be a period of time connection, where a bunch of computers connected by a USB (Universal Serial Bus) port to transfer or enable disk sharing i.e. file sharing. Currently P2P requires special network that should be designed in P2P way. Nowadays, there is a big influence of browsers in our life. In this project we are going to study of file sharing mechanism in distributed operating system in web browsers, where we will try to find performance bottlenecks which our research will going to be an improvement in file sharing by performance and scalability in distributed file systems. Additionally, we will discuss the scope of Web Torrent file sharing and free-riding in peer to peer networks.

  6. Geology of the surficial aquifer system, Dade County, Florida; lithologic logs

    USGS Publications Warehouse

    Causaras, C.R.

    1986-01-01

    The geologic framework of the surficial aquifer system in Dade County, Florida, was investigated as part of a longterm study by the USGS in cooperation with the South Florida Water Management District, to describe the geology, hydrologic characteristics, and groundwater quality of the surficial aquifer system. Thirty-three test wells were drilled completely through the surficial aquifer system and into the underlying, relatively impermeable units of the Tamiami and Hawthorn Formations. Detailed lithologic logs were made from microscopic examination of rock cuttings and cores obtained from these wells. The logs were used to prepare geologic sections that show the lithologic variations, thickness of the lithologic units, and different geologic formations that comprise the aquifers system. (Author 's abstract)

  7. DIY soundcard based temperature logging system. Part II: applications

    NASA Astrophysics Data System (ADS)

    Nunn, John

    2016-11-01

    This paper demonstrates some simple applications of how temperature logging systems may be used to monitor simple heat experiments, and how the data obtained can be analysed to get some additional insight into the physical processes.

  8. Cost of wetland protection using cable logging systems

    Treesearch

    Chris B. LeDoux; John E. Baumgras

    1990-01-01

    Forest managers, loggers, land-use planners, and other decision makers need an understanding of estimating the cost of protecting wetlands using cable logging systems to harvest timber products. Results suggest that protection costs can range from $244.75 to $489.50 per acre depending on the degree of protection desired.

  9. NASA work unit system file maintenance manual

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The NASA Work Unit System is a management information system for research tasks (i.e., work units) performed under NASA grants and contracts. It supplies profiles on research efforts and statistics on fund distribution. The file maintenance operator can add, delete and change records at a remote terminal or can submit punched cards to the computer room for batch update. The system is designed for file maintenance by a person with little or no knowledge of data processing techniques.

  10. The new idea of transporting tailings-logs in tailings slurry pipeline and the innovation of technology of mining waste-fill method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin Yu; Wang Fuji; Tao Yan

    2000-07-01

    This paper introduced a new idea of transporting mine tailings-logs in mine tailings-slurry pipeline and a new technology of mine cemented filing of tailings-logs with tailings-slurry. The hydraulic principles, the compaction of tailings-logs and the mechanic function of fillbody of tailings-logs cemented by tailings-slurry have been discussed.

  11. File-System Workload on a Scientific Multiprocessor

    NASA Technical Reports Server (NTRS)

    Kotz, David; Nieuwejaar, Nils

    1995-01-01

    Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors a their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize 1/0 in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests-in live, multiprogramming, parallel workloads (rather than from selected or nonparallel applications). In this article, we present the first results from the project: a characterization of the file-system workload an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.

  12. Expert systems for automated correlation and interpretation of wireline logs

    USGS Publications Warehouse

    Olea, R.A.

    1994-01-01

    CORRELATOR is an interactive computer program for lithostratigraphic correlation of wireline logs able to store correlations in a data base with a consistency, accuracy, speed, and resolution that are difficult to obtain manually. The automatic determination of correlations is based on the maximization of a weighted correlation coefficient using two wireline logs per well. CORRELATOR has an expert system to scan and flag incongruous correlations in the data base. The user has the option to accept or disregard the advice offered by the system. The expert system represents knowledge through production rules. The inference system is goal-driven and uses backward chaining to scan through the rules. Work in progress is used to illustrate the potential that a second expert system with a similar architecture for interpreting dip diagrams could have to identify episodes-as those of interest in sequence stratigraphy and fault detection- and annotate them in the stratigraphic column. Several examples illustrate the presentation. ?? 1994 International Association for Mathematical Geology.

  13. Structure, porosity and stress regime of the upper oceanic crust: Sonic and ultrasonic logging of DSDP Hole 504B

    USGS Publications Warehouse

    Newmark, R.L.; Anderson, R.N.; Moos, D.; Zoback, M.D.

    1985-01-01

    The layered structure of the oceanic crust is characterized by changes in geophysical gradients rather than by abrupt layer boundaries. Correlation of geophysical logs and cores recovered from DSDP Hole 504B provides some insight into the physical properties which control these gradient changes. Borehole televiewer logging in Hole 504B provides a continuous image of wellbore reflectivity into the oceanic crust, revealing detailed structures not apparent otherwise, due to the low percentage of core recovery. Physical characteristics of the crustal layers 2A, 2B and 2C such as the detailed sonic velocity and lithostratigraphic structure are obtained through analysis of the sonic, borehole televiewer and electrical resistivity logs. A prediction of bulk hydrated mineral content, consistent with comparison to the recovered material, suggests a change in the nature of the alteration with depth. Data from the sonic, borehole televiewer, electrical resistivity and other porosity-sensitive logs are used to calculate the variation of porosity in the crustal layers 2A, 2B and 2C. Several of the well logs which are sensitive to the presence of fractures and open porosity in the formation indicate many zones of intense fracturing. Interpretation of these observations suggests that there may be a fundamental pattern of cooling-induced structure in the oceanic crust. ?? 1985.

  14. Tuning HDF5 for Lustre File Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Koziol, Quincey; Knaak, David

    2010-09-24

    HDF5 is a cross-platform parallel I/O library that is used by a wide variety of HPC applications for the flexibility of its hierarchical object-database representation of scientific data. We describe our recent work to optimize the performance of the HDF5 and MPI-IO libraries for the Lustre parallel file system. We selected three different HPC applications to represent the diverse range of I/O requirements, and measured their performance on three different systems to demonstrate the robustness of our optimizations across different file system configurations and to validate our optimization strategy. We demonstrate that the combined optimizations improve HDF5 parallel I/O performancemore » by up to 33 times in some cases running close to the achievable peak performance of the underlying file system and demonstrate scalable performance up to 40,960-way concurrency.« less

  15. Personalization of structural PDB files.

    PubMed

    Woźniak, Tomasz; Adamiak, Ryszard W

    2013-01-01

    PDB format is most commonly applied by various programs to define three-dimensional structure of biomolecules. However, the programs often use different versions of the format. Thus far, no comprehensive solution for unifying the PDB formats has been developed. Here we present an open-source, Python-based tool called PDBinout for processing and conversion of various versions of PDB file format for biostructural applications. Moreover, PDBinout allows to create one's own PDB versions. PDBinout is freely available under the LGPL licence at http://pdbinout.ibch.poznan.pl.

  16. Acoustic sorting models for improved log segregation

    Treesearch

    Xiping Wang; Steve Verrill; Eini Lowell; Robert J. Ross; Vicki L. Herian

    2013-01-01

    In this study, we examined three individual log measures (acoustic velocity, log diameter, and log vertical position in a tree) for their ability to predict average modulus of elasticity (MOE) and grade yield of structural lumber obtained from Douglas-fir (Pseudotsuga menziesii [Mirb. Franco]) logs. We found that log acoustic velocity only had a...

  17. 12 CFR Appendix D to Part 360 - Sweep/Automated Credit Account File Structure

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .../Automated Credit Account File Structure This is the structure of the data file to provide information to the... remainder of the data fields defined below should be populated. For data provided in the Sweep/Automated... number. The Account Identifier may be composed of more than one physical data element. If multiple fields...

  18. 12 CFR Appendix D to Part 360 - Sweep/Automated Credit Account File Structure

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .../Automated Credit Account File Structure This is the structure of the data file to provide information to the... remainder of the data fields defined below should be populated. For data provided in the Sweep/Automated... number. The Account Identifier may be composed of more than one physical data element. If multiple fields...

  19. 12 CFR Appendix D to Part 360 - Sweep/Automated Credit Account File Structure

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .../Automated Credit Account File Structure This is the structure of the data file to provide information to the... remainder of the data fields defined below should be populated. For data provided in the Sweep/Automated... number. The Account Identifier may be composed of more than one physical data element. If multiple fields...

  20. 12 CFR Appendix D to Part 360 - Sweep/Automated Credit Account File Structure

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .../Automated Credit Account File Structure This is the structure of the data file to provide information to the... remainder of the data fields defined below should be populated. For data provided in the Sweep/Automated... number. The Account Identifier may be composed of more than one physical data element. If multiple fields...

  1. Remote file inquiry (RFI) system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    System interrogates and maintains user-definable data files from remote terminals, using English-like, free-form query language easily learned by persons not proficient in computer programming. System operates in asynchronous mode, allowing any number of inquiries within limitation of available core to be active concurrently.

  2. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

    Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue;more » and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.« less

  3. 12 CFR Appendix B to Part 360 - Debit/Credit File Structure

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Debit/Credit File Structure B Appendix B to Part 360 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. B Appendix B to Part 360—Debit/Credit File...

  4. Digital Libraries: The Next Generation in File System Technology.

    ERIC Educational Resources Information Center

    Bowman, Mic; Camargo, Bill

    1998-01-01

    Examines file sharing within corporations that use wide-area, distributed file systems. Applications and user interactions strongly suggest that the addition of services typically associated with digital libraries (content-based file location, strongly typed objects, representation of complex relationships between documents, and extrinsic…

  5. USGS Polar Temperature Logging System, Description and Measurement Uncertainties

    USGS Publications Warehouse

    Clow, Gary D.

    2008-01-01

    This paper provides an updated technical description of the USGS Polar Temperature Logging System (PTLS) and a complete assessment of the measurement uncertainties. This measurement system is used to acquire subsurface temperature data for climate-change detection in the polar regions and for reconstructing past climate changes using the 'borehole paleothermometry' inverse method. Specifically designed for polar conditions, the PTLS can measure temperatures as low as -60 degrees Celsius with a sensitivity ranging from 0.02 to 0.19 millikelvin (mK). A modular design allows the PTLS to reach depths as great as 4.5 kilometers with a skid-mounted winch unit or 650 meters with a small helicopter-transportable unit. The standard uncertainty (uT) of the ITS-90 temperature measurements obtained with the current PTLS range from 3.0 mK at -60 degrees Celsius to 3.3 mK at 0 degrees Celsius. Relative temperature measurements used for borehole paleothermometry have a standard uncertainty (urT) whose upper limit ranges from 1.6 mK at -60 degrees Celsius to 2.0 mK at 0 degrees Celsius. The uncertainty of a temperature sensor's depth during a log depends on specific borehole conditions and the temperature near the winch and thus must be treated on a case-by-case basis. However, recent experience indicates that when logging conditions are favorable, the 4.5-kilometer system is capable of producing depths with a standard uncertainty (uZ) on the order of 200-250 parts per million.

  6. 75 FR 27986 - Electronic Filing System-Web (EFS-Web) Contingency Option

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-19

    ...] Electronic Filing System--Web (EFS-Web) Contingency Option AGENCY: United States Patent and Trademark Office... availability of its patent electronic filing system, Electronic Filing System--Web (EFS-Web) by providing a new contingency option when the primary portal to EFS-Web has an unscheduled outage. Previously, the entire EFS...

  7. The medium is NOT the message or Indefinitely long-term file storage at Leeds University

    NASA Technical Reports Server (NTRS)

    Holdsworth, David

    1996-01-01

    Approximately 3 years ago we implemented an archive file storage system which embodies experiences gained over more than 25 years of using and writing file storage systems. It is the third in-house system that we have written, and all three systems have been adopted by other institutions. This paper discusses the requirements for long-term data storage in a university environment, and describes how our present system is designed to meet these requirements indefinitely. Particular emphasis is laid on experiences from past systems, and their influence on current system design. We also look at the influence of the IEEE-MSS standard. We currently have the system operating in five UK universities. The system operates in a multi-server environment, and is currently operational with UNIX (SunOS4, Solaris2, SGI-IRIX, HP-UX), NetWare3 and NetWare4. PCs logged on to NetWare can also archive and recover files that live on their hard disks.

  8. The storage system of PCM based on random access file system

    NASA Astrophysics Data System (ADS)

    Han, Wenbing; Chen, Xiaogang; Zhou, Mi; Li, Shunfen; Li, Gezi; Song, Zhitang

    2016-10-01

    Emerging memory technologies such as Phase change memory (PCM) tend to offer fast, random access to persistent storage with better scalability. It's a hot topic of academic and industrial research to establish PCM in storage hierarchy to narrow the performance gap. However, the existing file systems do not perform well with the emerging PCM storage, which access storage medium via a slow, block-based interface. In this paper, we propose a novel file system, RAFS, to bring about good performance of PCM, which is built in the embedded platform. We attach PCM chips to the memory bus and build RAFS on the physical address space. In the proposed file system, we simplify traditional system architecture to eliminate block-related operations and layers. Furthermore, we adopt memory mapping and bypassed page cache to reduce copy overhead between the process address space and storage device. XIP mechanisms are also supported in RAFS. To the best of our knowledge, we are among the first to implement file system on real PCM chips. We have analyzed and evaluated its performance with IOZONE benchmark tools. Our experimental results show that the RAFS on PCM outperforms Ext4fs on SDRAM with small record lengths. Based on DRAM, RAFS is significantly faster than Ext4fs by 18% to 250%.

  9. Simulation Control Graphical User Interface Logging Report

    NASA Technical Reports Server (NTRS)

    Hewling, Karl B., Jr.

    2012-01-01

    One of the many tasks of my project was to revise the code of the Simulation Control Graphical User Interface (SIM GUI) to enable logging functionality to a file. I was also tasked with developing a script that directed the startup and initialization flow of the various LCS software components. This makes sure that a software component will not spin up until all the appropriate dependencies have been configured properly. Also I was able to assist hardware modelers in verifying the configuration of models after they have been upgraded to a new software version. I developed some code that analyzes the MDL files to determine if any error were generated due to the upgrade process. Another one of the projects assigned to me was supporting the End-to-End Hardware/Software Daily Tag-up meeting.

  10. EVALUATED NUCLEAR STRUCTURE DATA FILE AND RELATED PRODUCTS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TULI,J.K.

    The Evaluated Nuclear Structure Data File (ENSDF) is a leading resource for the experimental nuclear data. It is maintained and distributed by the National Nuclear Data Center, Brookhaven National Laboratory. The file is mainly contributed to by an international network of evaluators under the auspice of the International Atomic Energy Agency. The ENSDF is updated, generally by mass number, i.e., evaluating together all isobars for a given mass number. If, however, experimental activity in an isobaric chain is limited to a particular nuclide then only that nuclide is updated. The evaluations are published in the journal Nuclear Data Sheets, Academicmore » Press, a division of Elsevier.« less

  11. Nondestructive evaluation for sorting red maple logs

    Treesearch

    Xiping Wang; Robert J. Ross; David W. Green; Karl Englund; Michael Wolcott

    2000-01-01

    Existing log grading procedures in the United States make only visual assessments of log quality. These procedures do not incorporate estimates of the modulus of elasticity (MOE) of logs. It is questionable whether the visual grading procedures currently used for logs adequately assess the potential quality of structural products manufactured from them, especially...

  12. Cambridge Crystallographic Data Centre. II. Structural Data File

    ERIC Educational Resources Information Center

    Allen, F. H.; And Others

    1973-01-01

    The Cambridge Crystallographic Data Centre is concerned with the retrieval, evaluation, synthesis, and dissemination of structural data obtained by diffraction methods. This article (Part I is EJ053033) describes the work of the center and deals with the organization and maintenance of a computerized file of numeric crystallographic structural…

  13. Interpretation of well logs in a carbonate aquifer

    USGS Publications Warehouse

    MacCary, L.M.

    1978-01-01

    This report describes the log analysis of the Randolph and Sabial core holes in the Edwards aquifer in Texas, with particular attention to the principles that can be applied generally to any carbonate system. The geologic and hydrologic data were obtained during the drilling of the two holes, from extensive laboratory analysis of the cores, and from numerous geophysical logs run in the two holes. Some logging methods are inherently superiors to others for the analysis of limestone and dolomite aquifers. Three such systems are the dentistry, neutron, and acoustic-velocity (sonic) logs. Most of the log analysis described here is based on the interpretation of suites of logs from these three systems. In certain instances, deeply focused resistivity logs can be used to good advantage in carbonate rock studies; this technique is used to computer the water resistivity in the Randolph core hole. The rocks penetrated by the Randolph core hole are typical of those carbonates that have undergone very little solution by recent ground-water circulation. There are few large solutional openings; the water is saline; and the rocks are dark, dolomitic, have pore space that is interparticle or intercrystalline, and contain unoxidized organic material. The total porosity of rocks in the saline zone is higher than that of rocks in the fresh-water aquifer; however, the intrinsic permeability is much less in the saline zone because there are fewer large solutional openings. The Sabinal core hole penetrates a carbonate environment that has experienced much solution by ground water during recent geologic time. The rocks have high secondary porosities controlled by sedimentary structures within the rock; the water is fresh; and the dominant rock composition is limestone. The relative percentages of limestone and dolomite, the average matrix (grain) densities of the rock mixtures , and the porosity of the rock mass can be calculated from density, neutron, and acoustic logs. 
With supporting

  14. 78 FR 40474 - Sustaining Power Solutions LLC; Supplemental Notice That Initial Market-Based Rate Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-05

    .... Select the eFiling link to log on and submit the intervention or protests. Persons unable to file... to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE... facilitate electronic service, persons with Internet access who will eFile a document and/or be listed as a...

  15. Logging concessions enable illegal logging crisis in the Peruvian Amazon.

    PubMed

    Finer, Matt; Jenkins, Clinton N; Sky, Melissa A Blue; Pine, Justin

    2014-04-17

    The Peruvian Amazon is an important arena in global efforts to promote sustainable logging in the tropics. Despite recent efforts to achieve sustainability, such as provisions in the US-Peru Trade Promotion Agreement, illegal logging continues to plague the region. We present evidence that Peru's legal logging concession system is enabling the widespread illegal logging via the regulatory documents designed to ensure sustainable logging. Analyzing official government data, we found that 68.3% of all concessions supervised by authorities were suspected of major violations. Of the 609 total concessions, nearly 30% have been cancelled for violations and we expect this percentage to increase as investigations continue. Moreover, the nature of the violations indicate that the permits associated with legal concessions are used to harvest trees in unauthorized areas, thus threatening all forested areas. Many of the violations pertain to the illegal extraction of CITES-listed timber species outside authorized areas. These findings highlight the need for additional reforms.

  16. Logging Concessions Enable Illegal Logging Crisis in the Peruvian Amazon

    PubMed Central

    Finer, Matt; Jenkins, Clinton N.; Sky, Melissa A. Blue; Pine, Justin

    2014-01-01

    The Peruvian Amazon is an important arena in global efforts to promote sustainable logging in the tropics. Despite recent efforts to achieve sustainability, such as provisions in the US–Peru Trade Promotion Agreement, illegal logging continues to plague the region. We present evidence that Peru's legal logging concession system is enabling the widespread illegal logging via the regulatory documents designed to ensure sustainable logging. Analyzing official government data, we found that 68.3% of all concessions supervised by authorities were suspected of major violations. Of the 609 total concessions, nearly 30% have been cancelled for violations and we expect this percentage to increase as investigations continue. Moreover, the nature of the violations indicate that the permits associated with legal concessions are used to harvest trees in unauthorized areas, thus threatening all forested areas. Many of the violations pertain to the illegal extraction of CITES-listed timber species outside authorized areas. These findings highlight the need for additional reforms. PMID:24743552

  17. Logging Concessions Enable Illegal Logging Crisis in the Peruvian Amazon

    NASA Astrophysics Data System (ADS)

    Finer, Matt; Jenkins, Clinton N.; Sky, Melissa A. Blue; Pine, Justin

    2014-04-01

    The Peruvian Amazon is an important arena in global efforts to promote sustainable logging in the tropics. Despite recent efforts to achieve sustainability, such as provisions in the US-Peru Trade Promotion Agreement, illegal logging continues to plague the region. We present evidence that Peru's legal logging concession system is enabling the widespread illegal logging via the regulatory documents designed to ensure sustainable logging. Analyzing official government data, we found that 68.3% of all concessions supervised by authorities were suspected of major violations. Of the 609 total concessions, nearly 30% have been cancelled for violations and we expect this percentage to increase as investigations continue. Moreover, the nature of the violations indicate that the permits associated with legal concessions are used to harvest trees in unauthorized areas, thus threatening all forested areas. Many of the violations pertain to the illegal extraction of CITES-listed timber species outside authorized areas. These findings highlight the need for additional reforms.

  18. The Feasibility of Using Cluster Analysis to Examine Log Data from Educational Video Games. CRESST Report 790

    ERIC Educational Resources Information Center

    Kerr, Deirdre; Chung, Gregory K. W. K.; Iseli, Markus R.

    2011-01-01

    Analyzing log data from educational video games has proven to be a challenging endeavor. In this paper, we examine the feasibility of using cluster analysis to extract information from the log files that is interpretable in both the context of the game and the context of the subject area. If cluster analysis can be used to identify patterns of…
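The clustering approach the report investigates can be sketched with a tiny k-means over log-derived features. This is a hedged illustration only: the feature names (per-student move count and error rate) and values are invented, and the real study's feature extraction from game log files is omitted.

```python
def kmeans(points, k, iters=20):
    """Tiny k-means over 2-D feature vectors (e.g. per-student move
    count and error rate distilled from game log files)."""
    centers = points[:k]  # simple deterministic initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # recompute each center as the mean of its cluster
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# two well-separated behaviour groups of invented (moves, errors) vectors
pts = [(10, 1), (11, 2), (12, 1), (50, 9), (52, 10), (51, 8)]
centers, clusters = kmeans(pts, 2)
```

Interpreting the resulting clusters in both the game context and the subject-area context, as the report proposes, is the hard part; the arithmetic above is the easy part.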

  19. SU-F-T-465: Two Years of Radiotherapy Treatments Analyzed Through MLC Log Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Defoor, D; Kabat, C; Papanikolaou, N

    Purpose: To present treatment statistics of a Varian Novalis Tx using more than 90,000 Varian Dynalog files collected over the past 2 years. Methods: Varian Dynalog files are recorded for every patient treated on our Varian Novalis Tx. The files are collected and analyzed daily to check interfraction agreement of treatment deliveries. This is accomplished by creating fluence maps from the data contained in the Dynalog files. From the Dynalog files we have also compiled statistics for treatment delivery times, MLC errors, gantry errors, and collimator errors. Results: The mean treatment time for VMAT patients was 153 ± 86 seconds, while the mean treatment time for step & shoot was 256 ± 149 seconds. Patients' treatment times varied by 0.4% over their treatment course for VMAT and 0.5% for step & shoot. The average field sizes were 40 cm² and 26 cm² for VMAT and step & shoot, respectively. VMAT beams contained an average overall leaf travel of 34.17 meters, and step & shoot beams averaged less than half of that at 15.93 meters. When comparing planned and delivered fluence maps generated using the Dynalog files, VMAT plans showed an average gamma passing percentage of 99.85 ± 0.47. Step & shoot plans showed an average gamma passing percentage of 97.04 ± 0.04. 5.3% of beams contained an MLC error greater than 1 mm and 2.4% had an error greater than 2 mm. The mean gantry speed for VMAT plans was 1.01 degrees/s with a maximum of 6.5 degrees/s. Conclusion: Varian Dynalog files are useful for monitoring machine performance and treatment parameters. The Dynalog files have shown that the performance of the Novalis Tx is consistent over the course of a patient's treatment, with only slight variations in patient treatment times and a low rate of MLC errors.
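The per-technique aggregation described above can be sketched as follows. This is a hedged illustration: the record format (technique, delivery time, maximum MLC error) and the values are invented, and the actual parsing of Dynalog files into such records is omitted.

```python
from statistics import mean, stdev

# Hypothetical per-beam records distilled from Dynalog files:
# (technique, delivery_time_s, max_mlc_error_mm)
records = [
    ("VMAT", 150.0, 0.6), ("VMAT", 162.0, 1.2), ("VMAT", 148.0, 0.4),
    ("step&shoot", 240.0, 0.8), ("step&shoot", 270.0, 2.3),
]

def summarize(records, technique):
    """Mean/SD delivery time and MLC-error rate for one technique."""
    times = [t for tech, t, _ in records if tech == technique]
    errs = [e for tech, _, e in records if tech == technique]
    return {
        "n": len(times),
        "mean_time_s": mean(times),
        "sd_time_s": stdev(times) if len(times) > 1 else 0.0,
        "pct_mlc_error_gt_1mm": 100.0 * sum(e > 1.0 for e in errs) / len(errs),
    }

vmat = summarize(records, "VMAT")
```

Running the same summary daily over the accumulated records is what makes the interfraction-consistency check in the abstract cheap to automate.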

  20. A Patient Record-Filing System for Family Practice

    PubMed Central

    Levitt, Cheryl

    1988-01-01

    The efficient storage and easy retrieval of quality records are a central concern of good family practice. Many physicians starting out in practice have difficulty choosing a practical and lasting system for storing their records. Some who have established practices are installing computers in their offices and finding that their filing systems are worn, outdated, and incompatible with computerized systems. This article describes a new filing system installed simultaneously with a new computer system in a family-practice teaching centre. The approach adopted solved all identifiable problems and is applicable in family practices of all sizes.

  1. The Future of the Andrew File System

    ScienceCinema

    Brashear, Derrick; Altman, Jeffrey

    2018-05-25

    The talk will discuss the ten operational capabilities that have made AFS unique in the distributed file system space and how these capabilities are being expanded upon to meet the needs of the 21st century. Derrick Brashear and Jeffrey Altman will present a technical road map of new features and technical innovations that are under development by the OpenAFS community and Your File System, Inc., funded by a U.S. Department of Energy Small Business Innovative Research grant. The talk will end with a comparison of AFS to its modern-day competitors.

  2. The Future of the Andrew File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brashear, Derrick; Altman, Jeffrey

    2011-02-23

    The talk will discuss the ten operational capabilities that have made AFS unique in the distributed file system space and how these capabilities are being expanded upon to meet the needs of the 21st century. Derrick Brashear and Jeffrey Altman will present a technical road map of new features and technical innovations that are under development by the OpenAFS community and Your File System, Inc., funded by a U.S. Department of Energy Small Business Innovative Research grant. The talk will end with a comparison of AFS to its modern-day competitors.

  3. Storing files in a parallel computing system based on user or application specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.

    2016-03-29

    Techniques are provided for storing files in a parallel computing system based on a user-specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored; and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.

  4. Effects of logging and recruitment on community phylogenetic structure in 32 permanent forest plots of Kampong Thom, Cambodia

    PubMed Central

    Toyama, Hironori; Kajisa, Tsuyoshi; Tagane, Shuichiro; Mase, Keiko; Chhang, Phourin; Samreth, Vanna; Ma, Vuthy; Sokh, Heng; Ichihashi, Ryuji; Onoda, Yusuke; Mizoue, Nobuya; Yahara, Tetsukazu

    2015-01-01

    Ecological communities including tropical rainforest are rapidly changing under various disturbances caused by increasing human activities. Recently in Cambodia, illegal logging and clear-felling for agriculture have been increasing. Here, we study the effects of logging, mortality and recruitment of plot trees on phylogenetic community structure in 32 plots in Kampong Thom, Cambodia. Each plot was 0.25 ha; 28 plots were established in primary evergreen forests and four were established in secondary dry deciduous forests. Measurements were made in 1998, 2000, 2004 and 2010, and logging, recruitment and mortality of each tree were recorded. We estimated phylogeny using rbcL and matK gene sequences and quantified phylogenetic α and β diversity. Within communities, logging decreased phylogenetic diversity, and increased overall phylogenetic clustering and terminal phylogenetic evenness. Between communities, logging increased phylogenetic similarity between evergreen and deciduous plots. On the other hand, recruitment had opposite effects both within and between communities. The observed patterns can be explained by environmental homogenization under logging. Logging is biased to particular species and larger diameter at breast height, and forest patrol has been effective in decreasing logging. PMID:25561669

  5. Effects of logging and recruitment on community phylogenetic structure in 32 permanent forest plots of Kampong Thom, Cambodia.

    PubMed

    Toyama, Hironori; Kajisa, Tsuyoshi; Tagane, Shuichiro; Mase, Keiko; Chhang, Phourin; Samreth, Vanna; Ma, Vuthy; Sokh, Heng; Ichihashi, Ryuji; Onoda, Yusuke; Mizoue, Nobuya; Yahara, Tetsukazu

    2015-02-19

    Ecological communities including tropical rainforest are rapidly changing under various disturbances caused by increasing human activities. Recently in Cambodia, illegal logging and clear-felling for agriculture have been increasing. Here, we study the effects of logging, mortality and recruitment of plot trees on phylogenetic community structure in 32 plots in Kampong Thom, Cambodia. Each plot was 0.25 ha; 28 plots were established in primary evergreen forests and four were established in secondary dry deciduous forests. Measurements were made in 1998, 2000, 2004 and 2010, and logging, recruitment and mortality of each tree were recorded. We estimated phylogeny using rbcL and matK gene sequences and quantified phylogenetic α and β diversity. Within communities, logging decreased phylogenetic diversity, and increased overall phylogenetic clustering and terminal phylogenetic evenness. Between communities, logging increased phylogenetic similarity between evergreen and deciduous plots. On the other hand, recruitment had opposite effects both within and between communities. The observed patterns can be explained by environmental homogenization under logging. Logging is biased to particular species and larger diameter at breast height, and forest patrol has been effective in decreasing logging. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  6. Configuration Management File Manager Developed for Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    1997-01-01

    One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.

  7. Balloon logging with the inverted skyline

    NASA Technical Reports Server (NTRS)

    Mosher, C. F.

    1975-01-01

    There is a gap in aerial logging techniques that has to be filled. A simple, safe, sizeable system needs to be developed before aerial logging becomes effective and accepted in the logging industry. This paper presents such a system, designed on simple principles with realistic cost and ecological benefits.

  8. Comparison of planted soil infiltration systems for treatment of log yard runoff.

    PubMed

    Hedmark, Asa; Scholz, Miklas; Aronsson, Par; Elowson, Torbjorn

    2010-07-01

    Treatment of log yard runoff is required to avoid contamination of receiving watercourses. The research aim was to assess if infiltration of log yard runoff through planted soil systems is successful and if different plant species affect the treatment performance at a field-scale experimental site in Sweden (2005 to 2007). Contaminated runoff from the log yard of a sawmill was infiltrated through soil planted with Alnus glutinosa (L.) Gärtner (common alder), Salix schwerinii X viminalis (willow variety "Gudrun"), Lolium perenne (L.) (rye grass), and Phalaris arundinacea (L.) (reed canary grass). The study concluded that there were no treatment differences when comparing the four different plants with each other, and there also were no differences between the tree and the grass species. Furthermore, the infiltration treatment was effective in reducing total organic carbon (55%) and total phosphorus (45%) concentrations in the runoff, even when the loads on the infiltration system increased from year to year.

  9. Comparative evaluation of effect of rotary and reciprocating single-file systems on pericervical dentin: A cone-beam computed tomography study.

    PubMed

    Zinge, Priyanka Ramdas; Patil, Jayaprakash

    2017-01-01

    The aim of this study is to evaluate and compare the effect of the OneShape and Neolix rotary single-file systems and the WaveOne and Reciproc reciprocating single-file systems on pericervical dentin (PCD) using cone-beam computed tomography (CBCT). A total of 40 freshly extracted mandibular premolars were collected and divided into two groups: Group A, rotary (A1, Neolix; A2, OneShape) and Group B, reciprocating (B1, WaveOne; B2, Reciproc). Preoperative scans of each tooth were taken, followed by conventional access cavity preparation and working length determination with a #10 K-file. Instrumentation of the canal was done according to the respective file system, and postinstrumentation CBCT scans of the teeth were obtained. Slices 90 μm thick were obtained 4 mm apical and coronal to the cementoenamel junction. The PCD thickness was calculated as the shortest distance from the canal outline to the closest adjacent root surface, measured on four surfaces (facial, lingual, mesial, and distal) for all groups in the two scans. There was no significant difference between rotary and reciprocating single-file systems in their effect on PCD, but Group B2 showed the most significant loss of tooth structure on the mesial, lingual, and distal surfaces (P < 0.05). The Reciproc single-file system removes more PCD than the other experimental groups, whereas the Neolix single-file system had the least effect on PCD.

  10. The Spider Center Wide File System: From Concept to Reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shipman, Galen M; Dillow, David A; Oral, H Sarp

    2009-01-01

    The Leadership Computing Facility (LCF) at Oak Ridge National Laboratory (ORNL) has a diverse portfolio of computational resources ranging from a petascale XT4/XT5 simulation system (Jaguar) to numerous other systems supporting development, visualization, and data analytics. In order to support the vastly different I/O needs of these systems, Spider, a Lustre-based center-wide file system, was designed and deployed to provide over 240 GB/s of aggregate throughput with over 10 petabytes of formatted capacity. A multi-stage InfiniBand network, dubbed the Scalable I/O Network (SION), with over 889 GB/s of bisectional bandwidth was deployed as part of Spider to provide connectivity to our simulation, development, visualization, and other platforms. To our knowledge, at the time of writing, Spider is the largest and fastest POSIX-compliant parallel file system in production. This paper details the overall architecture of the Spider system, challenges in deploying and initially testing a file system of this scale, and novel solutions to these challenges, which offer key insights into future file system design.

  11. 77 FR 43592 - System Energy Resources, Inc.; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-25

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. EL12-52-001] System Energy Resources, Inc.; Notice of Filing Take notice that on July 18, 2012, System Energy Resources, Inc. (System Energy Resources), submitted a supplement to its petition filed on March 28, 2012 (March 28 petition...

  12. Recovery of forest structure and spectral properties after selective logging in lowland Bolivia.

    PubMed

    Broadbent, Eben N; Zarin, Daniel J; Asner, Gregory P; Peña-Claros, Marielos; Cooper, Amanda; Littell, Ramon

    2006-06-01

    Effective monitoring of selective logging from remotely sensed data requires an understanding of the spatial and temporal thresholds that constrain the utility of those data, as well as the structural and ecological characteristics of forest disturbances that are responsible for those constraints. Here we assess those thresholds and characteristics within the context of selective logging in the Bolivian Amazon. Our study combined field measurements of the spatial and temporal dynamics of felling gaps and skid trails ranging from <1 to 19 months following reduced-impact logging in a forest in lowland Bolivia with remote-sensing measurements from simultaneous monthly ASTER satellite overpasses. A probabilistic spectral mixture model (AutoMCU) was used to derive per-pixel fractional cover estimates of photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV), and soil. Results were compared with the normalized difference vegetation index (NDVI). The forest studied had considerably lower basal area and harvest volumes than logged sites in the Brazilian Amazon where similar remote-sensing analyses have been performed. Nonetheless, individual felling-gap area was positively correlated with canopy openness, percentage liana coverage, rates of vegetation regrowth, and height of remnant NPV. Both liana growth and NPV occurred primarily in the crown zone of the felling gap, whereas exposed soil was limited to the trunk zone of the gap. In felling gaps >400 m², NDVI, and the PV and NPV fractions, were distinguishable from unlogged forest values for up to six months after logging; felling gaps <400 m² were distinguishable for up to three months after harvest, but we were entirely unable to distinguish skid trails from our analysis of the spectral data.

  13. Storing files in a parallel computing system based on user-specified parser function

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
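The parser-driven storage flow described above can be sketched in a few lines. This is a hedged stand-in, not the patented implementation: the file-name convention (`.ckpt`), the semantic requirement, the metadata fields, and the dict-based "storage node" are all invented for illustration.

```python
def checkpoint_parser(name, data):
    """Hypothetical application-supplied parser: keep only checkpoint
    files and extract a simple metadata dict for each one stored."""
    if not name.endswith(".ckpt"):   # semantic requirement not met
        return None                  # signal: do not store this file
    return {"size": len(data), "step": name.split(".")[0]}

def store(files, parser, node):
    """Store only the files the parser accepts, alongside the
    metadata it extracted (later usable for searching)."""
    for name, data in files.items():
        meta = parser(name, data)
        if meta is not None:
            node[name] = {"data": data, "meta": meta}
    return node

node = store({"0001.ckpt": b"abc", "debug.log": b"x" * 10},
             checkpoint_parser, {})
```

The design point the abstract makes is that the application, not the file system, decides what is worth persisting and what metadata travels with it.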

  14. Money-center structures in dynamic banking systems

    NASA Astrophysics Data System (ADS)

    Li, Shouwei; Zhang, Minghui

    2016-10-01

    In this paper, we propose a dynamic model for banking systems based on the description of balance sheets. It generates some features identified through empirical analysis. Through simulation analysis of the model, we find that banking systems have the feature of money-center structures, that bank asset distributions are power-law distributions, and that contract size distributions are log-normal distributions.

  15. 78 FR 34371 - Longfellow Wind, LLC: Supplemental Notice That Initial Market-Based Rate Filing Includes Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-07

    .... Select the eFiling link to log on and submit the intervention or protests. Persons unable to file... to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE... facilitate electronic service, persons with Internet access who will eFile a document and/or be listed as a...

  16. Evaluation of a new filing system's ability to maintain canal morphology.

    PubMed

    Thompson, Matthew; Sidow, Stephanie J; Lindsey, Kimberly; Chuang, Augustine; McPherson, James C

    2014-06-01

    The manufacturer of the Hyflex CM endodontic files claims the files remain centered within the canal, and if unwound during treatment, they will regain their original shape after sterilization. The purpose of this study was to evaluate and compare the canal centering ability of the Hyflex CM and the ProFile ISO filing systems after repeated uses in simulated canals, followed by autoclaving. Sixty acrylic blocks with a canal curvature of 45° were stained with methylene blue, photographed, and divided into 2 groups, H (Hyflex CM) and P (ProFile ISO). The groups were further subdivided into 3 subgroups: H1, H2, H3; P1, P2, P3 (n = 10). Groups H1 and P1 were instrumented to 40 (.04) with the respective file system. Used files were autoclaved for 26 minutes at 126°C. After sterilization, the files were used to instrument groups H2 and P2. The same sterilization and instrumentation procedure was repeated for groups H3 and P3. Post-instrumentation digital images were taken and superimposed over the pre-instrumentation images. Changes in the location of the center of the canal at predetermined reference points were recorded and compared within subgroups and between filing systems. Statistical differences in intergroup and intragroup transportation measures were analyzed by using the Kruskal-Wallis analysis of variance of ranks with the Bonferroni post hoc test. There was a difference between Hyflex CM and ProFile ISO groups, although it was not statistically significant. Intragroup differences for both Hyflex CM and ProFile ISO groups were not significant (P < .05). The Hyflex CM and ProFile ISO files equally maintained the original canal's morphology after 2 sterilization cycles. Published by Elsevier Inc.

  17. Construction of the radiation oncology teaching files system for charged particle radiotherapy.

    PubMed

    Masami, Mukai; Yutaka, Ando; Yasuo, Okuda; Naoto, Takahashi; Yoshihisa, Yoda; Hiroshi, Tsuji; Tadashi, Kamada

    2013-01-01

    Our hospital started the charged particle therapy since 1996. New institutions for charged particle therapy are planned in the world. Our hospital are accepting many visitors from those newly planned medical institutions and having many opportunities to provide with the training to them. Based upon our experiences, we have developed the radiation oncology teaching files system for charged particle therapy. We adopted the PowerPoint of Microsoft as a basic framework of our teaching files system. By using our export function of the viewer any physician can create teaching files easily and effectively. Now our teaching file system has 33 cases for clinical and physics contents. We expect that we can improve the safety and accuracy of charged particle therapy by using our teaching files system substantially.

  18. 78 FR 54888 - Guzman Power Markets, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-06

    ... the eFiling link to log on and submit the intervention or protests. Persons unable to file... protest should file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC... electronic service, persons with Internet access who will eFile a document and/or be listed as a contact for...

  19. 78 FR 28835 - Salton Sea Power Generation Company; Supplemental Notice That Initial Market-Based Rate Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    .... Select the eFiling link to log on and submit the intervention or protests. Persons unable to file... intervene or to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE... facilitate electronic service, persons with Internet access who will eFile a document and/or be listed as a...

  20. Life cycle performances of log wood applied for soil bioengineering constructions

    NASA Astrophysics Data System (ADS)

    Kalny, Gerda; Strauss-Sieberth, Alexandra; Strauss, Alfred; Rauch, Hans Peter

    2016-04-01

    Nowadays there is a high demand on engineering solutions considering not only technical aspects but also ecological and aesthetic values. Soil bioengineering is a construction technique that uses biological components for hydraulic and civil engineering solutions. Soil bioengineering solutions are based on the application of living plants and other auxiliary materials including among others log wood. This kind of construction material supports the soil bioengineering system as long as the plants as living construction material overtake the stability function. Therefore it is important to know about the durability and the degradation process of the wooden logs to retain the integral performance of a soil bio engineering system. These aspects will be considered within the framework of the interdisciplinary research project „ELWIRA Plants, wood, steel and concrete - life cycle performances as construction materials". Therefore field investigations on soil bioengineering construction material, specifically European Larch wood logs, of different soil bioengineering structures at the river Wien have been conducted. The drilling resistance as a parameter for particular material characteristics of selected logs was measured and analysed. The drilling resistance was measured with a Rinntech Resistograph instrument at different positions of the wooden logs, all surrounded with three different backfills: Fully surrounded with air, with earth contact on one side and near the water surface in wet-dry conditions. The age of the used logs ranges from one year old up to 20 year old. Results show progress of the drilling resistance throughout the whole cross section as an indicator to assess soil bioengineering construction material. Logs surrounded by air showed a higher drilling resistance than logs with earth contact and the ones exposed to wet-dry conditions. 
Hence the functional capability of wooden logs were analysed and discussed in terms of different levels of degradation

  1. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets, MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options to retrieve training image data: (1) PNG-formatted image files on a local file system; (2) pushing pixel arrays from image files into a single HDF5 file on a local file system; (3) in-memory arrays to hold the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge-tree-based key-value store; and (5) loading the training data into LMDB, a B+-tree-based key-value store. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value storage based back-ends. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements on training time, this study provides in-depth analysis of the cause of performance advantages/disadvantages of each back-end to train deep neural networks. We envision the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
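The key-value idea the study evaluates can be sketched with the standard-library dbm module standing in for LMDB/LevelDB. This is a hedged sketch only: the key naming scheme and byte payloads are invented, and real training code would store encoded pixel arrays rather than dummy bytes.

```python
import dbm
import os
import tempfile

# Each training image becomes one key-value pair in a single database
# file, so a read is a single lookup instead of an open/read/close per
# sample, which is the advantage the study measures over per-image files.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "train_db")

with dbm.open(path, "c") as db:
    for i in range(100):
        # dummy "pixel" payloads; a real pipeline would store encoded arrays
        db[f"img-{i:05d}".encode()] = bytes([i % 256]) * 32

with dbm.open(path, "r") as db:
    sample = db[b"img-00042"]   # one lookup retrieves one training sample
```

LMDB additionally offers memory-mapped zero-copy reads and concurrent readers, which the stdlib stand-in does not model.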

  2. Effect of selective logging on genetic diversity and gene flow in Cariniana legalis sampled from a cacao agroforestry system.

    PubMed

    Leal, J B; Santos, R P; Gaiotto, F A

    2014-01-28

    The fragments of the Atlantic Forest of southern Bahia have a long history of intense logging and selective cutting. Some tree species, such as jequitibá rosa (Cariniana legalis), have experienced a reduction in their populations with respect to both area and density. To evaluate the possible effects of selective logging on genetic diversity, gene flow, and spatial genetic structure, 51 C. legalis individuals were sampled, representing the total remaining population from the cacao agroforestry system. A total of 120 alleles were observed from the 11 microsatellite loci analyzed. The average observed heterozygosity (0.486) was less than the expected heterozygosity (0.721), indicating a loss of genetic diversity in this population. A high fixation index (FIS = 0.325) was found, which is possibly due to a reduction in population size, resulting in increased mating among relatives. The maximum (1055 m) and minimum (0.095 m) distances traveled by pollen or seeds were inferred based on paternity tests. We found 36.84% of unique parents among all sampled seedlings. The progenitors of the remaining seedlings (63.16%) were most likely out of the sampled area. Positive and significant spatial genetic structure was identified in this population among classes 10 to 30 m away with an average coancestry coefficient between pairs of individuals of 0.12. These results suggest that the agroforestry system of cacao cultivation is contributing to maintaining levels of diversity and gene flow in the studied population, thus minimizing the effects of selective logging.
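The reported fixation index can be checked directly from the heterozygosity values given in the abstract, using the standard definition F_IS = 1 - Ho/He:

```python
# Worked check of the fixation index for the C. legalis population,
# using the heterozygosities reported in the abstract.
Ho = 0.486            # observed heterozygosity
He = 0.721            # expected heterozygosity
F_IS = 1 - Ho / He    # standard inbreeding coefficient definition
# F_IS comes out near 0.326, matching the reported 0.325 up to
# rounding of the input heterozygosities.
```

A positive F_IS of this size indicates a heterozygote deficit, consistent with the abstract's interpretation of increased mating among relatives after the population reduction.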

  3. PVFS 2000: An operational parallel file system for Beowulf

    NASA Technical Reports Server (NTRS)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The architecture comprises server and client components. BMI: BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking and provides mechanisms for optimizations, including pinning user buffers. Currently, TCP/IP and GM (Myrinet) modules have been implemented. Trove: Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms, including native files, raw disk partitions, SQL, and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.

  4. Development of a 3D log sawing optimization system for small sawmills in central Appalachia, US

    Treesearch

    Wenshu Lin; Jingxin Wang; Edward Thomas

    2011-01-01

    A 3D log sawing optimization system was developed to perform log generation, opening face determination, sawing simulation, and lumber grading using 3D modeling techniques. Heuristic and dynamic programming algorithms were used to determine opening face and grade sawing optimization. Positions and shapes of internal log defects were predicted using a model developed by...
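The value-optimization core of such a system can be illustrated, in heavily simplified one-dimensional form, by the classic cut-to-maximize-value dynamic program. This is a hedged sketch: the price table is invented, and the actual system optimizes over 3-D defect positions, opening faces, and lumber grades rather than integer lengths.

```python
def best_sawing(length, price):
    """Dynamic program: best[l] is the maximum value obtainable from a
    log of integer length l, given a price table of cut lengths."""
    best = [0] * (length + 1)
    cuts = [0] * (length + 1)   # first piece of an optimal solution
    for l in range(1, length + 1):
        for piece, p in price.items():
            if piece <= l and best[l - piece] + p > best[l]:
                best[l] = best[l - piece] + p
                cuts[l] = piece
    # recover one optimal cutting pattern by following the choices
    pattern, l = [], length
    while l > 0 and cuts[l]:
        pattern.append(cuts[l])
        l -= cuts[l]
    return best[length], pattern

# hypothetical price table: piece length -> value
value, pattern = best_sawing(9, {1: 1, 2: 5, 3: 8, 4: 9})
```

The heuristic part of the real system handles what the DP cannot: the combinatorics of rotating the log and placing the opening face around predicted internal defects.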

  5. Integration of QR codes into an anesthesia information management system for resident case log management.

    PubMed

    Avidan, Alexander; Weissman, Charles; Levin, Phillip D

    2015-04-01

    Quick response (QR) codes containing anesthesia syllabus data were introduced into an anesthesia information management system. The code was generated automatically at the conclusion of each case and was available for resident case logging using a smartphone or tablet. The goal of this study was to evaluate the use and usability/user-friendliness of such a system. Resident case logging practices were assessed prior to introducing the QR codes. QR code use and satisfaction among residents were reassessed at three and six months. Before QR code introduction, only 12/23 (52.2%) residents maintained a case log. Most of the remaining residents (9/23, 39.1%) expected to receive a case list from the anesthesia information management system database at the end of their residency. At three months and six months, 17/26 (65.4%) and 15/25 (60.0%) residents, respectively, were using the QR codes. Satisfaction was rated as very good or good. QR codes for residents' case logging with smartphones or tablets were successfully introduced into an anesthesia information management system and used by most residents. QR codes can be successfully implemented into medical practice to support data transfer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
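The data-transfer step behind such a workflow can be sketched as a compact, reversible payload that a QR generator would encode and a resident's app would decode after scanning. This is a hedged illustration: the case fields are invented, and the actual QR image generation (which requires an external imaging library) is omitted.

```python
import base64
import json

def make_payload(case):
    """Serialize a case record into a compact URL-safe string that a
    QR code generator (not shown) could encode."""
    raw = json.dumps(case, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode()

def read_payload(payload):
    """Inverse operation: what a logging app would do after scanning."""
    return json.loads(base64.urlsafe_b64decode(payload.encode()))

# hypothetical anesthesia case record
case = {"id": "A-1", "asa": 2, "technique": "GA", "airway": "LMA"}
payload = make_payload(case)
restored = read_payload(payload)
```

Keeping the payload self-describing is what lets the transfer work offline, with no query back to the information management system at scan time.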

  6. The SACLANTCEN Shallow-Water Transmission-Loss Data-Filing System.

    DTIC Science & Technology

    1980-10-01

    Hastrup, Ole F.; Akal, Tuncay; Parisotto, Arturo. The SACLANTCEN Shallow-Water Transmission-Loss Data-Filing System. SACLANTCEN Memorandum SM-141, SACLANT ASW Research Centre, La Spezia, Italy, October 1980. The memorandum was prepared within the SACLANTCEN programme of work.

  7. CLARET user's manual: Mainframe Logs. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frobose, R.H.

    1984-11-12

    CLARET (Computer Logging and RETrieval) is a stand-alone PDP 11/23 system that can support 16 terminals. It provides a forms-oriented front end by which operators enter online activity logs for the Lawrence Livermore National Laboratory's OCTOPUS computer network. The logs are stored on the PDP 11/23 disks for later retrieval, and hardcopy reports are generated both automatically and upon request. Online viewing of the current logs is provided to management. As each day's logs are completed, the information is automatically sent to a CRAY and included in an online database system. The terminal used for the CLARET system is a dual-port Hewlett Packard 2626 terminal that can be used as either the CLARET logging station or as an independent OCTOPUS terminal. Because this is a stand-alone system, it does not depend on the availability of the OCTOPUS network to run and, in the event of a power failure, can be brought up independently.

  8. Prediction of Compressional, Shear, and Stoneley Wave Velocities from Conventional Well Log Data Using a Committee Machine with Intelligent Systems

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2012-01-01

    Measurement of compressional, shear, and Stoneley wave velocities, carried out by dipole sonic imager (DSI) logs, provides invaluable data in geophysical interpretation, geomechanical studies, and hydrocarbon reservoir characterization. The present study proposes an improved methodology for making a quantitative formulation between conventional well logs and sonic wave velocities. First, sonic wave velocities were predicted from conventional well logs using artificial neural network, fuzzy logic, and neuro-fuzzy algorithms. Subsequently, a committee machine with intelligent systems was constructed by virtue of a hybrid genetic algorithm-pattern search technique, while the outputs of the artificial neural network, fuzzy logic, and neuro-fuzzy models were used as inputs of the committee machine. It is capable of improving the accuracy of the final prediction by integrating the outputs of the aforementioned intelligent systems. The hybrid genetic algorithm-pattern search tool, embodied in the structure of the committee machine, assigns a weight factor to each individual intelligent system, indicating its involvement in the overall prediction of DSI parameters. This methodology was implemented in the Asmari formation, which is the major carbonate reservoir rock of Iranian oil fields. A group of 1,640 data points was used to construct the intelligent model, and a group of 800 data points was employed to assess the reliability of the proposed model. The results showed that the committee machine with intelligent systems performed more effectively than the individual intelligent systems performing alone.
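
    The weighted committee idea described above can be sketched briefly. The following is a hypothetical illustration, not the authors' implementation: the data are synthetic, and a simple coordinate pattern search stands in for the hybrid genetic algorithm-pattern search optimizer; it only shows how weight factors assigned to individual experts can lower the ensemble error.

```python
# Hypothetical committee-machine sketch: combine several "expert" model
# outputs with weights chosen by a simple coordinate pattern search
# (a stand-in for the paper's hybrid GA-pattern search). Data are synthetic.
import math

def committee(preds, w):
    """Weighted average of expert predictions (weights normalized)."""
    s = sum(w)
    return [sum(wi * p[j] for wi, p in zip(w, preds)) / s
            for j in range(len(preds[0]))]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def fit_weights(preds, target, iters=100, step=0.1):
    """Coordinate pattern search minimizing the committee's MSE."""
    w = [1.0] * len(preds)
    best = mse(committee(preds, w), target)
    for _ in range(iters):
        improved = False
        for i in range(len(w)):
            for delta in (step, -step):
                trial = list(w)
                trial[i] = max(1e-6, trial[i] + delta)
                err = mse(committee(preds, trial), target)
                if err < best:
                    w, best, improved = trial, err, True
        if not improved:
            step *= 0.5          # refine the search when no move helps
    s = sum(w)
    return [wi / s for wi in w], best

# three toy experts approximating a sine target
xs = [i / 49 for i in range(50)]
target = [math.sin(2 * math.pi * x) for x in xs]
preds = [
    [t + 0.1 for t in target],                 # biased expert
    [0.8 * t for t in target],                 # attenuated expert
    [math.cos(2 * math.pi * x) for x in xs],   # poorly correlated expert
]
weights, err = fit_weights(preds, target)
print(weights, err)
```

    The search assigns each expert a normalized weight factor, mirroring the role the paper gives the GA-pattern search stage.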

  9. Grading options for western hemlock "pulpwood" logs from southeastern Alaska.

    Treesearch

    David W. Green; Kent A. McDonald; John Dramm; Kenneth Kilborn

    Properties and grade yield are estimated for structural lumber produced from No. 3, No. 4, and low-end No. 2 grade western hemlock logs of the type previously used primarily for the production of pulp chips. Estimates are given for production in the Structural Framing, Machine Stress Rating, and Laminating Stock grading systems. The information shows that significant...

  10. Lessons Learned in Deploying the World's Largest Scale Lustre File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillow, David A; Fuller, Douglas; Wang, Feiyi

    2010-01-01

    The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed that of our previously deployed systems by a factor of 6x (240 GB/sec) and 17x (10 Petabytes), respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service, alongside our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.

  11. 78 FR 70299 - Capacity Markets Partners, LLC; Supplemental Notice That Initial Market-Based Rate Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-25

    ... to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE... . To facilitate electronic service, persons with Internet access who will eFile a document and/or be...Registration link. Select the eFiling link to log on and submit the intervention or protests. Persons unable to...

  12. 78 FR 29366 - Wheelabrator Baltimore, LP; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-20

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... protest should file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  13. 78 FR 28833 - Ebensburg Power Company; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... protest should file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  14. 78 FR 68052 - Covanta Haverhill Association, LP; Supplemental Notice That Initial Market-Based Rate Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE... . To facilitate electronic service, persons with Internet access who will eFile a document and/or be...Registration link. Select the eFiling link to log on and submit the intervention or protests. Persons unable to...

  15. 78 FR 52913 - Allegany Generating Station LLC; Supplemental Notice That Initial Market-Based Rate Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-27

    ... to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE... . To facilitate electronic service, persons with Internet access who will eFile a document and/or be...Registration link. Select the eFiling link to log on and submit the intervention or protests. Persons unable to...

  16. Patterns of usage for a Web-based clinical information system.

    PubMed

    Chen, Elizabeth S; Cimino, James J

    2004-01-01

    Understanding how clinicians are using clinical information systems to assist with their everyday tasks is valuable to the system design and development process. Developers of such systems are interested in monitoring usage in order to make enhancements. System log files are rich resources for gaining knowledge about how the system is being used. We have analyzed the log files of our Web-based clinical information system (WebCIS) to obtain various usage statistics including which WebCIS features are frequently being used. We have also identified usage patterns, which convey how the user is traversing the system. We present our method and these results as well as describe how the results can be used to customize menus, shortcut lists, and patient reports in WebCIS and similar systems.
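
    As a sketch of this kind of log-file mining, the snippet below tallies feature usage and per-user transition patterns from an access log. The log format (timestamp, user, feature) is invented for illustration and is not WebCIS's actual format.

```python
# Toy log-file analysis: count feature usage and user-level transitions,
# the two kinds of statistics described in the abstract above.
# The three-column log format here is hypothetical.
from collections import Counter
from io import StringIO
import csv

LOG = StringIO("""\
2004-01-05T09:12,jsmith,lab_results
2004-01-05T09:13,jsmith,med_list
2004-01-05T09:15,kdoe,lab_results
2004-01-05T09:20,jsmith,lab_results
""")

feature_counts = Counter()     # how often each feature is used
transitions = Counter()        # (previous feature, next feature) per user
last_seen = {}

for timestamp, user, feature in csv.reader(LOG):
    feature_counts[feature] += 1
    if user in last_seen:                       # usage-pattern traversal
        transitions[(last_seen[user], feature)] += 1
    last_seen[user] = feature

print(feature_counts.most_common(1))   # most frequently used feature
print(transitions)
```

    Statistics like these could drive the customization of menus and shortcut lists the abstract mentions, by surfacing the most-used features first.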

  17. Characterizing structures on borehole images and logging data of the Nankai trough accretionary prism: new insights

    NASA Astrophysics Data System (ADS)

    Jurado, Maria Jose

    2016-04-01

    IODP has extensively used the D/V Chikyu to drill the Kumano portion of the Nankai Trough, including two well sites within the Kumano Basin. IODP Expeditions 338 and 348 drilled deep into the inner accretionary prism south of the Kii Peninsula, collecting a suite of LWD data, including natural gamma ray, electrical resistivity logs, and borehole images, suitable for characterizing structures (fractures and faults) inside the accretionary prism. Structural interpretation and analysis of logging-while-drilling data in the deep inner prism revealed intense deformation of a generally homogeneous lithology characterized by bedding that dips steeply (60-90°) to the NW, intersected by faults and fractures. Multiple phases of deformation are characterized. Borehole images and LWD data acquired over the last decade during previous NanTroSEIZE IODP Expeditions (314, 319) were also analyzed to investigate the internal geometries and structures of the Nankai Trough accretionary prism. This study focused mainly on the characterization of the different types of structures and their specific position within the accretionary prism. New structural constraints and methodologies, as well as a new approach to the study of active structures inside the prism, will be presented.

  18. MMTF-An efficient file format for the transmission, visualization, and analysis of macromolecular structures.

    PubMed

    Bradley, Anthony R; Rose, Alexander S; Pavelka, Antonín; Valasatava, Yana; Duarte, Jose M; Prlić, Andreas; Rose, Peter W

    2017-06-01

    Recent advances in experimental techniques have led to a rapid growth in the complexity, size, and number of macromolecular structures that are made available through the Protein Data Bank. This creates a challenge for macromolecular visualization and analysis. Macromolecular structure files, such as PDB or PDBx/mmCIF files, can be slow to transfer and parse, and hard to incorporate into third-party software tools. Here, we present a new binary and compressed data representation, the MacroMolecular Transmission Format (MMTF), as well as software implementations in several languages that have been developed around it, which address these issues. We describe the new format and its APIs and demonstrate that it is several times faster to parse, and about a quarter of the file size of, the current standard format, PDBx/mmCIF. As a consequence of the new data representation, it is now possible to visualize structures with millions of atoms in a web browser and to keep the whole PDB archive in memory or parse it within a few minutes on average computers, which opens up a new way of thinking about how to design and implement efficient algorithms in structural bioinformatics. The PDB archive is available in the MMTF file format through web services, with data updated on a weekly basis.
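
    The gain from a binary representation can be illustrated with a toy example. This is not the MMTF codec itself (MMTF defines its own typed, compressed encodings); it only shows why packed binary coordinates start smaller than fixed-width text before any compression is applied.

```python
# Toy illustration, not the MMTF format: compare a PDB-like fixed-width
# text encoding of atomic coordinates with a packed binary encoding.
import gzip
import random
import struct

random.seed(0)
atoms = [tuple(random.uniform(-50.0, 50.0) for _ in range(3))
         for _ in range(10_000)]

# PDB-like text: three 8-character fixed-width coordinate fields per atom
text = "".join(f"{x:8.3f}{y:8.3f}{z:8.3f}\n" for x, y, z in atoms).encode()

# binary: three little-endian 32-bit floats per atom (12 bytes vs 25)
binary = b"".join(struct.pack("<3f", x, y, z) for x, y, z in atoms)

print(len(text), len(binary))    # 250000 vs 120000 bytes
print(len(gzip.compress(text)))  # general-purpose compression shrinks both
```

    The binary form is also cheaper to parse, since each coordinate is a fixed-size field rather than text that must be scanned and converted.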

  19. MMTF—An efficient file format for the transmission, visualization, and analysis of macromolecular structures

    PubMed Central

    Pavelka, Antonín; Valasatava, Yana; Prlić, Andreas

    2017-01-01

    Recent advances in experimental techniques have led to a rapid growth in the complexity, size, and number of macromolecular structures that are made available through the Protein Data Bank. This creates a challenge for macromolecular visualization and analysis. Macromolecular structure files, such as PDB or PDBx/mmCIF files, can be slow to transfer and parse, and hard to incorporate into third-party software tools. Here, we present a new binary and compressed data representation, the MacroMolecular Transmission Format (MMTF), as well as software implementations in several languages that have been developed around it, which address these issues. We describe the new format and its APIs and demonstrate that it is several times faster to parse, and about a quarter of the file size of, the current standard format, PDBx/mmCIF. As a consequence of the new data representation, it is now possible to visualize structures with millions of atoms in a web browser and to keep the whole PDB archive in memory or parse it within a few minutes on average computers, which opens up a new way of thinking about how to design and implement efficient algorithms in structural bioinformatics. The PDB archive is available in the MMTF file format through web services, with data updated on a weekly basis. PMID:28574982

  20. ATLAS, an integrated structural analysis and design system. Volume 2: System design document

    NASA Technical Reports Server (NTRS)

    Erickson, W. J. (Editor)

    1979-01-01

    ATLAS is a structural analysis and design system, operational on the Control Data Corporation 6600/CYBER computers. The overall system design, the design of the individual program modules, and the routines in the ATLAS system library are described. The overall design is discussed in terms of system architecture, executive function, data base structure, user program interfaces and operational procedures. The program module sections include detailed code description, common block usage and random access file usage. The description of the ATLAS program library includes all information needed to use these general purpose routines.

  1. Variability in Accreditation Council for Graduate Medical Education Resident Case Log System practices among orthopaedic surgery residents.

    PubMed

    Salazar, Dane; Schiff, Adam; Mitchell, Erika; Hopkinson, William

    2014-02-05

    The Accreditation Council for Graduate Medical Education (ACGME) Resident Case Log System is designed to be a reflection of residents' operative volume and an objective measure of their surgical experience. All operative procedures and manipulations in the operating room, Emergency Department, and outpatient clinic are to be logged into the Resident Case Log System. Discrepancies in the log volumes between residents and residency programs often prompt scrutiny. However, it remains unclear whether such disparities truly represent differences in operative experience or whether they reflect inconsistent logging practices. The purpose of this study was to investigate individual recording practices among orthopaedic surgery residents prior to August 1, 2011. Orthopaedic surgery residents received a questionnaire on case log practices that was distributed through the Council of Orthopaedic Residency Directors list server. Respondents were asked to report anonymously on their recording practices in different clinical settings as well as the types of cases routinely logged. Hypothetical scenarios of common orthopaedic procedures were presented to investigate differences in the Current Procedural Terminology codes utilized. Two hundred and ninety-eight orthopaedic surgery residents completed the questionnaire; 37% were fifth-year residents, 22% were fourth-year residents, 18% were third-year residents, 15% were second-year residents, and 8% were first-year residents. Fifty-six percent of respondents reported routinely logging procedures performed in the Emergency Department or urgent care setting. Twenty-two percent of participants routinely logged procedures in the clinic or outpatient setting, 20% logged joint injections, and only 13% logged casts or splints applied in the office setting. There was substantial variability in the Current Procedural Terminology codes selected for the seven clinical scenarios. There has been a lack of standardization in case-logging...

  2. REPHLEX II: An information management system for the ARS Water Data Base

    NASA Astrophysics Data System (ADS)

    Thurman, Jane L.

    1993-08-01

    The REPHLEX II computer system is an on-line information management system which allows scientists, engineers, and other researchers to retrieve data from the ARS Water Data Base using asynchronous communications. The system features two phone lines handling baud rates from 300 to 2400, customized menus to facilitate browsing, help screens, direct access to information and data files, electronic mail processing, file transfers using the XMODEM protocol, and log-in procedures which capture information on new users, process passwords, and log activity for a permanent audit trail. The primary data base on the REPHLEX II system is the ARS Water Data Base which consists of rainfall and runoff data from experimental agricultural watersheds located in the United States.

  3. Stochastic Petri net analysis of a replicated file system

    NASA Technical Reports Server (NTRS)

    Bechta Dugan, Joanne; Ciardo, Gianfranco

    1989-01-01

    A stochastic Petri-net model of a replicated file system is presented for a distributed environment where replicated files reside on different hosts and a voting algorithm is used to maintain consistency. Witnesses, which simply record the status of the file but contain no data, can be used in addition to or in place of files to reduce overhead. A model sufficiently detailed to include file status (current or out-of-date), as well as failure and repair of hosts where copies or witnesses reside, is presented. The number of copies and witnesses is a parameter of the model. Two different majority protocols are examined, one where a majority of all copies and witnesses is necessary to form a quorum, and the other where only a majority of the copies and witnesses on operational hosts is needed. The latter, known as adaptive voting, is shown to increase file availability in most cases.
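
    The two protocols can be compared with a simple availability calculation. This is a sketch, not the paper's stochastic Petri-net model: each voting site is assumed up independently with probability p, and the adaptive case here ignores the copy-currency bookkeeping that the full model captures.

```python
# Sketch of the two majority protocols compared in the model above.
# n voting sites (copies or witnesses), each up independently with
# probability p. Static voting needs a majority of ALL n votes; in this
# simplified adaptive case, a majority of the operational sites' votes
# is automatic once any site is up (currency of copies is ignored).
from math import comb

def static_availability(n, p):
    """P(at least floor(n/2)+1 of the n sites are up)."""
    k = n // 2 + 1
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def adaptive_availability(n, p):
    """Simplified adaptive voting: available iff any site is up."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5):
    print(n, round(static_availability(n, 0.9), 5),
          round(adaptive_availability(n, 0.9), 5))
```

    Even in this crude form, the adaptive quorum dominates the static one, consistent with the paper's finding that adaptive voting increases file availability in most cases.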

  4. A Log-Euclidean polyaffine registration for articulated structures in medical images.

    PubMed

    Martín-Fernández, Miguel Angel; Martín-Fernández, Marcos; Alberola-López, Carlos

    2009-01-01

    In this paper we generalize the Log-Euclidean polyaffine registration framework of Arsigny et al. to deal with articulated structures. This framework has very useful properties, as it guarantees the invertibility of smooth geometric transformations. In articulated registration, a skeleton model is defined for rigid structures such as bones. The final transformation is affine for the bones and elastic for other tissues in the image. We extend Arsigny et al.'s method to deal with locally affine registration of pairs of wires. This enables this registration framework to deal with articulated structures. In this context, the design of the weighting functions, which merge the affine transformations defined for each pair of wires, has a great impact not only on the final result of the registration algorithm, but also on the invertibility of the global elastic transformation. Several experiments, using both synthetic images and hand radiographs, are also presented.

  5. Formalizing structured file services for the data storage and retrieval subsystem of the data management system for Spacestation Freedom

    NASA Technical Reports Server (NTRS)

    Jamsek, Damir A.

    1993-01-01

    A brief example of the use of formal methods techniques in the specification of a software system is presented. The report is part of a larger effort targeted at defining a formal methods pilot project for NASA. One possible application domain that may be used to demonstrate the effective use of formal methods techniques within the NASA environment is presented. It is not intended to provide a tutorial on either formal methods techniques or the application being addressed. It should, however, provide an indication that the application being considered is suitable for formal methods by showing how such a task may be started. The particular system being addressed is the Structured File Services (SFS), which is a part of the Data Storage and Retrieval Subsystem (DSAR), which in turn is part of the Data Management System (DMS) onboard Spacestation Freedom. This is a software system that is currently under development for NASA. An informal mathematical development is presented. Section 3 contains the same development using Penelope (23), an Ada specification and verification system. The complete text of the English version Software Requirements Specification (SRS) is reproduced in Appendix A.

  6. Generalized File Management System or Proto-DBMS?

    ERIC Educational Resources Information Center

    Braniff, Tom

    1979-01-01

    The use of a data base management system (DBMS) as opposed to traditional data processing is discussed. The generalized file concept is viewed as an entry level step to the DBMS. The transition process from one system to the other is detailed. (SF)

  7. Evaluation of canal transportation after preparation with Reciproc single-file systems with or without glide path files.

    PubMed

    Aydin, Ugur; Karataslioglu, Emrah

    2017-01-01

    Canal transportation is a common sequela caused by rotary instruments. The purpose of the present study is to evaluate the degree of transportation after the use of Reciproc single-file instruments with or without glide path files. Thirty resin blocks with L-shaped canals were divided into three groups (n = 10). Group 1: canals were prepared with the Reciproc-25 file. Group 2: glide path file G1 was used before Reciproc. Group 3: glide path files G1 and G2 were used before Reciproc. Pre- and post-instrumentation images were superimposed under a microscope, and the resin removed from the inner and outer surfaces of the root canal was calculated at 10 points. Statistical analysis was performed with the Kruskal-Wallis test and post hoc Dunn test. For the coronal and middle one-thirds, there was no significant difference among groups (P > 0.05). For the apical section, transportation in Group 1 was significantly higher than in the other groups (P < 0.05). Using glide path files before the Reciproc single-file system reduced the degree of apical canal transportation.

  8. Challenges in converting among log scaling methods.

    Treesearch

    Henry Spelter

    2003-01-01

    The traditional method of measuring log volume in North America is the board foot log scale, which uses simple assumptions about how much of a log's volume is recoverable. This underestimates the true recovery potential and leads to difficulties in comparing volumes measured with the traditional board foot system and those measured with the cubic scaling systems...
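
    The mismatch can be seen by putting a board-foot rule next to a cubic formula. The Doyle rule and Smalian's formula below are standard textbook forms (not necessarily the specific scales compared in the article), and the example log dimensions are illustrative.

```python
# Two common log-scaling formulas, standard textbook forms. The Doyle
# rule's fixed 4-inch slab/edging deduction is one reason board-foot
# scales underestimate the recoverable volume of a log.
import math

def doyle_board_feet(small_end_dia_in, length_ft):
    """Doyle rule: ((D - 4) / 4)^2 * L, D in inches, L in feet."""
    return ((small_end_dia_in - 4) / 4) ** 2 * length_ft

def end_area_sqft(dia_in):
    """Cross-sectional area in square feet for a diameter in inches."""
    return math.pi * (dia_in / 24) ** 2

def smalian_cubic_feet(small_end_dia_in, large_end_dia_in, length_ft):
    """Smalian's formula: average of the two end areas times length."""
    return (end_area_sqft(small_end_dia_in)
            + end_area_sqft(large_end_dia_in)) / 2 * length_ft

bf = doyle_board_feet(12, 16)          # 64 board feet
cu = smalian_cubic_feet(12, 14, 16)    # ~14.8 cubic feet of log volume
print(bf, cu)
```

    For this 12-inch, 16-foot log, the Doyle scale's 64 board feet correspond to only about a third of the log's cubic volume, illustrating why conversions between board-foot and cubic systems are not a simple fixed ratio.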

  9. 78 FR 59923 - Buffalo Dunes Wind Project, LLC; Supplemental Notice That Initial Market-Based Rate Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE... . To facilitate electronic service, persons with Internet access who will eFile a document and/or be...Registration link. Select the eFiling link to log on and submit the intervention or protests. Persons unable to...

  10. 78 FR 28833 - Lighthouse Energy Group, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... protest should file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  11. 77 FR 64978 - Sunbury Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-24

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  12. 78 FR 62300 - Burgess Biopower LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-15

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  13. 78 FR 75561 - South Bay Energy Corp.; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  14. 78 FR 72673 - Yellow Jacket Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-03

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... protest should file with the Federal Energy Regulatory Commission, 888 First Street, NE., Washington, DC... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  15. 78 FR 44557 - Guttman Energy Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-24

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  16. 78 FR 49506 - Source Power & Gas LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  17. 77 FR 64980 - Noble Americas Energy Solutions LLC; Supplemental Notice That Initial Market-Based Rate Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-24

    ... intervene or to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE...://www.ferc.gov . To facilitate electronic service, persons with Internet access who will eFile a... using the eRegistration link. Select the eFiling link to log on and submit the intervention or protests...

  18. 78 FR 46939 - DWP Energy Holdings, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  19. 78 FR 28833 - CE Leathers Company; Supplemental Notice That Initial Market-Based Rate Filing Includes Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  20. 78 FR 59014 - Lakeswind Power Partners, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-25

    ... to protest should file with the Federal Energy Regulatory Commission, 888 First Street, NE... . To facilitate electronic service, persons with Internet access who will eFile a document and/or be...Registration link. Select the eFiling link to log on and submit the intervention or protests. Persons unable to...

  1. 78 FR 75560 - Green Current Solutions, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... protest should file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  2. 77 FR 64980 - Collegiate Clean Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-24

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... protest should file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  3. 77 FR 64977 - Frontier Utilities New York LLC; Supplemental Notice That Initial Market-Based Rate Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-24

    ... to protest should file with the Federal Energy Regulatory Commission, 888 First Street NE... . To facilitate electronic service, persons with Internet access who will eFile a document and/or be...Registration link. Select the eFiling link to log on and submit the intervention or protests. Persons unable to...

  4. 78 FR 62299 - West Deptford Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-15

    ... protest should file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC... . To facilitate electronic service, persons with Internet access who will eFile a document and/or be...Registration link. Select the eFiling link to log on and submit the intervention or protests. Persons unable to...

  5. Screw-in forces during instrumentation by various file systems.

    PubMed

    Ha, Jung-Hong; Kwak, Sang Won; Kim, Sung-Kyo; Kim, Hyeon-Cheol

    2016-11-01

The purpose of this study was to compare the maximum screw-in forces generated during the movement of various nickel-titanium (NiTi) file systems. Forty simulated canals in resin blocks were randomly divided into 4 groups for the following instruments: Mtwo size 25/0.07 (MTW, VDW GmbH), Reciproc R25 (RPR, VDW GmbH), ProTaper Universal F2 (PTU, Dentsply Maillefer), and ProTaper Next X2 (PTN, Dentsply Maillefer; n = 10 each). All the artificial canals were prepared to obtain a standardized lumen by using ProTaper Universal F1. Screw-in forces were measured using a custom-made experimental device (AEndoS-k, DMJ system) during instrumentation with each NiTi file system using its designated movement. The rotation speed was set at 350 rpm with an automatic 4 mm pecking motion at a speed of 1 mm/sec. The pecking depth was increased by 1 mm for each pecking motion until the file reached the working length. Forces were recorded during file movement, and the maximum force was extracted from the data. Maximum screw-in forces were analyzed by one-way ANOVA and Tukey's post hoc comparison at a significance level of 95%. Reciproc and ProTaper Universal files generated the highest maximum screw-in forces, while Mtwo and ProTaper Next showed the lowest (p < 0.05). Geometrical differences, rather than shaping motion or alloy, may affect the screw-in force during canal instrumentation. To reduce screw-in forces, the use of NiTi files with a smaller cross-sectional area and higher flexibility is recommended.
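The statistics described above (one-way ANOVA followed by Tukey's post hoc comparison across four instrument groups) can be sketched as follows. This is a hypothetical re-analysis: the force values are invented placeholders, not the study's data.

```python
# Hypothetical sketch of the study's statistics: one-way ANOVA plus Tukey's
# HSD post hoc comparison across four instrument groups. Force values (N)
# are invented placeholders, not measurements from the paper.
from scipy import stats

forces = {
    "MTW": [1.1, 1.3, 1.2, 1.0, 1.2],  # Mtwo
    "RPR": [2.4, 2.6, 2.5, 2.7, 2.3],  # Reciproc
    "PTU": [2.5, 2.4, 2.6, 2.8, 2.5],  # ProTaper Universal
    "PTN": [1.2, 1.1, 1.3, 1.0, 1.1],  # ProTaper Next
}

# One-way ANOVA: does at least one group mean differ?
f_stat, p_value = stats.f_oneway(*forces.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's HSD pairwise comparison at the 95% level (requires scipy >= 1.8).
hsd = stats.tukey_hsd(*forces.values())
print(hsd.pvalue.shape)  # 4 x 4 matrix of pairwise p-values
```

With clearly separated group means like these, the ANOVA rejects at p < 0.05 and the HSD matrix identifies which instrument pairs differ.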

  6. Transparency in Distributed File Systems

    DTIC Science & Technology

    1989-01-01

... addresses the areas of naming, replication, consistency control, file and directory placement, and file and directory migration in a way that provides full network transparency.

  7. Beyond a Terabyte File System

    NASA Technical Reports Server (NTRS)

    Powers, Alan K.

    1994-01-01

The Numerical Aerodynamics Simulation Facility's (NAS) CRAY C916/1024 accesses a "virtual" on-line file system, which is expanding beyond a terabyte of information. This paper will present some options for fine-tuning the Data Migration Facility (DMF) to stretch the online disk capacity and explore the transitions to newer devices (STK 4490, ER90, RAID).

  8. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  9. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  10. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  11. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  12. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  13. Deploying Server-side File System Monitoring at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselton, Andrew

    2009-05-01

    The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleaned from this monitoring support efforts to proactively manage the I/O infrastructure on Franklin. A simple model for I/O transactions is introduced and compared with the 250 million observations sent to the LMT database from August 2008 to February 2009.

  14. Interactive effects of historical logging and fire exclusion on ponderosa pine forest structure in the northern Rockies.

    PubMed

    Naficy, Cameron; Sala, Anna; Keeling, Eric G; Graham, Jon; DeLuca, Thomas H

    2010-10-01

    Increased forest density resulting from decades of fire exclusion is often perceived as the leading cause of historically aberrant, severe, contemporary wildfires and insect outbreaks documented in some fire-prone forests of the western United States. Based on this notion, current U.S. forest policy directs managers to reduce stand density and restore historical conditions in fire-excluded forests to help minimize high-severity disturbances. Historical logging, however, has also caused widespread change in forest vegetation conditions, but its long-term effects on vegetation structure and composition have never been adequately quantified. We document that fire-excluded ponderosa pine forests of the northern Rocky Mountains logged prior to 1960 have much higher average stand density, greater homogeneity of stand structure, more standing dead trees and increased abundance of fire-intolerant trees than paired fire-excluded, unlogged counterparts. Notably, the magnitude of the interactive effect of fire exclusion and historical logging substantially exceeds the effects of fire exclusion alone. These differences suggest that historically logged sites are more prone to severe wildfires and insect outbreaks than unlogged, fire-excluded forests and should be considered a high priority for fuels reduction treatments. Furthermore, we propose that ponderosa pine forests with these distinct management histories likely require distinct restoration approaches. We also highlight potential long-term risks of mechanical stand manipulation in unlogged forests and emphasize the need for a long-term view of fuels management.

  15. Electronic Document Management Using Inverted Files System

    NASA Astrophysics Data System (ADS)

    Suhartono, Derwin; Setiawan, Erwin; Irwanto, Djon

    2014-03-01

The number of documents is increasing rapidly, in both paper and electronic form. This can be seen from a data sample taken from the SpringerLink publisher in 2010, which showed an increase in the number of digital document collections from 2003 to mid-2010. Managing them well therefore becomes an important need. This paper describes a method for managing documents called the inverted files system. For electronic documents, the inverted files system is used so that documents can be searched over the Internet by a search engine. It can improve both the document search mechanism and the document storage mechanism.
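The core idea of an inverted files system can be sketched in a few lines: map each term to the set of document IDs containing it, so a query intersects small posting sets instead of scanning every document. The documents and `search` helper below are illustrative, not from the paper.

```python
# Minimal sketch of an inverted file (inverted index): each term maps to the
# set of document IDs containing it, so queries avoid a full document scan.
from collections import defaultdict

docs = {
    1: "electronic document management",
    2: "document search over the internet",
    3: "paper based document collections",
}

# Build the index: term -> set of doc IDs (the "posting list").
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return IDs of documents containing every query term (AND semantics)."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

print(search("document"))         # documents 1, 2 and 3
print(search("document search"))  # only document 2
```

A production system would add tokenization, stemming, and on-disk posting lists, but the lookup-then-intersect structure is the same.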

  16. A History of the Andrew File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Brashear, Derrick

    2011-02-22

Derrick Brashear and Jeffrey Altman will present a technical history of the evolution of the Andrew File System, starting with the early days of the Andrew Project at Carnegie Mellon through the commercialization by Transarc Corporation and IBM and a decade of OpenAFS. The talk will be technical, with a focus on the various decisions and implementation trade-offs that were made over the course of AFS versions 1 through 4, the development of the Distributed Computing Environment Distributed File System (DCE DFS), and the course of the OpenAFS development community. The speakers will also discuss the various AFS branches developed at the University of Michigan, the Massachusetts Institute of Technology, and Carnegie Mellon University.

  17. Performance of a logging truck with a central tire inflation system.

    Treesearch

    John A. Sturos; Douglas B. Brumm; Andrew Lehto

    1995-01-01

    Describes the performance of an 11-axle logging truck with a central tire inflation system. Results included reduced damages to roads, improved ride of the truck, improved drawbar pull, and reduced rolling resistance. Road construction costs were reduced 62%, primarily due to using 33% less gravel.

  18. Exploring Online Students' Self-Regulated Learning with Self-Reported Surveys and Log Files: A Data Mining Approach

    ERIC Educational Resources Information Center

    Cho, Moon-Heum; Yoo, Jin Soung

    2017-01-01

    Many researchers who are interested in studying students' online self-regulated learning (SRL) have heavily relied on self-reported surveys. Data mining is an alternative technique that can be used to discover students' SRL patterns from large data logs saved on a course management system. The purpose of this study was to identify students' online…

  19. 29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4), (d)(1...

  20. 29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4), (d)(1...

  1. 29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4), (d)(1...

  2. 29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4), (d)(1...

  3. 29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4), (d)(1...

  4. SU-E-T-100: Designing a QA Tool for Enhance Dynamic Wedges Based On Dynalog Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yousuf, A; Hussain, A

    2014-06-01

Purpose: A robust quality assurance (QA) program for computer-controlled enhanced dynamic wedge (EDW) has been designed and tested. Calculations for this QA test are based upon the EDW dynamic log files generated during dose delivery. Methods: The Varian record and verify system generates dynamic log (dynalog) files during dynamic dose delivery. The system-generated dynalog files contain information such as date and time of treatment, energy, monitor units, wedge orientation, and type of treatment. They also contain the expected (calculated) segmented treatment tables (STT) and the actual delivered STT for the treatment delivery as a verification record. These files can be used to assess the integrity and precision of the treatment plan delivery. The plans were delivered with a 6 MV beam from a Varian linear accelerator. For the available EDW angles (10°, 15°, 20°, 25°, 30°, 45°, and 60°), Varian STT values were used to manually calculate monitor units for each segment. They can also be used to calculate the EDW factors. Independent verification of fractional MUs per segment was performed against those generated from dynalog files. The EDW factors used to calculate MUs in the TPS were dosimetrically verified in a solid water phantom with a semiflex chamber on the central axis. Results: EDW factors were generated from the STT provided by Varian and verified against practical measurements. The measurements agreed with the calculated EDW data to within about 1%. Variation between the MUs per segment obtained from dynalog files and those manually calculated was found to be less than 2%. Conclusion: An efficient and easy tool for performing routine QA of EDW is suggested. The method can be easily implemented in any institution without a need for expensive QA equipment. Errors of the order of 2% or greater can be easily detected.
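The per-segment cross-check described above can be sketched as follows. The idea is that a cumulative STT gives the dose fraction delivered at each segment boundary, so the MU per segment is the difference of adjacent cumulative values times the total MU; the STT values and the 2% action level below are illustrative placeholders, not Varian's actual tables.

```python
# Hypothetical sketch of the dynalog cross-check: derive fractional MUs per
# segment from cumulative STT weights and flag deviations beyond a 2% action
# level. STT values are invented placeholders, not Varian's actual tables.

total_mu = 100.0
# Cumulative dose fraction at each segment boundary (expected vs. delivered).
expected_stt  = [0.00, 0.12, 0.30, 0.55, 0.80, 1.00]
delivered_stt = [0.00, 0.121, 0.298, 0.552, 0.801, 1.00]

def segment_mus(stt, mu):
    """MU delivered in each segment: difference of adjacent cumulative weights."""
    return [(b - a) * mu for a, b in zip(stt, stt[1:])]

expected = segment_mus(expected_stt, total_mu)
delivered = segment_mus(delivered_stt, total_mu)

# Flag any segment deviating by more than the 2% action level.
for i, (e, d) in enumerate(zip(expected, delivered)):
    pct = abs(d - e) / e * 100.0
    status = "OK" if pct < 2.0 else "CHECK"
    print(f"segment {i}: expected {e:.1f} MU, delivered {d:.1f} MU ({pct:.2f}%) {status}")
```

Because the STT is cumulative, the per-segment MUs telescope back to the total MU, which is a useful internal consistency check for the tool.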

  5. Well 9-1 Logs and Data: Roosevelt Hot Spring Area, Utah (FORGE)

    DOE Data Explorer

    Joe Moore

    2016-03-03

    This is a compilation of logs and data from Well 9-1 in the Roosevelt Hot Springs area in Utah. This well is also in the Utah FORGE study area. The file is in a compressed .zip format and there is a data inventory table (Excel spreadsheet) in the root folder that is a guide to the data that is accessible in subfolders.

  6. 78 FR 28835 - Del Ranch Company; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  7. 78 FR 28835 - Patua Project LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  8. 78 FR 75561 - Great Bay Energy V, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  9. 77 FR 64981 - Homer City Generation, L.P.; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-24

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  10. 77 FR 69819 - Cirrus Wind 1, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-21

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  11. 77 FR 64979 - Great Bay Energy IV, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-24

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  12. 78 FR 59923 - Mammoth Three LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  13. 78 FR 61945 - Tuscola Wind II, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-07

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  14. 77 FR 69819 - QC Power Strategies Fund LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-21

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  15. 78 FR 75561 - Astral Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... . To facilitate electronic service, persons with Internet access who will eFile a document and/or be...Registration link. Select the eFiling link to log on and submit the intervention or protests. Persons unable to...

  16. Structure and clay mineralogy: borehole images, log interpretation and sample analyses at Site C0002 Nankai Trough accretionary prism

    NASA Astrophysics Data System (ADS)

    Jurado, Maria Jose; Schleicher, Anja

    2015-04-01

    Our research focused on the characterization of fracture and fault structures from the deep Nankai Trough accretionary prism in Japan. Logging Data and cuttings samples from the two most recent International Ocean Discovery Program (IODP) Expeditions 338 and 348 of the NanTroSEIZE project were analyzed by Logging While Drilling (LWD) oriented images, geophysical logs and clay mineralogy. Both expeditions took place at Site C0002, but whereas Hole C0002F (Expedition 338) was drilled down to 2004.5 mbsf, Hole C0002N and C0002P (Expedition 348) reached a depth of 2325.5 mbsf and 3058.8 mbsf respectively. The structural interpretation of borehole imaging data illustrates the deformation within the fractured and faulted sections of the accretionary prism. All drill holes show distinct areas of intense fracturing and faulting within a very clay-dominated lithology. Here, smectite and illite are the most common clay minerals, but the properties and the role they may play in influencing the fractures, faults and folds in the accretionary prism is still not well understood. When comparing clay mineralogy and fracture/fault areas in hole C0002F (Expedition 338), a trend in the abundance of illite and smectite, and in particular the swelling behavior of smectite is recognizable. In general, the log data provided a good correlation with the actual mineralogy and the relative abundance of clay. Ongoing postcruise preliminary research on hole C0002 N and C0002P (Expedition 348) should confirm these results. The relationship between fracture and fault structures and the changes in clay mineralogy could be explained by the deformation of specific areas with different compaction features, fluid-rock interaction processes, but could also be related to beginning diagenetic processes related to depth. 
Our results show the integration of logging data and cutting sample analyses as a valuable tool for characterization of petrophysical and mineralogical changes of the structures of the

  17. Beyond Logging of Fingertip Actions: Analysis of Collaborative Learning Using Multiple Sources of Data

    ERIC Educational Resources Information Center

    Avouris, N.; Fiotakis, G.; Kahrimanis, G.; Margaritis, M.; Komis, V.

    2007-01-01

    In this article, we discuss key requirements for collecting behavioural data concerning technology-supported collaborative learning activities. It is argued that the common practice of analysis of computer generated log files of user interactions with software tools is not enough for building a thorough view of the activity. Instead, more…

  18. Generation and use of the Goddard trajectory determination system SLP ephemeris files

    NASA Technical Reports Server (NTRS)

    Armstrong, M. G.; Tomaszewski, I. B.

    1973-01-01

    Information is presented to acquaint users of the Goddard Trajectory Determination System Solar/Lunar/Planetary ephemeris files with the details connected with the generation and use of these files. In particular, certain sections constitute a user's manual for the ephemeris files.

  19. Maintaining a Distributed File System by Collection and Analysis of Metrics

    NASA Technical Reports Server (NTRS)

    Bromberg, Daniel

    1997-01-01

AFS (originally, the Andrew File System) is a widely deployed distributed file system product used by companies, universities, and laboratories world-wide. However, it is not trivial to operate: running an AFS cell is a formidable task. It requires a team of dedicated and experienced system administrators who must manage a user base numbering in the thousands, rather than the smaller range of 10 to 500 faced by the typical system administrator.

  20. Log-Concavity and Strong Log-Concavity: a review

    PubMed Central

    Saumard, Adrien; Wellner, Jon A.

    2016-01-01

We review and formulate results concerning log-concavity and strong log-concavity in both discrete and continuous settings. We show how preservation of log-concavity and strong log-concavity on ℝ under convolution follows from a fundamental monotonicity result of Efron (1969). We provide a new proof of Efron's theorem using the recent asymmetric Brascamp-Lieb inequality due to Otto and Menz (2013). Along the way we review connections between log-concavity and other areas of mathematics and statistics, including concentration of measure, log-Sobolev inequalities, convex geometry, MCMC algorithms, Laplace approximations, and machine learning. PMID:27134693
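The two notions reviewed above can be stated compactly; the definitions below follow the standard formulation (a sketch, not the paper's exact notation).

```latex
% A density f on \mathbb{R} is log-concave if \log f is concave, i.e.
f(\lambda x + (1-\lambda) y) \;\ge\; f(x)^{\lambda}\, f(y)^{1-\lambda},
\qquad x, y \in \mathbb{R},\ \lambda \in [0,1].

% f is strongly log-concave (with parameter \sigma^2) if it is a log-concave
% function multiplied by a Gaussian density:
f(x) = g(x)\,\phi_{\sigma^2}(x), \qquad g \text{ log-concave},\quad
\phi_{\sigma^2} \text{ the } N(0,\sigma^2) \text{ density}.

% The preservation result: if f and g are log-concave on \mathbb{R},
% so is their convolution
(f * g)(x) = \int_{\mathbb{R}} f(x - y)\, g(y)\, dy,
% and the paper derives this from Efron's monotonicity theorem.
```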

  1. A Next-Generation Parallel File System Environment for the OLCF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillow, David A; Fuller, Douglas; Gunasekaran, Raghul

    2012-01-01

When deployed in 2008/2009, the Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) was the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smaller-scale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider's original design by mid 2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth, from over 240 GB/sec today to 1 TB/sec, and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership scale. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.

  2. Integration between well logging and seismic reflection techniques for structural a

    NASA Astrophysics Data System (ADS)

    Mohamed, Adel K.; Ghazala, Hosni H.; Mohamed, Lamees

    2016-12-01

Abu El Gharadig basin is located in the northern part of the Western Desert, Egypt. Geophysical investigation in the form of thirty (3D) seismic lines and well logging data of five wells have been analyzed in the oil field BED-1, which is located in the northwestern part of Abu El Gharadig basin in the Western Desert of Egypt. The reflection sections have been used to shed more light on the tectonic setting of Late Jurassic-Early Cretaceous rocks, while the well logging data have been analyzed to delineate the petrophysical characteristics of the two main reservoirs, the Bahariya and Kharita Formations. The constructed subsurface geologic cross sections, seismic sections, and the isochronous reflection maps indicate that the area is structurally controlled by tectonic trends affecting the current shape of Abu El Gharadig basin. Different types of faults are well represented in the area, particularly normal ones. The analysis of the average and interval velocities versus depth has shown that they are affected by facies changes and/or fluid content. On the other hand, the derived petrophysical parameters of the Bahariya and Kharita Formations vary from one well to another, affected by gas effects and/or the presence of organic matter, complex lithology, clay content of dispersed habitat, and the pore volume.

  3. Petroleum system modeling of the western Canada sedimentary basin - isopach grid files

    USGS Publications Warehouse

    Higley, Debra K.; Henry, Mitchell E.; Roberts, Laura N.R.

    2005-01-01

This publication contains zmap-format grid files of isopach intervals that represent strata associated with Devonian to Holocene petroleum systems of the Western Canada Sedimentary Basin (WCSB) of Alberta, British Columbia, and Saskatchewan, Canada. Also included is one grid file that represents elevations relative to sea level of the top of the Lower Cretaceous Mannville Group. Vertical and lateral scales are in meters. The age range represented by the stratigraphic intervals comprising the grid files is 373 million years ago (Ma) to present day. File names, age ranges, formation intervals, and primary petroleum system elements are listed in table 1. Metadata associated with this publication includes information on the study area and the zmap-format files. The digital files listed in table 1 were compiled as part of the Petroleum Processes Research Project being conducted by the Central Energy Resources Team of the U.S. Geological Survey, which focuses on modeling petroleum generation, migration, and accumulation through time for petroleum systems of the WCSB. Primary purposes of the WCSB study are to: construct 1-D/2-D/3-D petroleum system models of the WCSB (actual boundaries of the study area are documented within the metadata; excluded are northern Alberta and eastern Saskatchewan, but fringing areas of the United States are included); publish results of the research and the grid files generated for use in the 3-D model of the WCSB; and evaluate the use of petroleum system modeling in assessing undiscovered oil and gas resources for geologic provinces across the world.

  4. NASIS data base management system: IBM 360 TSS implementation. Volume 6: NASIS message file

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.

  5. 78 FR 28834 - Elmore Company; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... assumptions of liability. Any person desiring to intervene or to protest should file with the Federal Energy... access who will eFile a document and/or be listed as a contact for an intervenor must create and validate an eRegistration account using the eRegistration link. Select the eFiling link to log on and submit...

  6. 77 FR 64981 - BITHENERGY, Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-24

    ... assumptions of liability. Any person desiring to intervene or to protest should file with the Federal Energy... Internet access who will eFile a document and/or be listed as a contact for an intervenor must create and validate an eRegistration account using the eRegistration link. Select the eFiling link to log on and...

  7. NASA Uniform Files Index

    NASA Technical Reports Server (NTRS)

    1987-01-01

    This handbook is a guide for the use of all personnel engaged in handling NASA files. It is issued in accordance with the regulations of the National Archives and Records Administration, in the Code of Federal Regulations Title 36, Part 1224, Files Management; and the Federal Information Resources Management Regulation, Subpart 201-45.108, Files Management. It is intended to provide a standardized classification and filing scheme to achieve maximum uniformity and ease in maintaining and using agency records. It is a framework for consistent organization of information in an arrangement that will be useful to current and future researchers. The NASA Uniform Files Index coding structure is composed of the subject classification table used for NASA management directives and the subject groups in the NASA scientific and technical information system. It is designed to correlate files throughout NASA and it is anticipated that it may be useful with automated filing systems. It is expected that in the conversion of current files to this arrangement it will be necessary to add tertiary subjects and make further subdivisions under the existing categories. Established primary and secondary subject categories may not be changed arbitrarily. Proposals for additional subject categories of NASA-wide applicability, and suggestions for improvement in this handbook, should be addressed to the Records Program Manager at the pertinent installation who will forward it to the NASA Records Management Office, Code NTR, for approval. This handbook is issued in loose-leaf form and will be revised by page changes.

  8. Characterization of structures of the Nankai Trough accretionary prism from integrated analyses of LWD log response, resistivity images and clay mineralogy of cuttings: Expedition 338 Site C0002

    NASA Astrophysics Data System (ADS)

    Jurado, Maria Jose; Schleicher, Anja

    2014-05-01

    The objective of our research is a detailed characterization of structures on the basis of LWD oriented images and logs, and clay mineralogy of cuttings from Hole C0002F of the Nankai Trough accretionary prism. Our results show an integrated interpretation of structures derived from borehole images, petrophysical characterization based on LWD logs, and cuttings mineralogy. The geometry of the structure intersected at Hole C0002F has been characterized through the interpretation of oriented borehole resistivity images acquired during IODP Expedition 338. The characterization of structural features, faults, and fracture zones is based on a detailed post-cruise interpretation of bedding and fractures on borehole images and also on the analysis of Logging While Drilling (LWD) log response (gamma radioactivity, resistivity, and sonic logs). The interpretation and complete characterization of structures (fractures, fracture zones, fault zones, folds) was achieved after detailed shorebased reprocessing of the resistivity images, which enhanced bedding and fracture imaging for interpretation of geometry and orientation. To characterize distinctive petrophysical properties, the LWD log response was compared with compositional changes derived from cuttings analyses. Cuttings analyses were used to calibrate and characterize the log response and to verify interpretations in terms of changes in composition and texture at fractures and fault zones defined on borehole images. Cuttings were taken routinely every 5 m during Expedition 338, indicating a clay-dominated lithology of silty claystone with interbeds of weakly consolidated, fine sandstones. The main mineralogical components are clay minerals, quartz, feldspar, and calcite. Selected cuttings were taken from areas of interest as defined on LWD logs and images. The clay mineralogy was investigated on the <2 micron clay-size fraction, with special focus on smectite and illite minerals. Based on X-ray diffraction

  9. powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks

    NASA Astrophysics Data System (ADS)

    Murray, Steven G.

    2018-05-01

    powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
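    The core technique the abstract describes (a field with a prescribed power spectrum) can be sketched in plain NumPy: draw complex white noise in Fourier space, scale it by the square root of P(k), and transform back. This is an illustrative reconstruction of the general method only, not powerbox's actual implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def gaussian_field(n, boxlength, pk, seed=0):
    """Draw a 2-D Gaussian random field whose power spectrum follows pk(k).

    Sketch of the standard Fourier-space technique; powerbox's real code
    handles normalization conventions, arbitrary dimensions, and
    log-normal transforms far more carefully.
    """
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxlength / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kmag = np.hypot(kx, ky)
    amp = np.zeros_like(kmag)
    nonzero = kmag > 0                      # leave the k=0 (mean) mode at zero
    amp[nonzero] = np.sqrt(pk(kmag[nonzero]))   # field amplitude ~ sqrt(P(k))
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    delta = np.fft.ifft2(amp * noise).real
    return delta - delta.mean()             # zero-mean overdensity field

# a 64x64 field with a power-law spectrum P(k) = k^-2
delta = gaussian_field(64, 1.0, lambda k: k ** -2.0)
```

    A log-normal mock, as used for galaxy distributions, can then be obtained by exponentiating a suitably rescaled Gaussian field.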

  10. An Improved B+ Tree for Flash File Systems

    NASA Astrophysics Data System (ADS)

    Havasi, Ferenc

    Nowadays mobile devices such as mobile phones, mp3 players and PDAs are becoming ever more common. Most of them use flash chips as storage. To store data efficiently on flash, it is necessary to adapt ordinary file systems because they are designed for use on hard disks. Most file systems use some kind of search tree to store index information, which is very important from a performance perspective. Here we improved the B+ search tree algorithm so as to make flash devices more efficient. Our implementation of this solution saves 98%-99% of the flash operations, and is now part of the Linux kernel.
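    One generic way an index can avoid the per-insert flash writes the abstract alludes to is to buffer updates in RAM and commit them in batches. The toy sketch below illustrates that general idea only; it is not the paper's B+ tree algorithm, and the class and field names are invented.

```python
class BufferedIndex:
    """Toy write-buffered index: batches key/value inserts in RAM and
    commits each batch to simulated 'flash' as one operation, instead of
    one flash write per insert. Illustrative only."""

    def __init__(self, flush_threshold=64):
        self.buffer = {}            # RAM-resident pending updates
        self.flash_pages = []       # committed batches ("flash")
        self.flash_writes = 0       # counts simulated flash operations
        self.flush_threshold = flush_threshold

    def insert(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flash_pages.append(dict(self.buffer))  # one batched write
            self.flash_writes += 1
            self.buffer.clear()

    def lookup(self, key):
        # newest data wins: check RAM first, then batches newest-to-oldest
        if key in self.buffer:
            return self.buffer[key]
        for page in reversed(self.flash_pages):
            if key in page:
                return page[key]
        return None

idx = BufferedIndex()
for i in range(256):
    idx.insert(i, i * i)
idx.flush()
# 256 inserts cost 4 batched flash writes instead of 256
```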

  11. 78 FR 79690 - California Independent System Operator Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-31

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. EL05-146-008] California Independent System Operator Corporation; Notice of Filing Take notice that on December 20, 2013, the California Independent System Operator Corporation (CAISO) filed a refund report to be made by the CAISO consistent with the Order on Remand (Order)...

  12. Introducing high performance distributed logging service for ACS

    NASA Astrophysics Data System (ADS)

    Avarias, Jorge A.; López, Joao S.; Maureira, Cristián; Sommer, Heiko; Chiozzi, Gianluca

    2010-07-01

    The ALMA Common Software (ACS) is a software framework that provides the infrastructure for the Atacama Large Millimeter Array and other projects. ACS, based on CORBA, offers basic services and common design patterns for distributed software. Every properly built system needs to be able to log status and error information. Logging in a single computer scenario can be as easy as using fprintf statements. However, in a distributed system, it must provide a way to centralize all logging data in a single place without overloading the network or complicating the applications. ACS provides a complete logging service infrastructure in which every log has an associated priority and timestamp, allowing filtering at different levels of the system (application, service and clients). Currently the ACS logging service uses an implementation of the CORBA Telecom Log Service in a customized way, using only a minimal subset of the features provided by the standard. The most relevant feature used by ACS is the ability to treat the logs as event data that gets distributed over the network in a publisher-subscriber paradigm. For this purpose the CORBA Notification Service, which is resource intensive, is used. On the other hand, the Data Distribution Service (DDS) provides an alternative standard for publisher-subscriber communication for real-time systems, offering better performance and featuring decentralized message processing. This document describes how the new high performance logging service of ACS has been modeled and developed using DDS, replacing the Telecom Log Service. Benefits and drawbacks are analyzed. A benchmark is presented comparing the differences between the implementations.
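    The pattern the abstract relies on (log records carrying a priority and timestamp, distributed publisher-subscriber style with per-subscriber filtering) can be sketched in a few lines. This is a minimal in-process illustration of the pattern, not ACS, CORBA, or DDS code; all names and the "lower number = more severe" convention are assumptions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class LogRecord:
    priority: int            # assumption: lower number = more severe
    message: str
    timestamp: float = field(default_factory=time.time)

class LogChannel:
    """Minimal publisher-subscriber log channel with per-subscriber
    priority filtering -- a sketch of the pattern only."""

    def __init__(self):
        self.subscribers = []    # list of (max_priority, callback)

    def subscribe(self, max_priority, callback):
        # subscriber only receives records at or below max_priority
        self.subscribers.append((max_priority, callback))

    def publish(self, record):
        for max_priority, callback in self.subscribers:
            if record.priority <= max_priority:
                callback(record)

received = []
channel = LogChannel()
channel.subscribe(2, received.append)           # severe records only
channel.publish(LogRecord(1, "disk failure"))   # delivered
channel.publish(LogRecord(5, "debug trace"))    # filtered out
```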

  13. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    PubMed

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
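    Recognition with one trained HMM per activity, as described above, typically scores a feature sequence under each model and picks the best. The textbook scaled forward algorithm below illustrates that step; it is a generic sketch, not the paper's implementation, and the toy models and symbol alphabet are invented.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM (pi: initial state probs, A: transitions,
    B[state, symbol]: emissions). Standard textbook routine."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    alpha = alpha / c
    loglik = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        alpha = alpha / c       # rescale to avoid underflow
        loglik += np.log(c)
    return loglik

# two toy activity models over a binary feature-symbol alphabet
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
B_walk = np.array([[0.9, 0.1],      # both states mostly emit symbol 0
                   [0.8, 0.2]])
B_sit = np.array([[0.1, 0.9],       # both states mostly emit symbol 1
                  [0.2, 0.8]])
seq = [0, 0, 1, 0, 0]
models = {"walk": B_walk, "sit": B_sit}
best = max(models, key=lambda m: forward_loglik(seq, pi, A, models[m]))
```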

  14. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    PubMed Central

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-01-01

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital. PMID:24991942

  15. 75 FR 27051 - Privacy Act of 1974: System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-13

    ... address and appears below: DOT/FMCSA 004 SYSTEM NAME: National Consumer Complaint Database (NCCDB.... A system, database, and procedures for filing and logging consumer complaints relating to household... are stored in an automated system operated and maintained at the Volpe National Transportation Systems...

  16. The self-adjusting file (SAF) system: An evidence-based update

    PubMed Central

    Metzger, Zvi

    2014-01-01

    Current rotary file systems are effective tools. Nevertheless, they have two main shortcomings: (1) they are unable to effectively clean and shape oval canals and depend too much on the irrigant to do the cleaning, which is an unrealistic illusion; and (2) they may jeopardize the long-term survival of the tooth via unnecessary, excessive removal of sound dentin and creation of micro-cracks in the remaining root dentin. The new Self-adjusting File (SAF) technology uses a hollow, compressible NiTi file, with no central metal core, through which a continuous flow of irrigant is provided throughout the procedure. The SAF technology allows for effective cleaning of all root canals including oval canals, thus allowing for the effective disinfection and obturation of all canal morphologies. This technology uses a new concept of cleaning and shaping in which a uniform layer of dentin is removed from around the entire perimeter of the root canal, thus avoiding unnecessary excessive removal of sound dentin. Furthermore, the mode of action used by this file system does not apply the machining of all root canals to a circular bore, as do all other rotary file systems, and does not cause micro-cracks in the remaining root dentin. The new SAF technology allows for a new concept in cleaning and shaping root canals: Minimally Invasive 3D Endodontics. PMID:25298639

  17. The self-adjusting file (SAF) system: An evidence-based update.

    PubMed

    Metzger, Zvi

    2014-09-01

    Current rotary file systems are effective tools. Nevertheless, they have two main shortcomings: (1) they are unable to effectively clean and shape oval canals and depend too much on the irrigant to do the cleaning, which is an unrealistic illusion; and (2) they may jeopardize the long-term survival of the tooth via unnecessary, excessive removal of sound dentin and creation of micro-cracks in the remaining root dentin. The new Self-adjusting File (SAF) technology uses a hollow, compressible NiTi file, with no central metal core, through which a continuous flow of irrigant is provided throughout the procedure. The SAF technology allows for effective cleaning of all root canals including oval canals, thus allowing for the effective disinfection and obturation of all canal morphologies. This technology uses a new concept of cleaning and shaping in which a uniform layer of dentin is removed from around the entire perimeter of the root canal, thus avoiding unnecessary excessive removal of sound dentin. Furthermore, the mode of action used by this file system does not apply the machining of all root canals to a circular bore, as do all other rotary file systems, and does not cause micro-cracks in the remaining root dentin. The new SAF technology allows for a new concept in cleaning and shaping root canals: Minimally Invasive 3D Endodontics.

  18. NASA ARCH- A FILE ARCHIVAL SYSTEM FOR THE DEC VAX

    NASA Technical Reports Server (NTRS)

    Scott, P. J.

    1994-01-01

    The function of the NASA ARCH system is to provide a permanent storage area for files that are infrequently accessed. The NASA ARCH routines were designed to provide a simple mechanism by which users can easily store and retrieve files. The user treats NASA ARCH as the interface to a black box where files are stored. There are only five NASA ARCH user commands, even though NASA ARCH employs standard VMS directives and the VAX BACKUP utility. Special care is taken to provide the security needed to ensure file integrity over a period of years. The archived files may exist in any of three storage areas: a temporary buffer, the main buffer, and a magnetic tape library. When the main buffer fills up, it is transferred to permanent magnetic tape storage and deleted from disk. Files may be restored from any of the three storage areas. A single file, multiple files, or entire directories can be stored and retrieved. Archived entities retain the same name, extension, version number, and VMS file protection scheme as they had in the user's account prior to archival. NASA ARCH is capable of handling up to 7 directory levels. Wildcards are supported. User commands include TEMPCOPY, DISKCOPY, DELETE, RESTORE, and DIRECTORY. The DIRECTORY command searches a directory of savesets covering all three archival areas, listing matches according to area, date, filename, or other criteria supplied by the user. The system manager commands include 1) ARCHIVE- to transfer the main buffer to duplicate magnetic tapes, 2) REPORT- to determine when the main buffer is full enough to archive, 3) INCREMENT- to back up the partially filled main buffer, and 4) FULLBACKUP- to back up the entire main buffer. On-line help files are provided for all NASA ARCH commands. NASA ARCH is written in DEC VAX DCL for interactive execution and has been implemented on a DEC VAX computer operating under VMS 4.X. This program was developed in 1985.

  19. VizieR Online Data Catalog: The Gemini Observation Log (CADC, 2001-)

    NASA Astrophysics Data System (ADS)

    Association of Universities For Research in Astronomy

    2018-01-01

    This database contains a log of the Gemini Telescope observations since 2001, managed by the Canadian Astronomical Data Center (CADC). The data are regularly updated (see the date of the last version at the end of this file). The Gemini Observatory consists of twin 8.1-meter diameter optical/infrared telescopes located on two of the best observing sites on the planet. From their locations on mountains in Hawai'i and Chile, Gemini Observatory's telescopes can collectively access the entire sky. Gemini is operated by a partnership of five countries including the United States, Canada, Brazil, Argentina and Chile. Any astronomer in these countries can apply for time on Gemini, which is allocated in proportion to each partner's financial stake. (1 data file).

  20. 78 FR 28834 - Salton Sea Power L.L.C.; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  1. 77 FR 53195 - H.A. Wagner LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-31

    ... link to log on and submit the intervention or protests. Persons unable to file electronically should... file with the Federal Energy Regulatory Commission, 888 First Street NE., Washington, DC 20426, in... service, persons with Internet access who will eFile a document and/or be listed as a contact for an...

  2. Use of On- Board File System: A Real Simplification for the Operators?

    NASA Astrophysics Data System (ADS)

    Olive, X.; Garcia, G.; Alison, B.; Charmeau, M. C.

    2008-08-01

    An on-board file system allows a spacecraft to be controlled and operated in new ways, offering more possibilities. It should provide operators with a more abstract view of spacecraft data, letting them focus on the functional part of their work rather than on the exchange mechanism between ground and board. Files are commonly used in recent space projects, but in a restricted way that limits their capabilities. In this paper we describe what we consider to be a file system and its usage in two examples among those studied: OBCP and patch. We discuss how files can be handled with the PUS standard and, in the last section, give some perspectives, such as the use of files to standardize all exchanges between ground and board and between on-board entities.

  3. Design and development of an automatic data acquisition system for a balance study using a smartcard system.

    PubMed

    Ambrozy, C; Kolar, N A; Rattay, F

    2010-01-01

    To log board angle values during balance training, a measurement system had to be developed. This study provides data for a balance study using a smartcard, and data acquisition is automatic. An individual training plan for each proband is necessary. To store the proband identification, a smartcard with an I2C data-bus protocol and an E2PROM memory is used. To read the smartcard data, a smartcard reader is connected via universal serial bus (USB) to a notebook. The data acquisition and smartcard reading program is written in Microsoft® Visual C#. A training plan file contains the individual training plan for each proband, and the data of the test persons are saved in a proband directory. Each event is automatically saved to a log file for exact documentation. This system makes study development easy and time-saving.
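    The per-proband event log described above amounts to appending timestamped lines to a file in the proband's directory. The study's software is in C#; the sketch below shows the same idea in Python, with an invented directory layout, file name, and line format.

```python
import datetime
import pathlib

def log_event(proband_dir, event):
    """Append a timestamped event line to the proband's log file.

    Illustrative sketch only: the directory layout, 'events.log' file
    name, and line format are assumptions, not the study's format.
    """
    d = pathlib.Path(proband_dir)
    d.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(d / "events.log", "a", encoding="utf-8") as f:
        f.write(f"{stamp} {event}\n")

log_event("proband_001", "training session started")
```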

  4. Mission Operations Center (MOC) - Precipitation Processing System (PPS) Interface Software System (MPISS)

    NASA Technical Reports Server (NTRS)

    Ferrara, Jeffrey; Calk, William; Atwell, William; Tsui, Tina

    2013-01-01

    MPISS is an automatic file transfer system that implements a combination of standard and mission-unique transfer protocols required by the Global Precipitation Measurement Mission (GPM) Precipitation Processing System (PPS) to control the flow of data between the MOC and the PPS. The primary features of MPISS are file transfers (both with and without PPS specific protocols), logging of file transfer and system events to local files and a standard messaging bus, short term storage of data files to facilitate retransmissions, and generation of file transfer accounting reports. The system includes a graphical user interface (GUI) to control the system, allow manual operations, and to display events in real time. The PPS specific protocols are an enhanced version of those that were developed for the Tropical Rainfall Measuring Mission (TRMM). All file transfers between the MOC and the PPS use the SSH File Transfer Protocol (SFTP). For reports and data files generated within the MOC, no additional protocols are used when transferring files to the PPS. For observatory data files, an additional handshaking protocol of data notices and data receipts is used. MPISS generates and sends to the PPS data notices containing data start and stop times along with a checksum for the file for each observatory data file transmitted. MPISS retrieves the PPS generated data receipts that indicate the success or failure of the PPS to ingest the data file and/or notice. MPISS retransmits the appropriate files as indicated in the receipt when required. MPISS also automatically retrieves files from the PPS. The unique feature of this software is the use of both standard and PPS specific protocols in parallel. The advantage of this capability is that it supports users that require the PPS protocol as well as those that do not require it. The system is highly configurable to accommodate the needs of future users.
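    A data notice of the kind described (data start and stop times plus a checksum of the observatory data file) could be built as follows. This is a hedged illustration: the JSON layout, field names, and choice of MD5 are assumptions for the example, not the actual MPISS notice format.

```python
import hashlib
import json

def make_data_notice(path, start_time, stop_time):
    """Build a data notice for a data file: start/stop times plus a
    file checksum computed in chunks. Format is hypothetical."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    return json.dumps({
        "file": path,
        "start": start_time,
        "stop": stop_time,
        "checksum": h.hexdigest(),
    })

# write a stand-in data file and generate its notice
with open("obs_data.bin", "wb") as f:
    f.write(b"telemetry frames")
notice = make_data_notice("obs_data.bin", "2013-01-01T00:00:00Z",
                          "2013-01-01T01:00:00Z")
```

    The receiving side would recompute the checksum on ingest and return a data receipt indicating success or failure, triggering retransmission when needed.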

  5. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

    Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer control parameters were noted as shapes, dimensions, probability range factors, and cost. Structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining benefits of both is proposed for static structures which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  6. 78 FR 49507 - OriGen Energy LLC ; Supplemental Notice That Initial Market-Based Rate Filing Includes Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... securities and assumptions of liability. Any person desiring to intervene or to protest should file with the... with Internet access who will eFile a document and/or be listed as a contact for an intervenor must create and validate an eRegistration account using the eRegistration link. Select the eFiling link to log...

  7. 78 FR 49507 - ORNI 47 LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... of liability. Any person desiring to intervene or to protest should file with the Federal Energy... access who will eFile a document and/or be listed as a contact for an intervenor must create and validate an eRegistration account using the eRegistration link. Select the eFiling link to log on and submit...

  8. 78 FR 28832 - CalEnergy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... assumptions of liability. Any person desiring to intervene or to protest should file with the Federal Energy... access who will eFile a document and/or be listed as a contact for an intervenor must create and validate an eRegistration account using the eRegistration link. Select the eFiling link to log on and submit...

  9. Filing Reprints: A Simple System For The Family Physician

    PubMed Central

    Berner, Mark

    1978-01-01

    This flexible method of filing medical literature without using cards is based on the International Classification of Health Problems in Primary Care.1 Articles, reprints, notes of lectures and rounds, etc. are filed in manilla folders according to a few simple guidelines. This system has proved to be practical and efficient, can be modified for individual needs, and once established requires little time to maintain. PMID:20469289

  10. Well 14-2 Logs and Data: Roosevelt Hot Spring Area, Utah (Utah FORGE)

    DOE Data Explorer

    Joe Moore

    2016-03-03

    This is a compilation of logs and data from Well 14-2 in the Roosevelt Hot Springs area in Utah. This well is also in the Utah FORGE study area. The file is in a compressed .zip format and there is a data inventory table (Excel spreadsheet) in the root folder that is a guide to the data that is accessible in subfolders.

  11. Well 52-21 Logs and Data: Roosevelt Hot Spring Area, Utah (Utah FORGE)

    DOE Data Explorer

    Joe Moore

    2016-03-03

    This is a compilation of logs and data from Well 52-21 in the Roosevelt Hot Springs area in Utah. This well is also in the Utah FORGE study area. The file is in a compressed .zip format and there is a data inventory table (Excel spreadsheet) in the root folder that is a guide to the data that is accessible in subfolders.

  12. Well 82-33 Logs and Data: Roosevelt Hot Spring Area, Utah (Utah FORGE)

    DOE Data Explorer

    Joe Moore

    2016-03-03

    This is a compilation of logs and data from Well 82-33 in the Roosevelt Hot Springs area in Utah. This well is also in the Utah FORGE study area. The file is in a compressed .zip format and there is a data inventory table (Excel spreadsheet) in the root folder that is a guide to the data that is accessible in subfolders.

  13. An analysis technique for testing log grades

    Treesearch

    Carl A. Newport; William G. O'Regan

    1963-01-01

    An analytical technique that may be used in evaluating log-grading systems is described. It also provides a means of comparing two or more grading systems, or a proposed change with the system from which it was developed. The total volume and computed value of lumber from each sample log are the basic data used.

  14. 47 CFR 73.1840 - Retention of logs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... suits upon such claims. (b) Logs may be retained on microfilm, microfiche or other data-storage systems... of logs, stored on data-storage systems, to full-size copies, is required of licensees if requested... converting to a data-storage system pursuant to the requirements of § 73.1800 (c) and (d), (§ 73.1800...

  15. 47 CFR 73.1840 - Retention of logs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... suits upon such claims. (b) Logs may be retained on microfilm, microfiche or other data-storage systems... of logs, stored on data-storage systems, to full-size copies, is required of licensees if requested... converting to a data-storage system pursuant to the requirements of § 73.1800 (c) and (d), (§ 73.1800...

  16. 47 CFR 73.1840 - Retention of logs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... suits upon such claims. (b) Logs may be retained on microfilm, microfiche or other data-storage systems... of logs, stored on data-storage systems, to full-size copies, is required of licensees if requested... converting to a data-storage system pursuant to the requirements of § 73.1800 (c) and (d), (§ 73.1800...

  17. 47 CFR 73.1840 - Retention of logs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... suits upon such claims. (b) Logs may be retained on microfilm, microfiche or other data-storage systems... of logs, stored on data-storage systems, to full-size copies, is required of licensees if requested... converting to a data-storage system pursuant to the requirements of § 73.1800 (c) and (d), (§ 73.1800...

  18. 47 CFR 73.1840 - Retention of logs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... suits upon such claims. (b) Logs may be retained on microfilm, microfiche or other data-storage systems... of logs, stored on data-storage systems, to full-size copies, is required of licensees if requested... converting to a data-storage system pursuant to the requirements of § 73.1800 (c) and (d), (§ 73.1800...

  19. The European Southern Observatory-MIDAS table file system

    NASA Technical Reports Server (NTRS)

    Peron, M.; Grosbol, P.

    1992-01-01

    The new and substantially upgraded version of the Table File System in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of the data to and from the TFS, the selection of objects, the uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access of very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is to provide also an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least-squares methods and user-defined functions are available. Finally, statistical hypothesis tests and multivariate methods can also operate on tables.

  20. MO-F-CAMPUS-I-01: A System for Automatically Calculating Organ and Effective Dose for Fluoroscopically-Guided Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiong, Z; Vijayan, S; Rana, V

    2015-06-15

    Purpose: A system was developed that automatically calculates the organ and effective dose for individual fluoroscopically-guided procedures using a log of the clinical exposure parameters. Methods: We have previously developed a dose tracking system (DTS) to provide a real-time color-coded 3D mapping of skin dose. This software produces a log file of all geometry and exposure parameters for every x-ray pulse during a procedure. The data in the log files are input into PCXMC, a Monte Carlo program that calculates organ and effective dose for projections and exposure parameters set by the user. We developed a MATLAB program to read data from the log files produced by the DTS and to automatically generate the definition files in the format used by PCXMC. The processing is done at the end of a procedure after all exposures are completed. Since there are thousands of exposure pulses with various parameters for fluoroscopy, DA and DSA and at various projections, the data for exposures with similar parameters are grouped prior to entry into PCXMC to reduce the number of Monte Carlo calculations that need to be performed. Results: The software developed automatically transfers data from the DTS log file to PCXMC and runs the program for each grouping of exposure pulses. When the doses from all exposure events are calculated, the doses for each organ and all effective doses are summed to obtain procedure totals. For a complicated interventional procedure, the calculations can be completed on a PC without manual intervention in less than 30 minutes, depending on the level of data grouping. Conclusion: This system allows organ dose to be calculated for individual procedures for every patient without tedious calculations or data entry, so that estimates of stochastic risk can be obtained in addition to the deterministic risk estimate provided by the DTS. Partial support from NIH grant R01EB002873 and Toshiba Medical Systems Corp.
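
    The grouping step described above can be sketched as follows. This is an illustrative reduction only: the field names (`kvp`, `angle`, `mas`) and the binning policy are assumptions, not the actual DTS log format or the authors' grouping criteria.

```python
from collections import defaultdict

def group_exposures(pulses, angle_bin=5.0):
    """Group x-ray pulses with similar parameters so that one Monte Carlo
    run per group replaces one run per pulse. Field names and the binning
    policy are illustrative, not the DTS log format."""
    groups = defaultdict(float)
    for p in pulses:
        # bin by rounded tube voltage and quantized projection angle
        key = (round(p["kvp"]), round(p["angle"] / angle_bin) * angle_bin)
        groups[key] += p["mas"]  # accumulate tube output within each group
    return dict(groups)

pulses = [
    {"kvp": 80.2, "angle": 1.0, "mas": 2.0},
    {"kvp": 79.8, "angle": 2.0, "mas": 1.5},   # merges with the pulse above
    {"kvp": 100.0, "angle": 30.0, "mas": 4.0},
]
groups = group_exposures(pulses)  # two Monte Carlo runs instead of three
```

    Each resulting group would then correspond to a single PCXMC definition file, with per-group doses scaled by the accumulated output and summed at the end.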

  1. Methods and apparatus for multi-resolution replication of files in a parallel computing system using semantic information

    DOEpatents

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-10-20

    Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
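
    The two resolution-reduction strategies named in the abstract (fewer bits per value, a sub-set of data elements) can be sketched in a few lines. The function below is a hypothetical illustration of the idea, not the patented replication scheme.

```python
import struct

def replicate(data, stride=1, reduce_precision=False):
    """Generate a lower-resolution replica of a list of floats:
    keep every `stride`-th element, and optionally round each value
    through IEEE-754 single precision to drop mantissa bits.
    Illustrative policy only, not the patented method."""
    sample = data[::stride]
    if reduce_precision:
        # pack/unpack through 32-bit floats to reduce the number of bits
        sample = [struct.unpack("<f", struct.pack("<f", x))[0] for x in sample]
    return sample

full = [0.1 * i for i in range(8)]
half = replicate(full, stride=2)               # replica with fewer elements
low = replicate(full, reduce_precision=True)   # replica with fewer bits
```

    A visualization tool could then read the cheap replica first and fall back to the full-resolution file only when needed.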

  2. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    PubMed Central

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  3. 75 FR 76426 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-08

    ..., access control lists, file system permissions, intrusion detection and prevention systems and log..., address, mailing address, country, organization, phone, fax, mobile, pager, Defense Switched Network (DSN..., address, mailing address, country, organization, phone, fax, mobile, pager, Defense Switched Network (DSN...

  4. Temperature increases on the external root surface during endodontic treatment using single file systems.

    PubMed

    Özkocak, I; Taşkan, M M; Göktürk, H; Aytac, F; Karaarslan, E Şirin

    2015-01-01

    The aim of this study is to evaluate increases in temperature on the external root surface during endodontic treatment with different rotary systems. Fifty human mandibular incisors with a single root canal were selected. All root canals were instrumented using a size 20 Hedstrom file, and the canals were irrigated with 5% sodium hypochlorite solution. The samples were randomly divided into the following three groups of 15 teeth: Group 1: The OneShape Endodontic File no.: 25; Group 2: The Reciproc Endodontic File no.: 25; Group 3: The WaveOne Endodontic File no.: 25. During the preparation, the temperature changes were measured in the middle third of the roots using a noncontact infrared thermometer. The temperature data were transferred from the thermometer to the computer and were observed graphically. Statistical analysis was performed using the Kruskal-Wallis analysis of variance at a significance level of 0.05. The increases in temperature caused by the OneShape file system were lower than those of the other files (P < 0.05). The WaveOne file showed the highest temperature increases. However, there were no significant differences between the Reciproc and WaveOne files. The single file rotary systems used in this study may be recommended for clinical use.

  5. Effects of selective logging on bat communities in the southeastern Amazon.

    PubMed

    Peters, Sandra L; Malcolm, Jay R; Zimmerman, Barbara L

    2006-10-01

    Although extensive areas of tropical forest are selectively logged each year, the responses of bat communities to this form of disturbance have rarely been examined. Our objectives were to (1) compare bat abundance, species composition, and feeding guild structure between unlogged and low-intensity selectively logged (1-4 logged stems/ha) sampling grids in the southeastern Amazon and (2) examine correlations between logging-induced changes in bat communities and forest structure. We captured bats in understory and canopy mist nets set in five 1-ha study grids in both logged and unlogged forest. We captured 996 individuals, representing 5 families, 32 genera, and 49 species. Abundances of nectarivorous and frugivorous taxa (Glossophaginae, Lonchophyllinae, Stenodermatinae, and Carolliinae) were higher at logged sites, where canopy openness and understory foliage density were greatest. In contrast, insectivorous and omnivorous species (Emballonuridae, Mormoopidae, Phyllostominae, and Vespertilionidae) were more abundant in unlogged sites, where canopy foliage density and variability in the understory stratum were greatest. Multivariate analyses indicated that understory bat species composition differed strongly between logged and unlogged sites but provided little evidence of logging effects for the canopy fauna. Different responses among feeding guilds and taxonomic groups appeared to be related to foraging and echolocation strategies and to changes in canopy cover and understory foliage densities. Our results suggest that even low-intensity logging modifies habitat structure, leading to changes in bat species composition.

  6. NASIS data base management system - IBM 360/370 OS MVT implementation. 6: NASIS message file

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.

  7. Mail LOG: Program operating instructions

    NASA Technical Reports Server (NTRS)

    Harris, D. K.

    1979-01-01

    The operating instructions for the software package, MAIL LOG, developed for the Scout Project Automatic Data System, SPADS, are provided. The program is written in FORTRAN for the PRIME 300 computer system. The MAIL LOG program has the following four modes of operation: (1) INPUT - putting new records into the data base; (2) REVISE - changing or modifying existing records in the data base; (3) SEARCH - finding special records existing in the data base; and (4) ARCHIVE - storing or putting away existing records in the data base. The output includes special printouts of records in the data base and results from the INPUT and SEARCH modes. The MAIL LOG data base consists of three main subfiles: incoming and outgoing mail correspondence; Design Information Releases and Reports; and Drawings and Engineering orders.
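
    The four modes form a simple record-database interface that can be sketched in modern terms. The original is FORTRAN on a PRIME 300; the class and field names below are illustrative, not the MAIL LOG implementation.

```python
class MailLog:
    """Toy sketch of MAIL LOG's four modes of operation."""

    def __init__(self):
        self.records = {}        # active data base
        self.archive_store = {}  # records put away by ARCHIVE
        self.next_id = 1

    def input(self, **fields):       # INPUT: put a new record in the data base
        rid, self.next_id = self.next_id, self.next_id + 1
        self.records[rid] = dict(fields)
        return rid

    def revise(self, rid, **changes):  # REVISE: modify an existing record
        self.records[rid].update(changes)

    def search(self, **criteria):    # SEARCH: find records matching all criteria
        return [rid for rid, rec in self.records.items()
                if all(rec.get(k) == v for k, v in criteria.items())]

    def archive(self, rid):          # ARCHIVE: store the record away
        self.archive_store[rid] = self.records.pop(rid)

log = MailLog()
rid = log.input(kind="incoming", subject="Design Information Release")
log.revise(rid, status="answered")
```

    SEARCH then drives the special printouts, and ARCHIVE keeps the active subfiles small without discarding history.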

  8. Computer analysis of digital well logs

    USGS Publications Warehouse

    Scott, James H.

    1984-01-01

    A comprehensive system of computer programs has been developed by the U.S. Geological Survey for analyzing digital well logs. The programs are operational on a minicomputer in a research well-logging truck, making it possible to analyze and replot the logs while at the field site. The minicomputer also serves as a controller of digitizers, counters, and recorders during acquisition of well logs. The analytical programs are coordinated with the data acquisition programs in a flexible system that allows the operator to make changes quickly and easily in program variables such as calibration coefficients, measurement units, and plotting scales. The programs are designed to analyze the following well-logging measurements: natural gamma-ray, neutron-neutron, dual-detector density with caliper, magnetic susceptibility, single-point resistance, self potential, resistivity (normal and Wenner configurations), induced polarization, temperature, sonic delta-t, and sonic amplitude. The computer programs are designed to make basic corrections for depth displacements, tool response characteristics, hole diameter, and borehole fluid effects (when applicable). Corrected well-log measurements are output to magnetic tape or plotter with measurement units transformed to petrophysical and chemical units of interest, such as grade of uranium mineralization in percent eU3O8, neutron porosity index in percent, and sonic velocity in kilometers per second.
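
    The simplest of the basic corrections mentioned, depth displacement, amounts to shifting a log by the sensor's offset and resampling onto the original depth grid. The sketch below assumes a uniform offset and linear interpolation; it is a simplified stand-in for the USGS programs, not their actual algorithm.

```python
def depth_shift(depths, values, offset):
    """Correct a well log for sensor depth displacement: shift the depth
    column by `offset`, then resample back onto the original grid with
    linear interpolation (a simplified version of the basic correction)."""
    shifted = [d + offset for d in depths]
    out = []
    for d in depths:
        if d <= shifted[0]:
            out.append(values[0])        # clamp above the shifted interval
        elif d >= shifted[-1]:
            out.append(values[-1])       # clamp below the shifted interval
        else:
            # locate the bracketing shifted samples and interpolate
            i = max(j for j in range(len(shifted)) if shifted[j] <= d)
            t = (d - shifted[i]) / (shifted[i + 1] - shifted[i])
            out.append(values[i] + t * (values[i + 1] - values[i]))
    return out

corrected = depth_shift([0.0, 1.0, 2.0, 3.0], [0.0, 10.0, 20.0, 30.0], offset=1.0)
```

    In practice each tool's response would be shifted by its own offset before logs from different sensors are compared at a common depth.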

  9. Analyzing Web Server Logs to Improve a Site's Usage. The Systems Librarian

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2005-01-01

    This column describes ways to streamline and optimize how a Web site works in order to improve both its usability and its visibility. The author explains how to analyze logs and other system data to measure the effectiveness of the Web site design and search engine.
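
    A first step in the kind of log analysis the column describes is tallying successful requests per page from the server's access log. The sketch below assumes logs in the Common Log Format; the sample lines are fabricated for illustration.

```python
import re
from collections import Counter

# Common Log Format: host ident user [timestamp] "request" status bytes
CLF = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+'
)

def page_hits(lines):
    """Count successful (2xx) requests per path from access-log lines."""
    hits = Counter()
    for line in lines:
        m = CLF.match(line)
        if m and m.group("status").startswith("2"):
            hits[m.group("path")] += 1
    return hits

sample = [
    '10.0.0.1 - - [01/Jan/2005:10:00:00 +0000] "GET /index.html HTTP/1.0" 200 512',
    '10.0.0.2 - - [01/Jan/2005:10:00:01 +0000] "GET /index.html HTTP/1.0" 200 512',
    '10.0.0.3 - - [01/Jan/2005:10:00:02 +0000] "GET /missing HTTP/1.0" 404 180',
]
hits = page_hits(sample)
```

    Separating 2xx hits from 404s in the same pass is what lets an analysis flag broken internal links alongside popular pages.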

  10. Electrical resistivity well-logging system with solid-state electronic circuitry

    USGS Publications Warehouse

    Scott, James Henry; Farstad, Arnold J.

    1977-01-01

    An improved 4-channel electrical resistivity well-logging system for use with a passive probe with electrodes arranged in the 'normal' configuration has been designed and fabricated by Westinghouse Electric Corporation to meet technical specifications developed by the U.S. Geological Survey. Salient features of the system include solid-state switching and current regulation in the transmitter circuit to produce a constant-current source square wave, and synchronous solid-state switching and sampling of the potential waveform in the receiver circuit to provide an analog dc voltage proportional to the measured resistivity. Technical specifications and design details are included in this report.

  11. Apical extrusion of debris during the preparation of oval root canals: a comparative study between a full-sequence SAF system and a rotary file system supplemented by XP-endo finisher file.

    PubMed

    Kfir, Anda; Moza-Levi, Rotem; Herteanu, Moran; Weissman, Amir; Wigler, Ronald

    2018-03-01

    The purpose of this study was to assess the amount of apically extruded debris during the preparation of oval canals with either a rotary file system supplemented by the XP-endo Finisher file or a full-sequence self-adjusting file (SAF) system. Sixty mandibular incisors were randomly assigned to two groups: group A: stage 1-glide path preparation with Pre-SAF instruments. Stage 2-cleaning and shaping with SAF. Group B: stage 1-glide path preparation with ProGlider file. Stage 2-cleaning and shaping with ProTaper Next system. Stage 3-Final cleaning with XP-endo Finisher file. The debris extruded during each of the stages was collected, and the debris weights were compared between the groups and between the stages within the groups using t tests with a significance level set at P < 0.05. The complete procedure for group B resulted in significantly more extruded debris compared to group A. There was no significant difference between the stages in group A, while there was a significant difference between stage 2 and stages 1 and 3 in group B, but no significant difference between stages 1 and 3. Both instrumentation protocols resulted in extruded debris. Rotary file followed by XP-endo Finisher file extruded significantly more debris than a full-sequence SAF system. Each stage, in either procedure, had its own contribution to the extrusion of debris. Final preparation with XP-endo Finisher file contributes to the total amount of extruded debris, but the clinical relevance of the relative difference in the amount of apically extruded debris remains unclear.

  12. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity offers better resistance to channel fading caused by spatial correlation.
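
    Wilkinson's method fits a single log-normal to the sum by matching its first two moments. A minimal sketch of the moment-matching step (the general technique, not the paper's full derivation):

```python
import math

def fenton_wilkinson(mu, sigma, corr):
    """Approximate Z = sum_i exp(N(mu_i, sigma_i^2)), with correlation
    matrix `corr` between the underlying Gaussians, by a single log-normal
    exp(N(mu_Z, sigma_Z^2)) via first- and second-moment matching."""
    n = len(mu)
    # first moment: E[Z] = sum of E[X_i]
    u1 = sum(math.exp(mu[i] + sigma[i] ** 2 / 2) for i in range(n))
    # second moment: E[Z^2] = sum over all pairs of E[X_i X_j]
    u2 = sum(
        math.exp(mu[i] + mu[j]
                 + (sigma[i] ** 2 + sigma[j] ** 2
                    + 2 * corr[i][j] * sigma[i] * sigma[j]) / 2)
        for i in range(n) for j in range(n)
    )
    var_z = math.log(u2 / u1 ** 2)       # sigma_Z^2 of the fitted log-normal
    mu_z = math.log(u1) - var_z / 2
    return mu_z, var_z

# sanity check: a single log-normal must map back to itself
mz, vz = fenton_wilkinson([0.5], [0.3], [[1.0]])
```

    With the fitted (mu_Z, sigma_Z^2) in hand, closed-form log-normal expressions can then be used for the error-rate and capacity analysis.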

  13. 78 FR 40473 - eBay Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-05

    ... assumptions of liability. Any person desiring to intervene or to protest should file with the Federal Energy... access who will eFile a document and/or be listed as a contact for an intervenor must create and validate an eRegistration account using the eRegistration link. Select the eFiling link to log on and submit...

  14. Registered File Support for Critical Operations Files at (Space Infrared Telescope Facility) SIRTF

    NASA Technical Reports Server (NTRS)

    Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.

    2001-01-01

    The SIRTF Science Center's (SSC) Science Operations System (SOS) has to contend with nearly one hundred critical operations files via comprehensive file management services. The management is accomplished via the registered file system (otherwise known as TFS) which manages these files in a registered file repository composed of a virtual file system accessible via a TFS server and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and meta-data in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command line client implementing this API has been developed as a client tool. This paper describes the architecture, current implementation, but more importantly, the evolution of these services based on evolving community use cases and emerging information system technology.
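
    The core idea, every file transaction registered with its metadata in a database so transfers are controlled and verifiable, can be sketched with an in-memory SQLite table. The schema and names below are assumptions for illustration, not the SSC's registration database.

```python
import hashlib
import sqlite3

def register(db, name, payload):
    """Record a file transaction in the registration database: every store
    is logged with size and checksum so later reads can be verified
    (a sketch of the idea, not the TFS server's actual schema)."""
    digest = hashlib.sha256(payload).hexdigest()
    db.execute("INSERT INTO registry (name, size, sha256) VALUES (?, ?, ?)",
               (name, len(payload), digest))
    return digest

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE registry (name TEXT, size INTEGER, sha256 TEXT)")

digest = register(db, "ops/sequence_001.txt", b"observe target A")
row = db.execute("SELECT size, sha256 FROM registry WHERE name = ?",
                 ("ops/sequence_001.txt",)).fetchone()
```

    A client API would wrap calls like `register` behind controlled file transfer, so the repository and the registration database never disagree.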

  15. Self-optimizing Monte Carlo method for nuclear well logging simulation

    NASA Astrophysics Data System (ADS)

    Liu, Lianyan

    1997-09-01

    In order to increase the efficiency of Monte Carlo simulation for nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated in the regular Monte Carlo calculation as a by-product, and the importance map is later used to conduct the splitting and Russian roulette for particle population control. By adopting a spatial mesh system, which is independent of the physical geometrical configuration, the method allows superior user-friendliness. This new method is incorporated into the general-purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test the performance of this new method. The calculations are sped up over analog simulation by 120 and 2600 times, for the neutron porosity tool and for the gamma-ray lithology density log, respectively. The new method enjoys better performance by a factor of 4-6 compared with MCNP's cell-based weight window, as per the converged figures-of-merit. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes quite some time to generate a reasonable importance map from an analog run, a good initial map can create significant CPU time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for neutron and gamma-ray cases. The learning ability towards a correct importance map is also demonstrated. Although false learning may happen, physical judgment can help diagnose it with contributon maps. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Due to the fact that a very
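
    The population-control step driven by an importance map, split in important regions, play Russian roulette in unimportant ones, can be sketched generically. This illustrates the standard technique the abstract names, not the paper's specific mesh-based implementation; the region keys are hypothetical.

```python
import random

def adjust_population(particles, importance, rng=random):
    """Splitting / Russian roulette keyed on an importance map.
    `particles` is a list of (weight, region); `importance[region]` is the
    target multiplication factor. Total statistical weight stays unbiased:
    splits divide the weight, roulette survivors are weighted up."""
    out = []
    for weight, region in particles:
        ratio = importance[region]
        if ratio >= 1.0:
            n = int(ratio)                       # split into n lighter copies
            out.extend([(weight / n, region)] * n)
        elif rng.random() < ratio:
            out.append((weight / ratio, region))  # roulette survivor
        # else: particle killed by Russian roulette
    return out

# a particle entering a 4x-important region is split four ways
split = adjust_population([(1.0, "near_detector")], {"near_detector": 4.0})
```

    The unbiasedness is the key invariant: whether a particle is split or rouletted, the expected total weight in each region is unchanged, so only the variance of the tally, not its mean, is affected.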

  16. 12 CFR 27.4 - Inquiry/Application Log.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 1 2011-01-01 2011-01-01 false Inquiry/Application Log. 27.4 Section 27.4... SYSTEM § 27.4 Inquiry/Application Log. (a) The Comptroller, among other things, may require a bank to maintain a Fair Housing Inquiry/Application Log (“Log”), based upon, but not limited to, one or more of the...

  17. 12 CFR 27.4 - Inquiry/Application Log.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Inquiry/Application Log. 27.4 Section 27.4... SYSTEM § 27.4 Inquiry/Application Log. (a) The Comptroller, among other things, may require a bank to maintain a Fair Housing Inquiry/Application Log (“Log”), based upon, but not limited to, one or more of the...

  18. PDB file parser and structure class implemented in Python.

    PubMed

    Hamelryck, Thomas; Manderick, Bernard

    2003-11-22

    The biopython project provides a set of bioinformatics tools implemented in Python. Recently, biopython was extended with a set of modules that deal with macromolecular structure. Biopython now contains a parser for PDB files that makes the atomic information available in an easy-to-use but powerful data structure. The parser and data structure deal with features that are often left out or handled inadequately by other packages, e.g. atom and residue disorder (if point mutants are present in the crystal), anisotropic B factors, multiple models and insertion codes. In addition, the parser performs some sanity checking to detect obvious errors. The Biopython distribution (including source code and documentation) is freely available (under the Biopython license) from http://www.biopython.org
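
    The kind of atomic information such a parser extracts comes from the PDB format's fixed columns. The sketch below hand-parses one ATOM record, including the alternate-location and insertion-code columns the abstract highlights; it is a minimal illustration, not Bio.PDB's implementation.

```python
def parse_atom(line):
    """Minimal fixed-column parse of a PDB ATOM record (columns per the
    PDB format: serial 7-11, name 13-16, altLoc 17, resName 18-20,
    chain 22, resSeq 23-26, iCode 27, coords 31-54, B factor 61-66)."""
    return {
        "serial": int(line[6:11]),
        "name": line[12:16].strip(),
        "altloc": line[16].strip(),      # alternate location indicator
        "resname": line[17:20].strip(),
        "chain": line[21],
        "resseq": int(line[22:26]),
        "icode": line[26].strip(),       # insertion code
        "xyz": (float(line[30:38]), float(line[38:46]), float(line[46:54])),
        "bfactor": float(line[60:66]),
    }

atom = parse_atom(
    "ATOM      1  N   MET A   1      27.340  24.430   2.614  1.00  9.67           N"
)
```

    A full parser such as Bio.PDB's additionally groups atoms into residues, chains, and models, and resolves disordered atoms (multiple altLoc entries) instead of returning flat dictionaries.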

  19. Biological legacies buffer local species extinction after logging

    PubMed Central

    Rudolphi, Jörgen; Jönsson, Mari T; Gustafsson, Lena

    2014-01-01

    Clearcutting has been identified as a main threat to forest biodiversity. In the last few decades, alternatives to clearcutting have gained much interest. Living and dead trees are often retained after harvest to serve as structural legacies to mitigate negative effects of forestry. However, this practice is widely employed without information from systematic before–after control-impact studies to assess the processes involved in species responses after clearcutting with retention. We performed a large-scale survey of the occurrence of logging-sensitive and red-listed bryophytes and lichens before and after clearcutting with the retention approach. A methodology was adopted that, for the first time in studies on retention approaches, enabled monitoring of location-specific substrates. We used uncut stands as controls to assess the variables affecting the survival of species after a major disturbance. In total, 12 bryophyte species and 27 lichen species were analysed. All were classified as sensitive to logging, and most species are also currently red-listed. We found that living and dead trees retained after final harvest acted as refugia in which logging-sensitive species were able to survive for 3 to 7 years after logging. Depending on type of retention and organism group, between 35% and 92% of the species occurrences persisted on retained structures. Most species observed outside retention trees or patches disappeared. Larger pre-harvest population sizes of bryophytes on dead wood increased the survival probability of the species and hence buffered the negative effects of logging. Synthesis and applications. Careful spatial planning of retention structures is required to fully embrace the habitats of logging-sensitive species. Bryophytes and lichens persisted to a higher degree in retention patches compared to solitary trees or in the clearcut area. Retaining groups of trees in logged areas will help to sustain populations of species over the clearcut phase

  20. Biological legacies buffer local species extinction after logging.

    PubMed

    Rudolphi, Jörgen; Jönsson, Mari T; Gustafsson, Lena; Bugmann, H

    2014-02-01

    Clearcutting has been identified as a main threat to forest biodiversity. In the last few decades, alternatives to clearcutting have gained much interest. Living and dead trees are often retained after harvest to serve as structural legacies to mitigate negative effects of forestry. However, this practice is widely employed without information from systematic before-after control-impact studies to assess the processes involved in species responses after clearcutting with retention. We performed a large-scale survey of the occurrence of logging-sensitive and red-listed bryophytes and lichens before and after clearcutting with the retention approach. A methodology was adopted that, for the first time in studies on retention approaches, enabled monitoring of location-specific substrates. We used uncut stands as controls to assess the variables affecting the survival of species after a major disturbance. In total, 12 bryophyte species and 27 lichen species were analysed. All were classified as sensitive to logging, and most species are also currently red-listed. We found that living and dead trees retained after final harvest acted as refugia in which logging-sensitive species were able to survive for 3 to 7 years after logging. Depending on type of retention and organism group, between 35% and 92% of the species occurrences persisted on retained structures. Most species observed outside retention trees or patches disappeared. Larger pre-harvest population sizes of bryophytes on dead wood increased the survival probability of the species and hence buffered the negative effects of logging. Synthesis and applications . Careful spatial planning of retention structures is required to fully embrace the habitats of logging-sensitive species. Bryophytes and lichens persisted to a higher degree in retention patches compared to solitary trees or in the clearcut area. Retaining groups of trees in logged areas will help to sustain populations of species over the clearcut phase

  1. Nickel-Titanium Single-file System in Endodontics.

    PubMed

    Dagna, Alberto

    2015-10-01

    This work describes clinical cases treated with an innovative single-use, single-file nickel-titanium (NiTi) system used in continuous rotation. Nickel-titanium files are commonly used for root canal treatment, but they tend to break because of bending and torsional stresses. Today, new instruments intended for a single treatment have been introduced. They help the clinician make root canal shaping easier and safer because they do not require sterilization and are discarded after use. A new sterile instrument is used for each treatment in order to reduce the possibility of fracture inside the canal. The new One Shape NiTi single-file instrument belongs to this group. One Shape is used for complete shaping of the root canal after adequate preflaring. Its protocol is simple, and some clinical cases are presented. It is helpful for easy cases and reliable for difficult canals. After 2 years of clinical practice, One Shape seems to be helpful for the treatment of most root canals, with low risk of separation. After each treatment, the instrument is discarded rather than sterilized in an autoclave and re-used. This single-use file simplifies endodontic therapy, because only one instrument is required for canal shaping in many cases. Respecting the clinical protocol guarantees predictably good results.

  2. New directions in the CernVM file system

    NASA Astrophysics Data System (ADS)

    Blomer, Jakob; Buncic, Predrag; Ganis, Gerardo; Hardi, Nikola; Meusel, Rene; Popescu, Radu

    2017-10-01

    The CernVM File System today is commonly used to host and distribute application software stacks. In addition to this core task, recent developments expand the scope of the file system into two new areas. Firstly, CernVM-FS emerges as a good match for container engines to distribute container image contents. Compared to native container image distribution (e.g. through the “Docker registry”), CernVM-FS massively reduces the network traffic for image distribution. This has been shown, for instance, by a prototype integration of CernVM-FS into Mesos developed by Mesosphere, Inc. We present a path for a smooth integration of CernVM-FS and Docker. Secondly, CernVM-FS recently raised new interest as an option for the distribution of experiment conditions data. Here, the focus is on improved versioning capabilities of CernVM-FS that allow linking the conditions data of a run period to the state of a CernVM-FS repository. Lastly, CernVM-FS has been extended to provide a name space for physics data for the LIGO and CMS collaborations. Searching through a data namespace is often done by a central, experiment-specific database service. A name space on CernVM-FS can particularly benefit from an existing, scalable infrastructure and from the POSIX file system interface.

  3. Design of housing file box of fire academy based on RFID

    NASA Astrophysics Data System (ADS)

    Li, Huaiyi

    2018-04-01

    This paper presents a design scheme of intelligent file box based on RFID. The advantages of RFID file box and traditional file box are compared and analyzed, and the feasibility of RFID file box design is analyzed based on the actual situation of our university. After introducing the shape and structure design of the intelligent file box, the paper discusses the working process of the file box, and explains in detail the internal communication principle of the RFID file box and the realization of the control system. The application of the RFID based file box will greatly improve the efficiency of our school's archives management.

  4. Prefetching in file systems for MIMD multiprocessors

    NASA Technical Reports Server (NTRS)

    Kotz, David F.; Ellis, Carla Schlatter

    1990-01-01

    The question of whether prefetching blocks of a file into the block cache can effectively reduce the overall execution time of a parallel computation, even under favorable assumptions, is considered. Experiments have been conducted with an interleaved file system testbed on the Butterfly Plus multiprocessor. Results of these experiments suggest that (1) the hit ratio, the accepted measure in traditional caching studies, may not be an adequate measure of performance when the workload consists of parallel computations and parallel file access patterns, (2) caching with prefetching can significantly improve the hit ratio and the average time to perform an I/O (input/output) operation, and (3) an improvement in overall execution time has been observed in most cases. In spite of these gains, prefetching sometimes results in increased execution times (a negative result, given the optimistic nature of the study). The authors explore why it is not trivial to translate savings on individual I/O requests into consistently better overall performance and identify the key problems that need to be addressed in order to improve the potential of prefetching techniques in this environment.
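
    The hit-ratio effect described in point (2) is easy to demonstrate with a toy model: a block cache that optionally reads one block ahead. This is a generic sketch of sequential prefetching, not the testbed's actual policy.

```python
class PrefetchCache:
    """Block cache with optional one-block-ahead prefetch; counts hits so
    the effect of prefetching on the hit ratio can be measured."""

    def __init__(self, read_block, prefetch=True):
        self.read_block = read_block   # backing-store fetch function
        self.prefetch = prefetch
        self.cache, self.hits, self.requests = {}, 0, 0

    def get(self, n):
        self.requests += 1
        if n in self.cache:
            self.hits += 1
        else:
            self.cache[n] = self.read_block(n)          # demand fetch
        if self.prefetch and n + 1 not in self.cache:
            self.cache[n + 1] = self.read_block(n + 1)  # read ahead
        return self.cache[n]

    def hit_ratio(self):
        return self.hits / self.requests

cache = PrefetchCache(read_block=lambda n: b"x" * 512)
for n in range(100):   # sequential scan, the favorable case for prefetching
    cache.get(n)
```

    The model also hints at the paper's caveat: the hit ratio counts the prefetched block as free, but in a real system the read-ahead I/O still consumes disk and bus time, which is why a better hit ratio does not automatically translate into better overall execution time.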

  5. Identification of coal seam strata from geophysical logs of borehole using Adaptive Neuro-Fuzzy Inference System

    NASA Astrophysics Data System (ADS)

    Yegireddi, Satyanarayana; Uday Bhaskar, G.

    2009-01-01

    Different parameters obtained through well-logging geophysical sensors such as SP, resistivity, gamma-gamma, neutron, natural gamma and acoustic help in the identification of strata and the estimation of the physical, electrical and acoustical properties of the subsurface lithology. Strong and conspicuous changes in some of the log parameters associated with any particular stratigraphic formation are a function of its composition and physical properties and help in classification. However, some substrata show moderate values in the respective log parameters, making it difficult to identify or assess the type of strata if we go by the standard variability ranges of the log parameters and visual inspection. The complexity increases further with the number of sensors involved. An attempt is made to identify the type of stratigraphy from borehole geophysical log data using a combined approach of neural networks and fuzzy logic, known as the Adaptive Neuro-Fuzzy Inference System. A model is built based on a few data sets (geophysical logs) of known stratigraphy in the coal areas of Kothagudem, Godavari basin, and the network model is then used to infer the lithology of boreholes, not used in the simulation, from their geophysical logs. The results are very encouraging, and the model is able to decipher even thin coal seams and other strata from borehole geophysical logs. The model can be further modified to assess the physical properties of the strata, if the corresponding ground truth is made available for simulation.

  6. Shaping ability of 4 different single-file systems in simulated S-shaped canals.

    PubMed

    Saleh, Abdulrahman Mohammed; Vakili Gilani, Pouyan; Tavanafar, Saeid; Schäfer, Edgar

    2015-04-01

    The aim of this study was to compare the shaping ability of 4 different single-file systems in simulated S-shaped canals. Sixty-four S-shaped canals in resin blocks were prepared to an apical size of 25 using Reciproc (VDW, Munich, Germany), WaveOne (Dentsply Maillefer, Ballaigues, Switzerland), OneShape (Micro Méga, Besançon, France), and F360 (Komet Brasseler, Lemgo, Germany) (n = 16 canals/group) systems. Composite images were made from the superimposition of pre- and postinstrumentation images. The amount of resin removed by each system was measured by using a digital template and image analysis software. Canal aberrations and the preparation time were also recorded. The data were statistically analyzed by using analysis of variance, Tukey, and chi-square tests. Canals prepared with the F360 and OneShape systems were better centered compared with the Reciproc and WaveOne systems. Reciproc and WaveOne files removed significantly greater amounts of resin from the inner side of both curvatures (P < .05). Instrumentation with OneShape and Reciproc files was significantly faster compared with WaveOne and F360 files (P < .05). No instrument fractured during canal preparation. Under the conditions of this study, all single-file instruments were safe to use and were able to prepare the canals efficiently. However, single-file systems that are less tapered seem to be more favorable when preparing S-shaped canals. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  7. Digital Stratigraphy: Contextual Analysis of File System Traces in Forensic Science.

    PubMed

    Casey, Eoghan

    2017-12-28

    This work introduces novel methods for conducting forensic analysis of file allocation traces, collectively called digital stratigraphy. These in-depth forensic analysis methods can provide insight into the origin, composition, distribution, and time frame of strata within storage media. Using case examples and empirical studies, this paper illuminates the successes, challenges, and limitations of digital stratigraphy. This study also shows how understanding file allocation methods can provide insight into concealment activities and how real-world computer usage can complicate digital stratigraphy. Furthermore, this work explains how forensic analysts have misinterpreted traces of normal file system behavior as indications of concealment activities. This work raises awareness of the value of taking the overall context into account when analyzing file system traces. This work calls for further research in this area and for forensic tools to provide necessary information for such contextual analysis, such as highlighting mass deletion, mass copying, and potential backdating. © 2017 American Academy of Forensic Sciences.
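    One of the contextual cues called for above, highlighting mass deletion, can be sketched as a sliding-window burst detector over deletion timestamps. The window length, threshold, and timestamps below are arbitrary illustrative choices, not values from the paper.

```python
from datetime import datetime, timedelta

def max_burst(times, window):
    """Largest number of events falling inside any sliding time window."""
    times = sorted(times)
    best, start = 0, 0
    for end in range(len(times)):
        while times[end] - times[start] > window:
            start += 1
        best = max(best, end - start + 1)
    return best

base = datetime(2017, 1, 1, 12, 0, 0)
# 100 files deleted within ~30 s (a suspicious burst), plus routine,
# widely spaced deletions over the following hours
burst = [base + timedelta(seconds=0.3 * i) for i in range(100)]
routine = [base + timedelta(hours=h) for h in range(1, 10)]
peak = max_burst(burst + routine, window=timedelta(minutes=1))
print("possible mass deletion" if peak >= 50 else "normal activity")
```

    As the paper stresses, such a flag is only a prompt for contextual analysis; bursts of deletions also arise from normal activity such as software updates.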

  8. SHOEBOX: A Personal File Handling System for Textual Data. Information System Language Studies, Number 23.

    ERIC Educational Resources Information Center

    Glantz, Richard S.

    Until recently, the emphasis in information storage and retrieval systems has been on batch processing of large files. In contrast, SHOEBOX is designed for the unformatted, personal file collection of the computer-naive individual. Operating through display terminals in a time-sharing, interactive environment on the IBM 360, the user can…

  9. Hardwood log grades and lumber grade yields for factory lumber logs

    Treesearch

    Leland F. Hanks; Glenn L. Gammon; Robert L. Brisbin; Everette D. Rast

    1980-01-01

    The USDA Forest Service Standard Grades for Hardwood Factory Lumber Logs are described, and lumber grade yields for 16 species and 2 species groups are presented by log grade and log diameter. The grades enable foresters, log buyers, and log sellers to select and grade those logs suitable for conversion into standard factory grade lumber. By using the appropriate lumber...

  10. SU-G-JeP1-08: Dual Modality Verification for Respiratory Gating Using New Real- Time Tumor Tracking Radiotherapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiinoki, T; Hanazawa, H; Shibuya, K

    Purpose: A respiratory gating system combining TrueBeam and a new real-time tumor-tracking radiotherapy system (RTRT) was installed. The RTRT system consists of two x-ray tubes and color image intensifiers. Using fluoroscopic images, a fiducial marker implanted near the tumor was tracked and used as the internal surrogate for respiratory gating. The purpose of this study was to develop a verification technique for respiratory gating with the new RTRT using cine electronic portal imaging device (EPID) images from TrueBeam and log files of the RTRT. Methods: A patient who underwent respiratory-gated SBRT of the lung using the RTRT was enrolled in this study. For this patient, log files of the three-dimensional coordinates of the fiducial marker used as an internal surrogate were acquired using the RTRT. Simultaneously, cine EPID images were acquired during respiratory-gated radiotherapy. Data acquisition was performed for one field at five sessions during the course of SBRT. The residual motion errors were calculated using the log files (E_log). The fiducial marker in the cine EPID images was automatically extracted by in-house software based on a template-matching algorithm. The differences between the marker positions in the cine EPID images and the digitally reconstructed radiograph were calculated (E_EPID). Results: Marker detection on the EPID using the in-house software was influenced by low image contrast. For one field during the course of SBRT, with respiratory gating using the RTRT, the mean ± SD of the 95th percentile E_EPID was 1.3 ± 0.3 mm and 1.1 ± 0.5 mm, and that of E_log was 1.5 ± 0.2 mm and 1.1 ± 0.2 mm, in the LR and SI directions, respectively. Conclusion: We have developed a verification method for respiratory gating combining TrueBeam and the new real-time tumor-tracking radiotherapy system using EPID images and log files.
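    The per-session error summary reported above (mean ± SD of 95th-percentile residuals) can be sketched as follows, using synthetic residual data in place of the actual tracking log files; the distribution parameters and session sizes are invented.

```python
import random
import statistics

def percentile(values, p):
    """Percentile with linear interpolation between order statistics."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

random.seed(1)
# Synthetic per-session residual marker motions (mm), five sessions
sessions = [[abs(random.gauss(0, 0.6)) for _ in range(500)] for _ in range(5)]
p95 = [percentile(s, 95) for s in sessions]
print(f"95th-percentile residual: {statistics.mean(p95):.2f} "
      f"+/- {statistics.stdev(p95):.2f} mm")
```

    Each session's residual trace is reduced to one 95th-percentile value, and the mean and SD are taken across sessions, mirroring how the study summarizes E_log and E_EPID.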

  11. 29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2013-07-01 2013-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued) EQUAL...

  12. 29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2011-07-01 2011-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued) EQUAL...

  13. 29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2012-07-01 2012-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued) EQUAL...

  14. 29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2014-07-01 2014-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued) EQUAL...

  15. 29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2010-07-01 2010-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued) EQUAL...

  16. An information retrieval system for research file data

    Treesearch

    Joan E. Lengel; John W. Koning

    1978-01-01

    Research file data have been successfully retrieved at the Forest Products Laboratory through a high-speed cross-referencing system involving the computer program FAMULUS as modified by the Madison Academic Computing Center at the University of Wisconsin. The method of data input, transfer to computer storage, system utilization, and effectiveness are discussed....

  17. Assessment of apically extruded debris produced by the self-adjusting file system.

    PubMed

    De-Deus, Gustavo André; Nogueira Leal Silva, Emmanuel João; Moreira, Edson Jorge; de Almeida Neves, Aline; Belladonna, Felipe Gonçalves; Tameirão, Michele

    2014-04-01

    This study was designed to quantitatively evaluate the amount of debris extruded apically by the Self-Adjusting File system (SAF; ReDent-Nova, Ra'anana, Israel). Hand and rotary instruments were used as references for comparison. Sixty mesial roots of mandibular molars were randomly assigned to 3 groups (n = 20). The root canals were instrumented with hand files using a crown-down technique. The ProTaper (Dentsply Maillefer, Ballaigues, Switzerland) and SAF systems were used according to the manufacturers' instructions. Sodium hypochlorite was used as an irrigant, and the apically extruded debris was collected in preweighed glass vials and dried afterward. The mean weight of debris was assessed with a microbalance and statistically analyzed using 1-way analysis of variance and the post hoc Tukey multiple comparison test. Hand file instrumentation produced significantly more debris compared with the ProTaper and SAF systems (P < .05). The ProTaper system produced significantly more debris compared with the SAF system (P < .05). Under the conditions of this study, all systems caused apical debris extrusion. SAF instrumentation was associated with less debris extrusion compared with the use of hand and rotary files. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
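    The one-way ANOVA used above reduces to a single F statistic, the ratio of between-group to within-group mean squares. A minimal sketch with hypothetical debris weights (not the study's data):

```python
def one_way_anova_F(groups):
    """F = MS_between / MS_within for k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical extruded-debris weights (mg) -- illustrative values only
hand     = [1.2, 1.3, 1.1, 1.4]
protaper = [0.8, 0.9, 0.7, 0.8]
saf      = [0.4, 0.5, 0.4, 0.3]
F = one_way_anova_F([hand, protaper, saf])
print(f"F(2, 9) = {F:.1f}")  # 72.3, far above the 5% critical value of 4.26
```

    A significant omnibus F is then followed by a post hoc test such as Tukey's to locate which pairs of groups differ.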

  18. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Feyock, Stefan; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    The purpose of this research effort is to investigate the benefits that might be derived from applying artificial intelligence tools in the area of conceptual design. Therefore, the emphasis is on the artificial intelligence aspects of conceptual design rather than structural and optimization aspects. A prototype knowledge-based system, called STRUTEX, was developed to initially configure a structure to support point loads in two dimensions. This system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user by integrating a knowledge base interface and inference engine, a data base interface, and graphics while keeping the knowledge base and data base files separate. The system writes a file which can be input into a structural synthesis system, which combines structural analysis and optimization.

  19. 75 FR 52527 - New York Independent System Operator, Inc. Notice of Filings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-26

    ... Compliance Filing, New York Independent System Operator, Inc., 132 FERC 61,031 (July 15, 2010). Any person.... Kimberly D. Bose, Secretary. [FR Doc. 2010-21167 Filed 8-25-10; 8:45 am] BILLING CODE 6717-01-P ...

  20. The impact of logging on biodiversity and carbon sequestration in tropical forests

    NASA Astrophysics Data System (ADS)

    Cazzolla Gatti, R.

    2012-04-01

    Tropical deforestation is one of the most relevant environmental issues at the planetary scale. Forest clearcutting has dramatic effects on local biodiversity, the terrestrial carbon sink, and the atmospheric GHG balance. In terms of protection of tropical forests, selective logging is instead often regarded as a minor or even positive management practice for the ecosystem, and it is supported by international certifications. However, few studies are available on changes in the structure, biodiversity, and ecosystem services due to the selective logging of African forests. This paper presents the results of a survey of tropical forests of West and Central Africa, with a comparison of the long-term dynamics, structure, biodiversity, and ecosystem services (such as carbon sequestration) of different types of forests, from virgin primary to selectively logged and secondary forest. Our study suggests that there is a persistent effect of selective logging on biodiversity and carbon stock losses in the long term (up to 30 years since logging) and after repeated logging. These effects, in terms of species richness and biomass, are greater than the expected losses from commercial harvesting, implying that selective logging in West and Central Africa is impairing long-term (at least 30-year) ecosystem structure and services. A longer selective logging cycle (>30 years) should be considered by logging companies, although there is not yet enough information to consider this practice sustainable.

  1. The ALFA (Activity Log Files Aggregation) toolkit: a method for precise observation of the consultation.

    PubMed

    de Lusignan, Simon; Kumarapeli, Pushpa; Chan, Tom; Pflug, Bernhard; van Vlymen, Jeremy; Jones, Beryl; Freeman, George K

    2008-09-08

    There is a lack of tools to evaluate and compare Electronic patient record (EPR) systems to inform a rational choice or development agenda. To develop a tool kit to measure the impact of different EPR system features on the consultation. We first developed a specification to overcome the limitations of existing methods. We divided this into work packages: (1) developing a method to display multichannel video of the consultation; (2) code and measure activities, including computer use and verbal interactions; (3) automate the capture of nonverbal interactions; (4) aggregate multiple observations into a single navigable output; and (5) produce an output interpretable by software developers. We piloted this method by filming live consultations (n = 22) by 4 general practitioners (GPs) using different EPR systems. We compared the time taken and variations during coded data entry, prescribing, and blood pressure (BP) recording. We used nonparametric tests to make statistical comparisons. We contrasted methods of BP recording using Unified Modeling Language (UML) sequence diagrams. We found that 4 channels of video were optimal. We identified an existing application for manual coding of video output. We developed in-house tools for capturing use of keyboard and mouse and to time stamp speech. The transcript is then typed within this time stamp. Although we managed to capture body language using pattern recognition software, we were unable to use this data quantitatively. We loaded these observational outputs into our aggregation tool, which allows simultaneous navigation and viewing of multiple files. This also creates a single exportable file in XML format, which we used to develop UML sequence diagrams. In our pilot, the GP using the EMIS LV (Egton Medical Information Systems Limited, Leeds, UK) system took the longest time to code data (mean 11.5 s, 95% CI 8.7-14.2). 
Nonparametric comparison of EMIS LV with the other systems showed a significant difference, with EMIS
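    The aggregation step described above, merging several time-stamped observation channels into a single navigable, exportable XML file, might look like the following minimal sketch (the channel names, events, and XML element names are invented, not ALFA's actual schema):

```python
import xml.etree.ElementTree as ET

def aggregate(streams):
    """Merge named, time-stamped event streams into one XML document,
    ordered by timestamp, in the spirit of ALFA's single navigable output."""
    root = ET.Element("consultation")
    events = [(t, name, text)
              for name, evs in streams.items()
              for t, text in evs]
    for t, name, text in sorted(events):
        ev = ET.SubElement(root, "event", channel=name, t=f"{t:.1f}")
        ev.text = text
    return ET.tostring(root, encoding="unicode")

streams = {
    "keyboard": [(2.5, "typed 'BP 120/80'")],
    "speech":   [(0.0, "GP greets patient"), (3.1, "discusses medication")],
    "mouse":    [(1.2, "opened prescribing screen")],
}
xml = aggregate(streams)
print(xml)
```

    Interleaving the channels on a common timeline is what allows, for example, computer use and verbal interaction to be compared second by second, or converted into a UML sequence diagram.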

  2. The ALFA (Activity Log Files Aggregation) Toolkit: A Method for Precise Observation of the Consultation

    PubMed Central

    2008-01-01

    Background There is a lack of tools to evaluate and compare Electronic patient record (EPR) systems to inform a rational choice or development agenda. Objective To develop a tool kit to measure the impact of different EPR system features on the consultation. Methods We first developed a specification to overcome the limitations of existing methods. We divided this into work packages: (1) developing a method to display multichannel video of the consultation; (2) code and measure activities, including computer use and verbal interactions; (3) automate the capture of nonverbal interactions; (4) aggregate multiple observations into a single navigable output; and (5) produce an output interpretable by software developers. We piloted this method by filming live consultations (n = 22) by 4 general practitioners (GPs) using different EPR systems. We compared the time taken and variations during coded data entry, prescribing, and blood pressure (BP) recording. We used nonparametric tests to make statistical comparisons. We contrasted methods of BP recording using Unified Modeling Language (UML) sequence diagrams. Results We found that 4 channels of video were optimal. We identified an existing application for manual coding of video output. We developed in-house tools for capturing use of keyboard and mouse and to time stamp speech. The transcript is then typed within this time stamp. Although we managed to capture body language using pattern recognition software, we were unable to use this data quantitatively. We loaded these observational outputs into our aggregation tool, which allows simultaneous navigation and viewing of multiple files. This also creates a single exportable file in XML format, which we used to develop UML sequence diagrams. In our pilot, the GP using the EMIS LV (Egton Medical Information Systems Limited, Leeds, UK) system took the longest time to code data (mean 11.5 s, 95% CI 8.7-14.2). 
Nonparametric comparison of EMIS LV with the other systems showed

  3. SwampLog: A Structured Journal for Reflection-in-Action.

    ERIC Educational Resources Information Center

    Nicassio, Frank

    1992-01-01

    Describes "SwampLog," an action-research journal process useful for recording and reflecting upon ongoing experience, exploring and creating innovative approaches to education, and gauging the resultant effects upon organizational, instructional, and individual renewal. (PRA)

  4. 75 FR 69644 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-15

    ..., organization, phone, fax, mobile, pager, Defense Switched Network (DSN) phone, other fax, other mobile, other.../Transport Layer Security (SSL/ TLS) connections, access control lists, file system permissions, intrusion detection and prevention systems and log monitoring. Complete access to all records is restricted to and...

  5. Well log characterization of natural gas-hydrates

    USGS Publications Warehouse

    Collett, Timothy S.; Lee, Myung W.

    2012-01-01

    In the last 25 years there have been significant advancements in the use of well-logging tools to acquire detailed information on the occurrence of gas hydrates in nature: whereas wireline electrical resistivity and acoustic logs were formerly used to identify gas-hydrate occurrences in wells drilled in Arctic permafrost environments, more advanced wireline and logging-while-drilling (LWD) tools are now routinely used to examine the petrophysical nature of gas-hydrate reservoirs and the distribution and concentration of gas hydrates within various complex reservoir systems. Resistivity- and acoustic-logging tools are the most widely used for estimating the gas-hydrate content (i.e., reservoir saturations) in various sediment types and geologic settings. Recent integrated sediment coring and well-log studies have confirmed that electrical-resistivity and acoustic-velocity data can yield accurate gas-hydrate saturations in sediment grain-supported (isotropic) systems such as sand reservoirs, but more advanced log-analysis models are required to characterize gas hydrate in fractured (anisotropic) reservoir systems. New well-logging tools designed to make directionally oriented acoustic and propagation-resistivity log measurements provide the data needed to analyze the acoustic and electrical anisotropic properties of both highly interbedded and fracture-dominated gas-hydrate reservoirs. Advancements in nuclear magnetic resonance (NMR) logging and wireline formation testing (WFT) also allow for the characterization of gas hydrate at the pore scale. Integrated NMR and formation testing studies from northern Canada and Alaska have yielded valuable insight into how gas hydrates are physically distributed in sediments and the occurrence and nature of pore fluids (i.e., free water along with clay- and capillary-bound water) in gas-hydrate-bearing reservoirs. Information on the distribution of gas hydrate at the pore scale has provided invaluable insight on the mechanisms

  6. Rapid estimation of aquifer salinity structure from oil and gas geophysical logs

    NASA Astrophysics Data System (ADS)

    Shimabukuro, D.; Stephens, M.; Ducart, A.; Skinner, S. M.

    2016-12-01

    We describe a workflow for creating aquifer salinity maps using Archie's equation for areas that have geophysical data from oil and gas wells. We apply this method in California, where geophysical logs are available in raster format from the Division of Oil, Gas, and Geothermal Resources (DOGGR) online archive. This method should be applicable to any region where geophysical logs are readily available. Much of the work is controlled by computer code, allowing salinity estimates for new areas to be rapidly generated. For a region of interest, the DOGGR online database is scraped for wells that were logged with multi-tool suites, such as the Platform Express or Triple Combination logging tools. Then, well construction metadata, such as measured depth, spud date, and well orientation, are attached. The resulting local database allows a weighted-criteria selection of the wells most likely to have the shallow resistivity, deep resistivity, and density porosity measurements necessary to calculate salinity over the longest depth interval. The algorithm can be adjusted for geophysical log availability in older well fields and for density of sampling. Once priority wells are identified, a student researcher team uses Neuralog software to digitize the raster geophysical logs. Total dissolved solids (TDS) concentration is then calculated in clean, wet sand intervals using the resistivity-porosity method, a modified form of Archie's equation. These sand intervals are automatically selected using a combination of spontaneous potential and the difference between shallow and deep resistivity measurements. Gamma ray logs are not used because arkosic sands common in California make it difficult to distinguish sand from shale. Computer calculation allows easy adjustment of Archie's parameters. The result is a semi-continuous TDS profile for the wells of interest. These profiles are combined and contoured using standard 3-D visualization software to yield preliminary salinity
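    The resistivity-porosity calculation at the core of this workflow can be sketched as follows. The Archie parameters (a, m), the rule-of-thumb EC-to-TDS factor, and the omission of temperature correction are simplifying assumptions for illustration, not the authors' exact choices:

```python
def tds_from_logs(Ro_ohmm, porosity, a=1.0, m=2.0, tds_factor=0.64):
    """Resistivity-porosity method (a form of Archie's equation) for a
    clean, fully water-saturated sand:
        Rw = Ro * porosity**m / a
    followed by a common rule-of-thumb conversion from formation-water
    resistivity to total dissolved solids:
        EC [uS/cm] ~= 10_000 / Rw [ohm-m],   TDS [mg/L] ~= 0.64 * EC
    Temperature correction of Rw is omitted here for simplicity."""
    Rw = Ro_ohmm * porosity ** m / a
    ec_uscm = 10_000 / Rw
    return tds_factor * ec_uscm

# Deep resistivity of 20 ohm-m in a clean sand with 25% density porosity:
print(round(tds_from_logs(20, 0.25)))  # 5120 mg/L
```

    In practice a and m are tuned per field, and the EC-to-TDS factor varies with water chemistry, which is why the workflow keeps Archie's parameters easy to adjust.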

  7. Financial and Economic Analysis of Reduced Impact Logging

    Treesearch

    Tom Holmes

    2016-01-01

    Concern regarding extensive damage to tropical forests resulting from logging increased dramatically after World War II when mechanized logging systems developed in industrialized countries were deployed in the tropics. As a consequence, tropical foresters began developing logging procedures that were more environmentally benign, and by the 1990s, these practices began...

  8. Avian responses to selective logging shaped by species traits and logging practices

    PubMed Central

    Burivalova, Zuzana; Lee, Tien Ming; Giam, Xingli; Şekercioğlu, Çağan Hakkı; Wilcove, David S.; Koh, Lian Pin

    2015-01-01

    Selective logging is one of the most common forms of forest use in the tropics. Although the effects of selective logging on biodiversity have been widely studied, there is little agreement on the relationship between life-history traits and tolerance to logging. In this study, we assessed how species traits and logging practices combine to determine species responses to selective logging, based on over 4000 observations of the responses of nearly 1000 bird species to selective logging across the tropics. Our analysis shows that species traits, such as feeding group and body mass, and logging practices, such as time since logging and logging intensity, interact to influence a species' response to logging. Frugivores and insectivores were most adversely affected by logging and declined further with increasing logging intensity. Nectarivores and granivores responded positively to selective logging for the first two decades, after which their abundances decrease below pre-logging levels. Larger species of omnivores and granivores responded more positively to selective logging than smaller species from either feeding group, whereas this effect of body size was reversed for carnivores, herbivores, frugivores and insectivores. Most importantly, species most negatively impacted by selective logging had not recovered approximately 40 years after logging cessation. We conclude that selective timber harvest has the potential to cause large and long-lasting changes in avian biodiversity. However, our results suggest that the impacts can be mitigated to a certain extent through specific forest management strategies such as lengthening the rotation cycle and implementing reduced impact logging. PMID:25994673

  9. 20 CFR 401.85 - Exempt systems.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... subsection (k)(2) of the Privacy Act: (A) The General Criminal Investigation Files, SSA; (B) The Criminal Investigations File, SSA; and, (C) The Program Integrity Case Files, SSA. (D) Civil and Administrative Investigative Files of the Inspector General, SSA/OIG. (E) Complaint Files and Log. SSA/OGC. (iii) Pursuant to...

  10. Well Acord 1-26 Logs and Data: Roosevelt Hot Spring Area, Utah (Utah FORGE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joe Moore

    This is a compilation of logs and data from Well Acord 1-26 in the Roosevelt Hot Springs area in Utah. This well is also in the Utah FORGE study area. The file is in a compressed .zip format and there is a data inventory table (Excel spreadsheet) in the root folder that is a guide to the data that is accessible in subfolders.
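    Reading such a bundle programmatically is straightforward with Python's standard zipfile module; the member names below are placeholders, not the archive's actual contents:

```python
import io
import zipfile

def inventory(zip_bytes):
    """Return (member path, uncompressed size) for each file in a .zip bundle."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [(info.filename, info.file_size) for info in zf.infolist()]

# Tiny in-memory stand-in for the archive; member names are hypothetical
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data_inventory.xlsx", b"placeholder")
    zf.writestr("logs/acord_1-26_gamma.las", b"~Version ...")
print(inventory(buf.getvalue()))
```

    Listing the members first, then consulting the inventory spreadsheet in the root folder, is the natural way to navigate the subfolders of data.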

  11. Unbinding Transition of Probes in Single-File Systems

    NASA Astrophysics Data System (ADS)

    Bénichou, Olivier; Démery, Vincent; Poncet, Alexis

    2018-02-01

    Single-file transport, arising in quasi-one-dimensional geometries where particles cannot pass each other, is characterized by the anomalous dynamics of a probe, notably its response to an external force. In these systems, the motion of several probes submitted to different external forces, although relevant to mixtures of charged and neutral or active and passive objects, remains unexplored. Here, we determine how several probes respond to external forces. We rely on a hydrodynamic description of the symmetric exclusion process to obtain exact analytical results at long times. We show that the probes can either move as a whole, or separate into two groups moving away from each other. In between the two regimes, they separate with a different dynamical exponent, as t^(1/4). This unbinding transition also occurs in several continuous single-file systems and is expected to be observable.

  12. A new high-precision borehole-temperature logging system used at GISP2, Greenland, and Taylor Dome, Antarctica

    USGS Publications Warehouse

    Clow, G.D.; Saltus, R.W.; Waddington, E.D.

    1996-01-01

    We describe a high-precision (0.1-1.0 mK) borehole-temperature (BT) logging system developed at the United States Geological Survey (USGS) for use in remote polar regions. We discuss calibration, operational, and data-processing procedures and present an analysis of the measurement errors. The system is modular to facilitate calibration procedures and field repairs. By interchanging logging cables and temperature sensors, measurements can be made in either shallow air-filled boreholes or liquid-filled holes up to 7 km deep. Data can be acquired in either incremental or continuous-logging modes. The precision of data collected by the new logging system is high enough to detect and quantify various thermal effects at the millikelvin level. To illustrate this capability, we present sample data from the 3 km deep borehole at GISP2, Greenland, and from a 130 m deep air-filled hole at Taylor Dome, Antarctica. The precision of the processed GISP2 continuous temperature logs is 0.25-0.34 mK, while the accuracy is estimated to be 4.5 mK. The effects of fluid convection and the dissipation of the thermal disturbance caused by drilling the borehole are clearly visible in the data. The precision of the incremental Taylor Dome measurements varies from 0.11 to 0.32 mK, depending on the wind strength during the experiments. With this precision, we found that temperature fluctuations and multi-hour trends in the BT measurements correlate well with atmospheric-pressure changes.

  13. Ex Vivo Comparison of Mtwo and RaCe Rotary File Systems in Root Canal Deviation: One File Only versus the Conventional Method.

    PubMed

    Aminsobhani, Mohsen; Razmi, Hasan; Nozari, Solmaz

    2015-07-01

    Cleaning and shaping of the root canal system is an important step in endodontic therapy. New instruments incorporate new preparation techniques that can improve the efficacy of cleaning and shaping. The aim of this study was to compare the efficacy of the Mtwo and RaCe rotary file systems in straightening the canal curvature using only one file or the conventional method. Sixty mesial roots of extracted human mandibular molars were prepared with RaCe and Mtwo nickel-titanium (NiTi) rotary files using the conventional and one-file-only methods. The working length was 18 mm, and the curvatures of the root canals were between 15-45°. By superimposing x-ray images taken before and after the instrumentation, deviation of the canals was assessed using Adobe Photoshop CS3 software. Preparation time was recorded. Data were analyzed using three-way ANOVA and Tukey's post hoc test. There were no significant differences between RaCe and Mtwo or between the two root canal preparation methods in root canal deviation in buccolingual and mesiodistal radiographs (P>0.05). Changes of root canal curvature in >35° subgroups were significantly greater than in subgroups with smaller canal curvatures. Preparation time was shorter with the one-file-only technique. According to the results, the two rotary systems and the two root canal preparation methods had equal efficacy in straightening the canals, but the preparation time was shorter in the one-file-only group.

  14. 40 CFR 90.412 - Data logging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Data logging. 90.412 Section 90.412....412 Data logging. (a) A computer or any other automatic data collection (ADC) device(s) may be used as long as the system meets the requirements of this subpart. (b) Determine from the data collection...

  15. 40 CFR 89.409 - Data logging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Data logging. 89.409 Section 89.409... Data logging. (a) A computer or any other automatic data processing device(s) may be used as long as the system meets the requirements of this subpart. (b) Determine from the data collection records the...

  16. 40 CFR 89.409 - Data logging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Data logging. 89.409 Section 89.409... Data logging. (a) A computer or any other automatic data processing device(s) may be used as long as the system meets the requirements of this subpart. (b) Determine from the data collection records the...

  17. 40 CFR 90.412 - Data logging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Data logging. 90.412 Section 90.412....412 Data logging. (a) A computer or any other automatic data collection (ADC) device(s) may be used as long as the system meets the requirements of this subpart. (b) Determine from the data collection...

  18. 40 CFR 89.409 - Data logging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Data logging. 89.409 Section 89.409... Data logging. (a) A computer or any other automatic data processing device(s) may be used as long as the system meets the requirements of this subpart. (b) Determine from the data collection records the...

  19. 40 CFR 90.412 - Data logging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Data logging. 90.412 Section 90.412....412 Data logging. (a) A computer or any other automatic data collection (ADC) device(s) may be used as long as the system meets the requirements of this subpart. (b) Determine from the data collection...

  20. 40 CFR 89.409 - Data logging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Data logging. 89.409 Section 89.409... Data logging. (a) A computer or any other automatic data processing device(s) may be used as long as the system meets the requirements of this subpart. (b) Determine from the data collection records the...

  1. 40 CFR 90.412 - Data logging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Data logging. 90.412 Section 90.412....412 Data logging. (a) A computer or any other automatic data collection (ADC) device(s) may be used as long as the system meets the requirements of this subpart. (b) Determine from the data collection...

  2. 40 CFR 89.409 - Data logging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Data logging. 89.409 Section 89.409... Data logging. (a) A computer or any other automatic data processing device(s) may be used as long as the system meets the requirements of this subpart. (b) Determine from the data collection records the...

  3. 40 CFR 90.412 - Data logging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Data logging. 90.412 Section 90.412....412 Data logging. (a) A computer or any other automatic data collection (ADC) device(s) may be used as long as the system meets the requirements of this subpart. (b) Determine from the data collection...

  4. Design and implementation of encrypted and decrypted file system based on USBKey and hardware code

    NASA Astrophysics Data System (ADS)

    Wu, Kehe; Zhang, Yakun; Cui, Wenchao; Jiang, Ting

    2017-05-01

    To protect the privacy of sensitive data, an encrypted and decrypted file system based on USBKey and hardware code is designed and implemented in this paper. The system uses the USBKey and a hardware code to authenticate the user. We use a random key to encrypt each file with a symmetric encryption algorithm, and the USBKey to encrypt the random key with an asymmetric encryption algorithm. At the same time, we use the MD5 algorithm to calculate the hash of the file to verify its integrity. Experimental results show that large files can be encrypted and decrypted in a very short time. The system has high efficiency and ensures the security of documents.
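
    The hybrid scheme the abstract describes (a fresh random key per file for symmetric encryption, the USBKey's asymmetric key wrapping that random key, and an MD5 digest for integrity) can be sketched with Python's standard library. The XOR cipher below is only a stand-in for the real symmetric algorithm, and all names are illustrative, not taken from the paper:

```python
import hashlib
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Stand-in for the paper's symmetric algorithm (illustrative only).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def protect(plaintext: bytes):
    file_key = secrets.token_bytes(16)           # random per-file key
    ciphertext = xor_cipher(plaintext, file_key)
    digest = hashlib.md5(plaintext).hexdigest()  # MD5 integrity check
    # In the real system, file_key would itself be encrypted with the
    # USBKey's asymmetric key before being stored alongside the file.
    return ciphertext, file_key, digest

def recover(ciphertext: bytes, file_key: bytes, digest: str) -> bytes:
    plaintext = xor_cipher(ciphertext, file_key)
    # Recompute the digest after decryption to verify integrity.
    assert hashlib.md5(plaintext).hexdigest() == digest
    return plaintext

ct, key, dg = protect(b"sensitive document")
assert recover(ct, key, dg) == b"sensitive document"
```

    In the real design the random key never needs to leave the machine in the clear; only its asymmetrically wrapped form is stored, so decryption requires the physical USBKey.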

  5. Environmental effects of postfire logging: literature review and annotated bibliography.

    Treesearch

    James D. McIver; Lynn Starr

    2000-01-01

    The scientific literature on logging after wildfire is reviewed, with a focus on environmental effects of logging and removal of large woody structure. Rehabilitation, the practice of planting or seeding after logging, is not reviewed here. Several publications are cited that can be described as “commentaries,” intended to help frame the public debate. We review 21...

  6. 4. Log chicken house (far left foreground), log bunkhouse (far ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. Log chicken house (far left foreground), log bunkhouse (far left background), one-room log cabin (left of center background), log root cellar (center), post-and-beam center in foreground, and blacksmith shop (far right foreground). View to southeast. - William & Lucina Bowe Ranch, County Road 44, 0.1 mile northeast of Big Hole River Bridge, Melrose, Silver Bow County, MT

  7. Sharing lattice QCD data over a widely distributed file system

    NASA Astrophysics Data System (ADS)

    Amagasa, T.; Aoki, S.; Aoki, Y.; Aoyama, T.; Doi, T.; Fukumura, K.; Ishii, N.; Ishikawa, K.-I.; Jitsumoto, H.; Kamano, H.; Konno, Y.; Matsufuru, H.; Mikami, Y.; Miura, K.; Sato, M.; Takeda, S.; Tatebe, O.; Togawa, H.; Ukawa, A.; Ukita, N.; Watanabe, Y.; Yamazaki, T.; Yoshie, T.

    2015-12-01

    JLDG is a data grid for the lattice QCD (LQCD) community in Japan. Several large research groups in Japan have been working on lattice QCD simulations using supercomputers distributed over distant sites. The JLDG provides such collaborations with an efficient method of data management and sharing. File servers installed at 9 sites are connected to the NII SINET VPN and are bound into a single file system with GFarm. The file system looks the same from any site, so users can run analyses on a supercomputer at one site using data generated and stored in the JLDG at a different site. We present a brief description of the hardware and software of the JLDG, including a recently developed subsystem for cooperating with the HPCI shared storage, and report performance and statistics of the JLDG. As of April 2015, 15 research groups (61 users) store their daily research data, amounting to 4.7 PB including replicas and 68 million files in total. The number of publications from work that used the JLDG is 98. The large number of publications and the recent rapid increase in disk usage convince us that the JLDG has grown into a useful infrastructure for the LQCD community in Japan.

  8. 40 CFR 91.412 - Data logging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Data logging. 91.412 Section 91.412... EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Gaseous Exhaust Test Procedures § 91.412 Data logging. (a) A computer or any other automatic data collection (ADC) device(s) may be used as long as the system meets the...

  9. 40 CFR 91.412 - Data logging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Data logging. 91.412 Section 91.412... EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Gaseous Exhaust Test Procedures § 91.412 Data logging. (a) A computer or any other automatic data collection (ADC) device(s) may be used as long as the system meets the...

  10. 40 CFR 91.412 - Data logging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Data logging. 91.412 Section 91.412... EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Gaseous Exhaust Test Procedures § 91.412 Data logging. (a) A computer or any other automatic data collection (ADC) device(s) may be used as long as the system meets the...

  11. 40 CFR 91.412 - Data logging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Data logging. 91.412 Section 91.412... EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Gaseous Exhaust Test Procedures § 91.412 Data logging. (a) A computer or any other automatic data collection (ADC) device(s) may be used as long as the system meets the...

  12. 40 CFR 91.412 - Data logging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Data logging. 91.412 Section 91.412... EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Gaseous Exhaust Test Procedures § 91.412 Data logging. (a) A computer or any other automatic data collection (ADC) device(s) may be used as long as the system meets the...

  13. 76 FR 61956 - Electronic Tariff Filing System (ETFS)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-06

    ...] Electronic Tariff Filing System (ETFS) AGENCY: Federal Communications Commission. ACTION: Final rule; announcement of effective date. SUMMARY: In this document, the Commission announces that the Office of Management and Budget (OMB) has approved, for a period of three years, the information collection associated...

  14. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
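
    The recovery idea in the abstract (mirror the register file, detect a corrupted read, and replace the bad data with the mirror's copy) can be modeled in a few lines. This is a toy software model, not the patented circuit; the parity check stands in for the patent's error detection circuitry, and all names are illustrative:

```python
class MirroredRegisterFile:
    def __init__(self, size: int):
        self.primary = [0] * size
        self.mirror = [0] * size   # second register file mirroring the first
        self.parity = [0] * size   # even/odd parity recorded at write time

    def write(self, i: int, value: int) -> None:
        self.primary[i] = value
        self.mirror[i] = value
        self.parity[i] = bin(value).count("1") % 2

    def read(self, i: int) -> int:
        value = self.primary[i]
        if bin(value).count("1") % 2 != self.parity[i]:
            # Soft error detected: replace the corrupted data in the
            # primary with the copy held in the mirror, as in the abstract.
            value = self.primary[i] = self.mirror[i]
        return value

rf = MirroredRegisterFile(4)
rf.write(0, 5)
rf.primary[0] = 4          # simulate a single-bit upset in the primary
assert rf.read(0) == 5     # recovery restores the mirrored value
```

    The patent inserts a recovery *instruction* into the arithmetic pipeline rather than patching on read, but the data flow (detect, then copy mirror to primary) is the same.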

  15. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut

    File layout of array data is a critical factor that affects the behavior of storage caches, yet it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O-intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  16. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, yet it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O-intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  17. Program Description: Financial Master File Processor-SWRL Financial System.

    ERIC Educational Resources Information Center

    Ideda, Masumi

    Computer routines designed to produce various management and accounting reports required by the Southwest Regional Laboratory's (SWRL) Financial System are described. Input data requirements and output report formats are presented together with a discussion of the Financial Master File updating capabilities of the system. This document should be…

  18. A novel glass slide filing system for pathology slides.

    PubMed

    Tsai, Steve; Kartono, Francisca; Shitabata, Paul K

    2007-07-01

    The availability of a collection of microscope glass slides for review is essential in the study and practice of pathology. A common problem facing many pathologists is the lack of a well-organized filing system. We present a novel system that would be easily accessible, informative, protective, and portable.

  19. Contrast Invariant Interest Point Detection by Zero-Norm LoG Filter.

    PubMed

    Zhenwei Miao; Xudong Jiang; Kim-Hui Yap

    2016-01-01

    The Laplacian of Gaussian (LoG) filter is widely used in interest point detection. However, low-contrast image structures, though stable and significant, are often submerged by high-contrast ones in the response image of the LoG filter, and hence are difficult to detect. To solve this problem, we derive a generalized LoG filter and propose a zero-norm LoG filter. The response of the zero-norm LoG filter is proportional to the weighted number of bright/dark pixels in a local region, which makes the filter invariant to image contrast. Based on the zero-norm LoG filter, we develop an interest point detector to extract local structures from images. Compared with contrast-dependent detectors, such as the popular scale-invariant feature transform detector, the proposed detector is robust to illumination changes and abrupt variations in images. Experiments on benchmark databases demonstrate the superior performance of the proposed zero-norm LoG detector in terms of the repeatability and matching score of the detected points as well as the image recognition rate under different conditions.
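
    For reference, the classic LoG kernel that the paper generalizes can be built with NumPy. This is the standard contrast-dependent filter, not the paper's zero-norm variant; the size and sigma values are arbitrary illustrative choices. The zero-sum property enforced at the end is what makes flat image regions produce zero response:

```python
import numpy as np

def log_kernel(size: int = 9, sigma: float = 1.4) -> np.ndarray:
    # Classic Laplacian-of-Gaussian kernel (not the paper's zero-norm
    # variant). Build a (size x size) grid of squared radii, apply the
    # analytic LoG expression, then shift to an exact zero sum.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # zero-sum: constant regions respond with 0

k = log_kernel()
assert k.shape == (9, 9)
assert abs(k.sum()) < 1e-9   # zero-sum property
```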

  20. Ex Vivo Comparison of Mtwo and RaCe Rotary File Systems in Root Canal Deviation: One File Only versus the Conventional Method

    PubMed Central

    Aminsobhani, Mohsen; Nozari, Solmaz

    2015-01-01

    Objectives: Cleaning and shaping of the root canal system is an important step in endodontic therapy. New instruments incorporate new preparation techniques that can improve the efficacy of cleaning and shaping. The aim of this study was to compare the efficacy of the Mtwo and RaCe rotary file systems in straightening the canal curvature using only one file or the conventional method. Materials and Methods: Sixty mesial roots of extracted human mandibular molars were prepared with RaCe and Mtwo nickel-titanium (NiTi) rotary files using the conventional and only-one-rotary-file methods. The working length was 18 mm and the curvatures of the root canals were between 15–45°. By superimposing x-ray images taken before and after instrumentation, deviation of the canals was assessed using Adobe Photoshop CS3 software. Preparation time was recorded. Data were analyzed using three-way ANOVA and Tukey’s post hoc test. Results: There were no significant differences between RaCe and Mtwo or between the two root canal preparation methods in root canal deviation on buccolingual and mesiodistal radiographs (P>0.05). Changes in root canal curvature in the >35° subgroups were significantly greater than in the subgroups with smaller canal curvatures. Preparation time was shorter with the one-file-only technique. Conclusion: According to the results, the two rotary systems and the two root canal preparation methods had equal efficacy in straightening the canals, but preparation time was shorter in the one-file-only group. PMID:26877736

  1. Log N-log S is inconclusive

    NASA Technical Reports Server (NTRS)

    Klebesadel, R. W.; Fenimore, E. E.; Laros, J.

    1983-01-01

    The log N-log S data acquired by the Pioneer Venus Orbiter Gamma Burst Detector (PVO) are presented and compared to similar data from the Soviet KONUS experiment. Although the PVO data are consistent with and suggestive of a -3/2 power law distribution, the results are not adequate at this stage of observations to differentiate between a -3/2 and a -1 power law slope.

  2. Structure of the top of the Karnak Limestone Member (Ste. Genevieve) in Illinois

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bristol, H.M.; Howard, R.H.

    1976-01-01

    To facilitate petroleum exploration in Illinois, the Illinois State Geological Survey presents a structure map (for most of southern Illinois) of the Karnak Limestone Member--a relatively pure persistent limestone unit (generally 10 to 35 ft thick) in the Ste. Genevieve Limestone of Genevievian age. All available electric logs and selected studies of well cuttings were used in constructing the map. Oil and gas development maps containing Karnak-structure contours are on open file at the ISGS.

  3. Optimal message log reclamation for independent checkpointing

    NASA Technical Reports Server (NTRS)

    Wang, Yi-Min; Fuchs, W. Kent

    1993-01-01

    Independent (uncoordinated) checkpointing for parallel and distributed systems allows maximum process autonomy but suffers from possible domino effects and the associated storage space overhead for maintaining multiple checkpoints and message logs. In most research on checkpointing and recovery, it was assumed that only the checkpoints and message logs older than the global recovery line can be discarded. It is shown how recovery line transformation and decomposition can be applied to the problem of efficiently identifying all discardable message logs, thereby achieving optimal garbage collection. Communication trace-driven simulation for several parallel programs is used to show the benefits of the proposed algorithm for message log reclamation.
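
    The conventional garbage-collection rule the abstract starts from can be sketched directly: a message log entry is discardable once it is older than the global recovery line for its process. The paper's contribution is to identify *more* discardable logs via recovery line transformation; the sketch below shows only the baseline rule, with an illustrative data model not taken from the paper:

```python
def discardable(logs, recovery_line):
    """Return the log entries that fall before the recovery line of their
    process, i.e. those safe to garbage-collect under the baseline rule."""
    return [e for e in logs if e["ts"] < recovery_line[e["proc"]]]

logs = [
    {"proc": 0, "ts": 3}, {"proc": 0, "ts": 9},
    {"proc": 1, "ts": 2}, {"proc": 1, "ts": 8},
]
recovery_line = {0: 5, 1: 4}   # latest consistent checkpoint per process
assert discardable(logs, recovery_line) == [
    {"proc": 0, "ts": 3}, {"proc": 1, "ts": 2},
]
```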

  4. Robust Spatial Autoregressive Modeling for Hardwood Log Inspection

    Treesearch

    Dongping Zhu; A.A. Beex

    1994-01-01

    We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...

  5. Bird species and traits associated with logged and unlogged forest in Borneo.

    PubMed

    Cleary, Daniel F R; Boyle, Timothy J B; Setyawati, Titiek; Anggraeni, Celina D; Van Loon, E Emiel; Menken, Steph B J

    2007-06-01

    The ecological consequences of logging have been and remain a focus of considerable debate. In this study, we assessed bird species composition within a logging concession in Central Kalimantan, Indonesian Borneo. Within the study area (approximately 196 km²) a total of 9747 individuals of 177 bird species were recorded. Our goal was to identify associations between species traits and environmental variables. This can help us to understand the causes of disturbance and predict whether species with given traits will persist under changing environmental conditions. Logging, slope position, and a number of habitat structure variables including canopy cover and liana abundance were significantly related to variation in bird composition. In addition to environmental variables, spatial variables also explained a significant amount of variation. However, environmental variables, particularly in relation to logging, were of greater importance in structuring variation in composition. Environmental change following logging appeared to have a pronounced effect on the feeding guild and size class structure, but there was little evidence of an effect on restricted-range or threatened species, although certain threatened species were adversely affected. For example, species such as the terrestrial insectivore Argusianus argus and the hornbill Buceros rhinoceros, both of which are threatened, were rare or absent in recently logged forest. In contrast, undergrowth insectivores such as Orthotomus atrogularis and Trichastoma rostratum were abundant in recently logged forest and rare in unlogged forest. Logging appeared to have the strongest negative effect on hornbills, terrestrial insectivores, and canopy bark-gleaning insectivores while moderately affecting canopy foliage-gleaning insectivores and frugivores, raptors, and large species in general. In contrast, undergrowth insectivores responded positively to logging, while most understory guilds showed little pronounced effect.

  6. Critical care procedure logging using handheld computers

    PubMed Central

    Carlos Martinez-Motta, J; Walker, Robin; Stewart, Thomas E; Granton, John; Abrahamson, Simon; Lapinsky, Stephen E

    2004-01-01

    Introduction: We conducted this study to evaluate the feasibility of implementing an internet-linked handheld computer procedure logging system in a critical care training program. Methods: Subspecialty trainees in the Interdepartmental Division of Critical Care at the University of Toronto received and were trained in the use of Palm handheld computers loaded with a customized program for logging critical care procedures. The procedures were entered into the handheld device using checkboxes and drop-down lists, and data were uploaded to a central database via the internet. To evaluate the feasibility of this system, we tracked the utilization of this data collection system. Benefits and disadvantages were assessed through surveys. Results: All 11 trainees successfully uploaded data to the central database, but only six (55%) continued to upload data on a regular basis. The most common reason cited for not using the system pertained to initial technical problems with data uploading. From 1 July 2002 to 30 June 2003, a total of 914 procedures were logged. Significant variability was noted in the number of procedures logged by individual trainees (range 13–242). The database generated by regular users provided potentially useful information to the training program director regarding the scope and location of procedural training among the different rotations and hospitals. Conclusion: A handheld computer procedure logging system can be effectively used in a critical care training program. However, user acceptance was not uniform, and continued training and support are required to increase user acceptance. Such a procedure database may provide valuable information that may be used to optimize trainees' educational experience and to document clinical training experience for licensing and accreditation. PMID:15469577

  7. Comparison of canal transportation and centering ability of twisted files, Pathfile-ProTaper system, and stainless steel hand K-files by using computed tomography.

    PubMed

    Gergi, Richard; Rjeily, Joe Abou; Sader, Joseph; Naaman, Alfred

    2010-05-01

    The purpose of this study was to compare canal transportation and centering ability of 2 rotary nickel-titanium (NiTi) systems (Twisted Files [TF] and Pathfile-ProTaper [PP]) with conventional stainless steel K-files. Ninety root canals with severe curvature and short radius were selected. Canals were divided randomly into 3 groups of 30 each. After preparation with TF, PP, and stainless steel files, the amount of transportation that occurred was assessed by using computed tomography. Three sections from apical, mid-root, and coronal levels of the canal were recorded. Amount of transportation and centering ability were assessed. The 3 groups were statistically compared with analysis of variance and Tukey honestly significant difference test. Less transportation and better centering ability occurred with TF rotary instruments (P < .0001). K-files showed the highest transportation followed by PP system. PP system showed significant transportation when compared with TF (P < .0001). The TF system was found to be the best for all variables measured in this study. Copyright (c) 2010 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
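
    The abstract does not state the measurement formulas, but CT transportation studies conventionally use the definitions of Gambill et al.: transportation is the difference between pre- and post-instrumentation canal-to-root-edge distances on the two sides, and the centering ratio is the smaller of those differences over the larger. The sketch below is therefore an assumption about the method, included only to make the quantities concrete:

```python
def transportation(x1: float, x2: float, y1: float, y2: float) -> float:
    # x1/y1: distance from canal edge to mesial/distal root edge before
    # instrumentation; x2/y2: the same after. 0 means no transportation.
    return (x1 - x2) - (y1 - y2)

def centering_ratio(x1: float, x2: float, y1: float, y2: float) -> float:
    # Ratio of the smaller resin/dentine removal to the larger;
    # 1.0 indicates a perfectly centered preparation.
    a, b = x1 - x2, y1 - y2
    hi, lo = max(a, b), min(a, b)
    return 1.0 if hi == 0 else lo / hi

assert transportation(1.0, 0.75, 1.0, 0.75) == 0.0   # symmetric removal
assert centering_ratio(1.0, 0.75, 1.0, 0.875) == 0.5
```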

  8. Enhancement/upgrade of Engine Structures Technology Best Estimator (EST/BEST) Software System

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin

    2003-01-01

    This report describes the work performed during the contract period and the capabilities included in the EST/BEST software system. The developed EST/BEST software system includes the integrated NESSUS, IPACS, COBSTRAN, and ALCCA computer codes required to perform the engine cycle mission and component structural analysis. Also, the interactive input generator for the NESSUS, IPACS, and COBSTRAN computer codes has been developed and integrated with the EST/BEST software system. The input generator allows the user to create input from scratch as well as edit existing input files interactively. Since it has been integrated with the EST/BEST software system, it enables the user to modify EST/BEST-generated files and perform the analysis to evaluate the benefits. Appendix A gives details of how to use the newly added features in the EST/BEST software system.

  9. Biased estimation of forest log characteristics using intersect diameters

    Treesearch

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton

    2009-01-01

    Logs are an important structural feature of forest ecosystems, and their abundance affects many resources and forest processes, including fire regimes, soil productivity, silviculture, carbon cycling, and wildlife habitat. Consequently, logs are often sampled to estimate their frequency, percent cover, volume, and weight. The line-intersect method (LIM) is one of the...
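
    The line-intersect method the abstract refers to is classically computed with Van Wagner's (1968) estimator, V = π² Σ dᵢ² / (8L): log volume per unit area from the diameters dᵢ of logs crossed by a transect of length L. Which exact variant the authors analyze for bias is not stated in the abstract, so treat this as the textbook baseline:

```python
import math

def lim_volume_per_area(diameters_m, transect_length_m):
    # Van Wagner's line-intersect estimator: volume of down logs per unit
    # area, from intersect diameters (m) along a transect of length L (m).
    return math.pi ** 2 * sum(d * d for d in diameters_m) / (8 * transect_length_m)

v = lim_volume_per_area([0.10, 0.20], 100.0)  # two logs crossed on a 100 m line
assert abs(v - math.pi ** 2 * 0.05 / 800.0) < 1e-15
```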

  10. The Use of OPAC in a Large Academic Library: A Transactional Log Analysis Study of Subject Searching

    ERIC Educational Resources Information Center

    Villen-Rueda, Luis; Senso, Jose A.; de Moya-Anegon, Felix

    2007-01-01

    The analysis of user searches in catalogs has been the topic of research for over four decades, involving numerous studies and diverse methodologies. The present study looks at how different types of users effect queries in the catalog of a university library. For this purpose, we analyzed log files to determine which was the most frequent type of…

  11. Family Child Care Inventory-Keeper: The Complete Log for Depreciating and Insuring Your Property. Redleaf Business Series.

    ERIC Educational Resources Information Center

    Copeland, Tom

    Figuring depreciation can be the most difficult aspect of filing tax returns for a family child care program. This inventory log for family child care programs is designed to assist in keeping track of the furniture, appliances, and other property used in the child care business; once these items have been identified, they can be deducted as…

  12. Evaluation of the incidence of microcracks caused by Mtwo and ProTaper Next rotary file systems versus the self-adjusting file: A scanning electron microscopic study

    PubMed Central

    Saha, Suparna Ganguly; Vijaywargiya, Neelam; Saxena, Divya; Saha, Mainak Kanti; Bharadwaj, Anuj; Dubey, Sandeep

    2017-01-01

    Introduction: To evaluate the incidence of microcrack formation during canal preparation with two rotary nickel–titanium systems, Mtwo and ProTaper Next, along with the self-adjusting file system. Materials and Methods: One hundred and twenty mandibular premolar teeth were selected. Standardized access cavities were prepared and the canals were manually prepared up to size 20 after coronal preflaring. The teeth were divided into three experimental groups and one control group (n = 30). Group 1: The canals were prepared using Mtwo rotary files. Group 2: The canals were prepared with ProTaper Next files. Group 3: The canals were prepared with self-adjusting files. Group 4: The canals were unprepared and used as a control. The roots were sectioned horizontally 3, 6, and 9 mm from the apex and examined under a scanning electron microscope to check for the presence of microcracks. The Pearson's Chi-square test was applied. Results: The highest incidence of microcracks was associated with the ProTaper Next group, 80% (P = 0.00), followed by the Mtwo group, 70% (P = 0.000), and the fewest microcracks were noted in the self-adjusting file group, 10% (P = 0.068). No significant difference was found between the ProTaper Next and Mtwo groups (P = 0.368), while a significant difference was observed between the ProTaper Next and self-adjusting file groups (P = 0.000) as well as the Mtwo and self-adjusting file groups (P = 0.000). Conclusion: All nickel–titanium rotary instrument systems were associated with microcracks. However, the self-adjusting file system had significantly fewer microcracks when compared with the Mtwo and ProTaper Next. PMID:29386786

  13. Evaluation of the incidence of microcracks caused by Mtwo and ProTaper Next rotary file systems versus the self-adjusting file: A scanning electron microscopic study.

    PubMed

    Saha, Suparna Ganguly; Vijaywargiya, Neelam; Saxena, Divya; Saha, Mainak Kanti; Bharadwaj, Anuj; Dubey, Sandeep

    2017-01-01

    To evaluate the incidence of microcrack formation during canal preparation with two rotary nickel-titanium systems, Mtwo and ProTaper Next, along with the self-adjusting file system. One hundred and twenty mandibular premolar teeth were selected. Standardized access cavities were prepared and the canals were manually prepared up to size 20 after coronal preflaring. The teeth were divided into three experimental groups and one control group (n = 30). Group 1: The canals were prepared using Mtwo rotary files. Group 2: The canals were prepared with ProTaper Next files. Group 3: The canals were prepared with self-adjusting files. Group 4: The canals were unprepared and used as a control. The roots were sectioned horizontally 3, 6, and 9 mm from the apex and examined under a scanning electron microscope to check for the presence of microcracks. The Pearson's Chi-square test was applied. The highest incidence of microcracks was associated with the ProTaper Next group, 80% (P = 0.00), followed by the Mtwo group, 70% (P = 0.000), and the fewest microcracks were noted in the self-adjusting file group, 10% (P = 0.068). No significant difference was found between the ProTaper Next and Mtwo groups (P = 0.368), while a significant difference was observed between the ProTaper Next and self-adjusting file groups (P = 0.000) as well as the Mtwo and self-adjusting file groups (P = 0.000). All nickel-titanium rotary instrument systems were associated with microcracks. However, the self-adjusting file system had significantly fewer microcracks when compared with the Mtwo and ProTaper Next.

  14. 2. One-room log cabin (right), log root cellar (center), two-room ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. One-room log cabin (right), log root cellar (center), two-room log cabin (left), and post-and-beam garage (background). View to southwest. - William & Lucina Bowe Ranch, County Road 44, 0.1 mile northeast of Big Hole River Bridge, Melrose, Silver Bow County, MT

  15. 12 CFR Appendix G to Part 360 - Deposit-Customer Join File Structure

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Deposit-Customer Join File Structure G Appendix G to Part 360 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. G Appendix G to Part 360—Deposit-Customer...

  16. 12 CFR Appendix G to Part 360 - Deposit-Customer Join File Structure

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 4 2011-01-01 2011-01-01 false Deposit-Customer Join File Structure G Appendix G to Part 360 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. G Appendix G to Part 360—Deposit-Customer...

  17. 12 CFR Appendix G to Part 360 - Deposit-Customer Join File Structure

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Deposit-Customer Join File Structure G Appendix G to Part 360 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. G Appendix G to Part 360—Deposit-Customer...

  18. 12 CFR Appendix G to Part 360 - Deposit-Customer Join File Structure

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 5 2013-01-01 2013-01-01 false Deposit-Customer Join File Structure G Appendix G to Part 360 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. G Appendix G to Part 360—Deposit-Customer...

  19. 12 CFR Appendix G to Part 360 - Deposit-Customer Join File Structure

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..._Code Relationship CodeThe code indicating how the customer is related to the account. Possible values... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Deposit-Customer Join File Structure G Appendix... GENERAL POLICY RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. G Appendix G to Part 360—Deposit-Customer...

  20. Demographic Profile of U.S. Children: National File [Machine-Readable Data File].

    ERIC Educational Resources Information Center

    Peterson, J. L.; White, R. N.

    These two computer files contain social and demographic data about U.S. children and their families taken from the March 1985 Current Population Survey of the U.S. Census. One file is for all children; the second file is for black children. The following column variables are included: (1) family structure; (2) parent educational attainment; (3)…

  1. Postfire logging in riparian areas.

    PubMed

    Reeves, Gordon H; Bisson, Peter A; Rieman, Bruce E; Benda, Lee E

    2006-08-01

    We reviewed the behavior of wildfire in riparian zones, primarily in the western United States, and the potential ecological consequences of postfire logging. Fire behavior in riparian zones is complex, but many aquatic and riparian organisms exhibit a suite of adaptations that allow relatively rapid recovery after fire. Unless constrained by other factors, fish tend to rebound relatively quickly, usually within a decade after a wildfire. Additionally, fire and subsequent erosion events contribute wood and coarse sediment that can create and maintain productive aquatic habitats over time. The potential effects of postfire logging in riparian areas depend on the landscape context and disturbance history of a site; however, available evidence suggests two key management implications: (1) fire in riparian areas creates conditions that may not require intervention to sustain the long-term productivity of the aquatic network and (2) protection of burned riparian areas gives priority to what is left rather than what is removed. Research is needed to determine how postfire logging in riparian areas has affected the spread of invasive species and the vulnerability of upland forests to insect and disease outbreaks and how postfire logging will affect the frequency and behavior of future fires. The effectiveness of using postfire logging to restore desired riparian structure and function is therefore unproven, but such projects are gaining interest with the departure of forest conditions from those that existed prior to timber harvest, fire suppression, and climate change. In the absence of reliable information about the potential consequences of postfire timber harvest, we conclude that providing postfire riparian zones with the same environmental protections they received before they burned is justified ecologically. Without a commitment to monitor management experiments, the effects of postfire riparian logging will remain unknown and highly contentious.

  2. The RIACS Intelligent Auditing and Categorizing System

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1988-01-01

    The organization of the RIACS auditing package is described, along with installation instructions and guidance on interpreting the output. Instructions for setting up both local and remote file system auditing are given. Logging is done on a time-driven basis, and auditing is performed in a passive mode.

  3. Lidar-Incorporated Traffic Sign Detection from Video Log Images of Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y.; Fan, J.; Huang, Y.; Chen, Z.

    2016-06-01

    Mobile Mapping System (MMS) simultaneously collects the Lidar points and video log images in a scenario with the laser profiler and digital camera. Besides the textural details of video log images, it also captures the 3D geometric shape of the point cloud. It is widely used in many transportation agencies to survey the street view and roadside transportation infrastructure, such as traffic signs, guardrails, etc. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data of traffic signs. Based on the well-calibrated extrinsic parameters of MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of the overhead and roadside traffic signs can be obtained according to the setup specification of traffic signs in different transportation agencies. The 3D candidate planes of traffic signs are then fitted using RANSAC plane-fitting of those points. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically with the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraint of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic sign among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show the proposed algorithm enhances the rate of detecting
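
    The geometric stage above hinges on RANSAC plane fitting of the candidate sign points. A minimal sketch of that step (generic RANSAC, not the paper's implementation; the threshold and iteration count are illustrative):

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Fit a plane to 3D points with RANSAC: repeatedly sample 3 points,
    form the plane through them, and keep the plane with the most inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal.dot(p0)       # plane equation: n . x + d = 0
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```

    In the paper's pipeline the inlier set would then be projected into the video log image to form a candidate ROI.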

  4. Quantitative evaluation of apically extruded debris with different single-file systems: Reciproc, F360 and OneShape versus Mtwo.

    PubMed

    Bürklein, S; Benten, S; Schäfer, E

    2014-05-01

    To assess in a laboratory setting the amount of apically extruded debris associated with different single-file nickel-titanium instrumentation systems compared to one multiple-file rotary system. Eighty human mandibular central incisors were randomly assigned to four groups (n = 20 teeth per group). The root canals were instrumented according to the manufacturers' instructions using the reciprocating single-file system Reciproc, the single-file rotary systems F360 and OneShape and the multiple-file rotary Mtwo instruments. The apically extruded debris was collected and dried in pre-weighed glass vials. The amount of debris was assessed with a micro balance and statistically analysed using anova and the post hoc Student-Newman-Keuls test. The time required to prepare the canals with the different instruments was also recorded. Reciproc produced significantly more debris compared to all other systems (P < 0.05). No significant difference was noted between the two single-file rotary systems and the multiple-file rotary system (P > 0.05). Instrumentation with the three single-file systems was significantly faster than with Mtwo (P < 0.05). Under the conditions of this study, all systems caused apical debris extrusion. Rotary instrumentation was associated with less debris extrusion compared to reciprocating instrumentation. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  5. SIDS-to-ADF File Mapping Manual

    NASA Technical Reports Server (NTRS)

    McCarthy, Douglas; Smith, Matthew; Poirier, Diane; Smith, Charles A. (Technical Monitor)

    2002-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The SIDS-to-ADF File Mapping Manual specifies the exact manner in which, under CGNS conventions, CFD data structures (the SIDS) are to be stored in (i.e., mapped onto) the file structure provided by the database manager (ADF). The result is a conforming CGNS database.
Adherence to the mapping conventions guarantees uniform meaning and location of CFD data within ADF files, and thereby allows the construction of
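
    The mapping idea, typed nodes carrying a label, optional data, and children, addressed by path, can be sketched with a toy tree. The `Node` class and its methods are invented for illustration; the labels follow SIDS-style naming but this is not the actual CGNS mid-level API:

```python
class Node:
    """Toy stand-in for an ADF/CGNS-style node: a name, a label giving
    the SIDS data type, optional data, and named child nodes."""
    def __init__(self, name, label, data=None):
        self.name, self.label, self.data = name, label, data
        self.children = {}

    def add(self, child):
        self.children[child.name] = child
        return child

    def resolve(self, path):
        """Walk a '/'-separated path from this node, the way a file-mapping
        convention locates a piece of CFD data inside the database."""
        node = self
        for part in filter(None, path.split("/")):
            node = node.children[part]
        return node

# Build a tiny hierarchy in the spirit of the mapping conventions.
root = Node("Base", "CGNSBase_t")
zone = root.add(Node("Zone1", "Zone_t"))
coords = zone.add(Node("GridCoordinates", "GridCoordinates_t"))
coords.add(Node("CoordinateX", "DataArray_t", data=[0.0, 1.0, 2.0]))
```

    Because every datum has a fixed label and location, any conforming reader can find it without application-specific knowledge, which is the point of the mapping manual.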

  6. Using the K-25 CTD Common File System: A guide to CFSI (CFS Interface)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1989-12-01

    A CFS (Common File System) is a large, centralized file management and storage facility based on software developed at Los Alamos National Laboratory. This manual is a guide to use of the CFS available to users of the Cray UNICOS system at Martin Marietta Energy Systems, Inc., in Oak Ridge, Tennessee.

  7. Tropical forests are thermally buffered despite intensive selective logging.

    PubMed

    Senior, Rebecca A; Hill, Jane K; Benedick, Suzan; Edwards, David P

    2018-03-01

    Tropical rainforests are subject to extensive degradation by commercial selective logging. Despite pervasive changes to forest structure, selectively logged forests represent vital refugia for global biodiversity. The ability of these forests to buffer temperature-sensitive species from climate warming will be an important determinant of their future conservation value, although this topic remains largely unexplored. Thermal buffering potential is broadly determined by: (i) the difference between the "macroclimate" (climate at a local scale, m to ha) and the "microclimate" (climate at a fine-scale, mm to m, that is distinct from the macroclimate); (ii) thermal stability of microclimates (e.g. variation in daily temperatures); and (iii) the availability of microclimates to organisms. We compared these metrics in undisturbed primary forest and intensively logged forest on Borneo, using thermal images to capture cool microclimates on the surface of the forest floor, and information from dataloggers placed inside deadwood, tree holes and leaf litter. Although major differences in forest structure remained 9-12 years after repeated selective logging, we found that logging activity had very little effect on thermal buffering, in terms of macroclimate and microclimate temperatures, and the overall availability of microclimates. For 1°C warming in the macroclimate, temperature inside deadwood, tree holes and leaf litter warmed slightly more in primary forest than in logged forest, but the effect amounted to <0.1°C difference between forest types. We therefore conclude that selectively logged forests are similar to primary forests in their potential for thermal buffering, and subsequent ability to retain temperature-sensitive species under climate change. Selectively logged forests can play a crucial role in the long-term maintenance of global biodiversity. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  8. Modeling and validating the grabbing forces of hydraulic log grapples used in forest operations

    Treesearch

    Jingxin Wang; Chris B. LeDoux; Lihai Wang

    2003-01-01

    The grabbing forces of log grapples were modeled and analyzed mathematically under operating conditions when grabbing logs from compact log piles and from bunch-like log piles. The grabbing forces are closely related to the structural parameters of the grapple, the weight of the grapple, and the weight of the log grabbed. An operational model grapple was designed and...

  9. Permanent-File-Validation Utility Computer Program

    NASA Technical Reports Server (NTRS)

    Derry, Stephen D.

    1988-01-01

    Errors in files are detected and corrected during operation. The Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with a mechanism to verify the integrity of the permanent file base. It locates and identifies permanent file errors in the Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors are written to a listing file and to the system and job day files. The program operates by reading system tables, the catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. It has been used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.
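
    The kind of cross-checking PFVAL performs, comparing what the catalog claims against what the storage tables record, can be illustrated with a toy version. The table structures below are invented stand-ins, not the actual MST/TRT/PFC formats:

```python
def validate_catalog(catalog, allocated):
    """Cross-check a toy permanent-file catalog against an allocation map.
    catalog: {filename: [sector, ...]} -- sectors each file claims.
    allocated: set of sectors the storage table marks in-use.
    Returns a list of human-readable error strings."""
    errors, claimed = [], {}
    for name, sectors in catalog.items():
        for s in sectors:
            if s not in allocated:
                errors.append(f"{name}: sector {s} not marked allocated")
            if s in claimed:
                errors.append(f"{name}: sector {s} also claimed by {claimed[s]}")
            else:
                claimed[s] = name
    # Sectors marked in-use but referenced by no catalog entry are orphans.
    for s in sorted(allocated - claimed.keys()):
        errors.append(f"sector {s} allocated but orphaned")
    return errors
```

    As in PFVAL, each discrepancy is reported rather than silently fixed, so an operator can correct the file base online.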

  10. Log-Based Recovery in Asynchronous Distributed Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kane, Kenneth Paul

    1989-01-01

    A log-based mechanism is described for restoring consistent states to replicated data objects after failures. Preserving a causal form of consistency based on the notion of virtual time is focused upon in this report. Causal consistency has been shown to apply to a variety of applications, including distributed simulation, task decomposition, and mail delivery systems. Several mechanisms have been proposed for implementing causally consistent recovery, most notably those of Strom and Yemini, and Johnson and Zwaenepoel. The mechanism proposed here differs from these in two major respects. First, a roll-forward style of recovery is implemented. A functioning process is never required to roll back its state in order to achieve consistency with a recovering process. Second, the mechanism does not require any explicit information about the causal dependencies between updates. Instead, all necessary dependency information is inferred from the orders in which updates are logged by the object servers. This basic recovery technique appears to be applicable to forms of consistency other than causal consistency. In particular, it is shown how the recovery technique can be modified to support an atomic form of consistency (grouping consistency). By combining grouping consistency with causal consistency, it may even be possible to implement serializable consistency within this mechanism.
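
    The core roll-forward idea, replaying updates in the order the server logged them with no explicit dependency metadata, can be sketched as follows (toy structures, invented for illustration; the thesis's actual protocol handles multiple servers and in-flight messages):

```python
def recover(server_log, snapshot=None):
    """Roll-forward recovery sketch: rebuild a replicated object's state by
    replaying updates in the order the server logged them. No dependency
    metadata is consulted -- the log order stands in for the causal order,
    and functioning processes never roll back."""
    state = dict(snapshot or {})
    for key, value in server_log:   # each update is a (key, value) pair
        state[key] = value          # later log entries overwrite earlier ones
    return state
```

    A recovering replica simply applies the suffix of the log it missed; the result agrees with replicas that never failed.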

  11. An implementation of the programming structural synthesis system (PROSSS)

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1981-01-01

    A particular implementation of the programming structural synthesis system (PROSSS) is described. This software system combines a state of the art optimization program, a production level structural analysis program, and user supplied, problem dependent interface programs. These programs are combined using standard command language features existing in modern computer operating systems. PROSSS is explained in general with respect to this implementation along with the steps for the preparation of the programs and input data. Each component of the system is described in detail with annotated listings for clarification. The components include options, procedures, programs and subroutines, and data files as they pertain to this implementation. An example exercising each option in this implementation to allow the user to anticipate the type of results that might be expected is presented.

  12. Development of a user-friendly system for image processing of electron microscopy by integrating a web browser and PIONE with Eos.

    PubMed

    Tsukamoto, Takafumi; Yasunaga, Takuo

    2014-11-01

    need to provide common workspace for analysis because the client is physically separated from a server. We solved the file format problem by extension of rules of OptionControlFile of Eos. Furthermore, to solve workspace problems, we have developed two type of system. The first system is to use only local environments. The user runs a web server provided by Eos, access to a web client through a web browser, and manipulate the local files with GUI on the web browser. The second system is employing PIONE (Process-rule for Input/Output Negotiation Environment), which is our developing platform that works under heterogenic distributed environment. The users can put their resources, such as microscopic images, text files and so on, into the server-side environment supported by PIONE, and so experts can write PIONE rule definition, which defines a workflow of image processing. PIONE run each image processing on suitable computers, following the defined rule. PIONE has the ability of interactive manipulation, and user is able to try a command with various setting values. In this situation, we contribute to auto-generation of GUI for a PIONE workflow.As advanced functions, we have developed a module to log user actions. The logs include information such as setting values in image processing, procedure of commands and so on. If we use the logs effectively, we can get a lot of advantages. For example, when an expert may discover some know-how of image processing, other users can also share logs including his know-hows and so we may obtain recommendation workflow of image analysis, if we analyze logs. To implement social platform of image processing for electron microscopists, we have developed system infrastructure, as well. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate the objects interactively, and render the VW at real-time accordingly. The user is totally immersed in the virtual world and feel the sense of transforming into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing the space-related VR applications since 1990. The VR systems in CAVE lab are based on VPL RB2 system which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak polhemus sensor, two Fastrak polhemus sensors, a folk of Bird sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as VR programming environment. The RB2 Swivel 3D is used as the modelling program to construct the VW's. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts the files to a maximum of 1020 objects and doesn't have the advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for user to add new sensors and C language interface. Recently, NASA/MSFC CAVE lab provides VR systems built on Sense8 WorldToolKit (WTK) which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file format, Wave Front OBJ file format, VideoScape GEO file format, Intergraph EMS stereolithographics and CATIA Stereolithographics STL file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide easy C

  14. Stochastic theory of log-periodic patterns

    NASA Astrophysics Data System (ADS)

    Canessa, Enrique

    2000-12-01

    We introduce an analytical model based on birth-death clustering processes to help in understanding the empirical log-periodic corrections to power law scaling and the finite-time singularity as reported in several domains including rupture, earthquakes, world population and financial systems. In our stochastic theory log-periodicities are a consequence of transient clusters induced by an entropy-like term that may reflect the amount of co-operative information carried by the state of a large system of different species. The clustering completion rates for the system are assumed to be given by a simple linear death process. The singularity at t0 is derived in terms of birth-death clustering coefficients.
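
    The log-periodic correction to power-law scaling referred to here is commonly written in this literature in the generic form below (a standard expression; the paper's exact formula may differ):

```latex
F(t) \simeq A + B\,(t_0 - t)^{m}\left[\,1 + C\cos\!\bigl(\omega \ln(t_0 - t) + \phi\bigr)\right],
\qquad t < t_0 ,
```

    where $t_0$ is the finite-time singularity, $m$ the power-law exponent, and $\omega$ sets the log-frequency of the oscillatory decoration.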

  15. Salvage logging, ecosystem processes, and biodiversity conservation.

    PubMed

    Lindenmayer, D B; Noss, R F

    2006-08-01

    We summarize the documented and potential impacts of salvage logging--a form of logging that removes trees and other biological material from sites after natural disturbance. Such operations may reduce or eliminate biological legacies, modify rare postdisturbance habitats, influence populations, alter community composition, impair natural vegetation recovery, facilitate the colonization of invasive species, alter soil properties and nutrient levels, increase erosion, modify hydrological regimes and aquatic ecosystems, and alter patterns of landscape heterogeneity. These impacts can be assigned to three broad and interrelated effects: (1) altered stand structural complexity; (2) altered ecosystem processes and functions; and (3) altered populations of species and community composition. Some impacts may be different from or additional to the effects of traditional logging that is not preceded by a large natural disturbance because the conditions before, during, and after salvage logging may differ from those that characterize traditional timber harvesting. The potential impacts of salvage logging often have been overlooked, partly because the processes of ecosystem recovery after natural disturbance are still poorly understood and partly because potential cumulative effects of natural and human disturbance have not been well documented. Ecologically informed policies regarding salvage logging are needed prior to major natural disturbances so that when they occur, ad hoc, crisis-mode decision making can be avoided. These policies should lead to salvage-exemption zones and limits on the amounts of disturbance-derived biological legacies (e.g., burned trees, logs) that are removed where salvage logging takes place. Finally, we believe new terminology is needed. The word salvage implies that something is being saved or recovered, whereas from an ecological perspective this is rarely the case.

  16. Nonblocking and orphan free message logging protocols

    NASA Technical Reports Server (NTRS)

    Alvisi, Lorenzo; Hoppe, Bruce; Marzullo, Keith

    1992-01-01

    Currently existing message logging protocols demonstrate a classic pessimistic vs. optimistic tradeoff. We show that the optimistic-pessimistic tradeoff is not inherent to the problem of message logging. We construct a message-logging protocol that has the positive features of both optimistic and pessimistic protocols: our protocol prevents orphans and allows simple failure recovery; however, it requires no blocking in failure-free runs. Furthermore, this protocol does not introduce any additional message overhead as compared to one implemented for a system in which messages may be lost but processes do not crash.

  17. Nonblocking and orphan free message logging protocols

    NASA Astrophysics Data System (ADS)

    Alvisi, Lorenzo; Hoppe, Bruce; Marzullo, Keith

    1992-12-01

    Currently existing message logging protocols demonstrate a classic pessimistic vs. optimistic tradeoff. We show that the optimistic-pessimistic tradeoff is not inherent to the problem of message logging. We construct a message-logging protocol that has the positive features of both optimistic and pessimistic protocols: our protocol prevents orphans and allows simple failure recovery; however, it requires no blocking in failure-free runs. Furthermore, this protocol does not introduce any additional message overhead as compared to one implemented for a system in which messages may be lost but processes do not crash.

  18. Addressing fluorogenic real-time qPCR inhibition using the novel custom Excel file system 'FocusField2-6GallupqPCRSet-upTool-001' to attain consistently high fidelity qPCR reactions

    PubMed Central

    Ackermann, Mark R.

    2006-01-01

    The purpose of this manuscript is to discuss fluorogenic real-time quantitative polymerase chain reaction (qPCR) inhibition and to introduce/define a novel Microsoft Excel-based file system which provides a way to detect and avoid inhibition, and enables investigators to consistently design dynamically-sound, truly LOG-linear qPCR reactions very quickly. The qPCR problems this invention solves are universal to all qPCR reactions, and it performs all necessary qPCR set-up calculations in about 52 seconds (using a Pentium 4 processor) for up to seven qPCR targets and seventy-two samples at a time - calculations that commonly take capable investigators days to finish. We have named this custom Excel-based file system "FocusField2-6GallupqPCRSet-upTool-001" (FF2-6-001 qPCR set-up tool), and are in the process of transforming it into professional qPCR set-up software to be made available in 2007. The current prototype is already fully functional. PMID:17033699

  19. Determining geophysical properties from well log data using artificial neural networks and fuzzy inference systems

    NASA Astrophysics Data System (ADS)

    Chang, Hsien-Cheng

    Two novel synergistic systems consisting of artificial neural networks and fuzzy inference systems are developed to determine geophysical properties by using well log data. These systems are employed to improve the determination accuracy in carbonate rocks, which are generally more complex than siliciclastic rocks. One system, consisting of a single adaptive resonance theory (ART) neural network and three fuzzy inference systems (FISs), is used to determine the permeability category. The other system, which is composed of three ART neural networks and a single FIS, is employed to determine the lithofacies. The geophysical properties studied in this research, permeability category and lithofacies, are treated as categorical data. The permeability values are transformed into a "permeability category" to account for the effects of scale differences between core analyses and well logs, and heterogeneity in the carbonate rocks. The ART neural networks dynamically cluster the input data sets into different groups. The FIS is used to incorporate geologic experts' knowledge, which is usually in linguistic forms, into systems. These synergistic systems thus provide viable alternative solutions to overcome the effects of heterogeneity, the uncertainties of carbonate rock depositional environments, and the scarcity of well log data. The results obtained in this research show promising improvements over backpropagation neural networks. For the permeability category, the prediction accuracies are 68.4% and 62.8% for the multiple-single ART neural network-FIS and a single backpropagation neural network, respectively. For lithofacies, the prediction accuracies are 87.6%, 79%, and 62.8% for the single-multiple ART neural network-FIS, a single ART neural network, and a single backpropagation neural network, respectively. The sensitivity analysis results show that the multiple-single ART neural networks-FIS and a single ART neural network possess the same matching trends in
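
    The linguistic-rule side of such a system can be illustrated with a toy fuzzy classifier. The membership functions, categories, and breakpoints below are invented for illustration and are not taken from the thesis:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), peaking at 1 when x = b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_porosity(phi):
    """Map a well-log porosity reading to a linguistic category by taking
    the label with the highest membership grade (hypothetical breakpoints)."""
    grades = {
        "low":    tri(phi, -0.10, 0.00, 0.12),
        "medium": tri(phi,  0.05, 0.15, 0.25),
        "high":   tri(phi,  0.18, 0.30, 0.50),
    }
    return max(grades, key=grades.get)
```

    A full fuzzy inference system would combine many such memberships through expert-written rules before defuzzifying, which is how linguistic geologic knowledge enters the hybrid ART-FIS systems described above.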

  20. Integration of LDSE and LTVS logs with HIPAA compliant auditing system (HCAS)

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng; Liu, Brent J.; Huang, H. K.; Guo, Bing; Documet, Jorge; King, Nelson

    2006-03-01

    The deadline of the HIPAA (Health Insurance Portability and Accountability Act) Security Rules passed in February 2005; therefore being HIPAA compliant has become extremely critical to healthcare providers. HIPAA mandates healthcare providers to protect the privacy and integrity of the health data and have the ability to demonstrate examples of mechanisms that can be used to accomplish this task. It is also required that a healthcare institution must be able to provide audit trails on image data access on demand for a specific patient. For these reasons, we have developed a HIPAA compliant auditing system (HCAS) for image data security in a PACS by auditing every image data access. The HCAS was presented at SPIE 2005. This year, two new components, LDSE (Lossless Digital Signature Embedding) and LTVS (Patient Location Tracking and Verification System) logs, have been added to the HCAS. The LDSE can assure medical image integrity in a PACS, while the LTVS can provide access control for a PACS by creating a security zone in the clinical environment. By integrating the LDSE and LTVS logs with the HCAS, the privacy and integrity of image data can be audited as well. Thus, a PACS with the HCAS installed can become HIPAA compliant in image data privacy and integrity, access control, and audit control.
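
    One generic way to make such an audit trail tamper-evident, chaining each entry to the hash of its predecessor, can be sketched as follows. This is an illustrative hash-chain sketch, not the actual HCAS or LDSE design:

```python
import hashlib
import json

def append_entry(log, event):
    """Append an image-access event to a tamper-evident audit trail by
    hashing the event together with the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})

def verify(log):
    """Recompute the chain; any modified, removed, or reordered entry
    breaks the link and the verification fails."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

    An auditor can then produce, on demand, the access history for one patient and demonstrate that no entry has been altered after the fact.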

  1. House log drying rates in southeast Alaska for covered and uncovered softwood logs

    Treesearch

    David Nicholls; Allen Brackley

    2009-01-01

    Log moisture content has an important impact on many aspects of log home construction, including log processing, transportation costs, and dimensional stability in use. Air-drying times for house logs from freshly harvested trees can depend on numerous factors including initial moisture content, log diameter, bark condition, and environmental conditions during drying....

  2. Considering User's Access Pattern in Multimedia File Systems

    NASA Astrophysics Data System (ADS)

    Cho, KyoungWoon; Ryu, YeonSeung; Won, Youjip; Koh, Kern

    2002-12-01

    Legacy buffer cache management schemes for multimedia servers are grounded in the assumption that applications access multimedia files sequentially. However, the user access pattern may not be sequential in some circumstances; for example, in a distance learning application the user may exploit the VCR-like functions of the system (rewind and play) and access particular segments of video repeatedly in the middle of sequential playback. Such a looping reference can cause significant performance degradation of interval-based caching algorithms, so an appropriate buffer cache management scheme is required to deliver desirable performance even under workloads that exhibit looping reference behavior. We propose the Adaptive Buffer cache Management (ABM) scheme, which intelligently adapts to the file access characteristics. For each opened file, ABM applies either LRU replacement or interval-based caching depending on the Looping Reference Indicator, which indicates how strong the temporally localized access pattern is. According to our experiments, ABM exhibits a better buffer cache miss ratio than interval-based caching or LRU, especially when the workload exhibits not only sequential but also looping reference properties.
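
    The policy-selection idea can be sketched in a few lines. The sketch below is illustrative Python, not the authors' implementation: the indicator here simply measures the fraction of accesses that revisit already-seen file segments, and the 0.5 threshold is an invented stand-in for the paper's Looping Reference Indicator and its tuning.

```python
class AbmPolicySelector:
    """Pick a cache policy per file from its recent access history."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold  # assumed cut-off, not from the paper

    def looping_indicator(self, offsets):
        # Fraction of accesses that revisit an already-seen segment: a
        # crude stand-in for the paper's Looping Reference Indicator.
        if not offsets:
            return 0.0
        seen, repeats = set(), 0
        for off in offsets:
            if off in seen:
                repeats += 1
            seen.add(off)
        return repeats / len(offsets)

    def choose_policy(self, offsets):
        ind = self.looping_indicator(offsets)
        return "LRU" if ind >= self.threshold else "interval-caching"

selector = AbmPolicySelector()
sequential = list(range(100))               # pure sequential playback
looping = [0, 1, 2, 3, 1, 2, 3, 1, 2, 3]    # rewind-and-replay pattern
print(selector.choose_policy(sequential))   # interval-caching
print(selector.choose_policy(looping))      # LRU
```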

  3. Privacy Act System of Records: EPA Personnel Emergency Contact Files, EPA-44

    EPA Pesticide Factsheets

    Learn about the EPA Personnel Emergency Contact Files System, including who is covered in the system, the purpose of data collection, routine uses for the system's records, and other security procedures.

  4. Development of a Methodology for Customizing Insider Threat Auditing on a Linux Operating System

    DTIC Science & Technology

    2010-03-01

    Excerpted audit configuration and test procedures: watch rules are placed on the user account information files (/etc/group, /etc/passwd, /etc/gshadow, /etc/shadow, /etc/sudoers, /etc/security/opasswd) to audit user write attempts to system files, e.g., -w /etc/group -p wxa, -w /etc/passwd -p wxa, -w /etc/gshadow -p wxa; root actions are handled by separate audit rules. Test scenarios include User A attempting to access User B's directory and files without permission, and a procedure in which User2 logs into the system.

  5. SwampLog II: A Structured Journal for Personal and Professional Inquiry within a Collaborative Environment.

    ERIC Educational Resources Information Center

    Nicassio, Frank J.

    SwampLog is a type of journal keeping that records the facts of daily activities as experienced and perceived by practitioners. The label, "SwampLog," was inspired by Donald Schon's metaphor used to distinguish the "swamplands of practice" from the "high, hard ground of research." Keeping a SwampLog consists of recording four general types of…

  6. Defects in Hardwood Veneer Logs: Their Frequency and Importance

    Treesearch

    E.S. Harrar

    1954-01-01

    Most southern hardwood veneer and plywood plants have some method of classifying logs by grade to control the purchase price paid for logs bought on the open market. Such log-grading systems have been developed by experience and are dependent to a large extent upon the ability of the grader and his knowledge of veneer grades and yields required for the specific product...

  7. DIRECT secure messaging as a common transport layer for reporting structured and unstructured lab results to outpatient providers.

    PubMed

    Sujansky, Walter; Wilson, Tom

    2015-04-01

    This report describes a grant-funded project to explore the use of DIRECT secure messaging for the electronic delivery of laboratory test results to outpatient physicians and electronic health record systems. The project seeks to leverage the inherent attributes of DIRECT secure messaging and electronic provider directories to overcome certain barriers to the delivery of lab test results in the outpatient setting. The described system enables laboratories that generate test results as HL7 messages to deliver these results as structured or unstructured documents attached to DIRECT secure messages. The system automatically analyzes generated HL7 messages and consults an electronic provider directory to determine the appropriate DIRECT address and delivery format for each indicated recipient. The system also enables lab results delivered to providers as structured attachments to be consumed by HL7 interface engines and incorporated into electronic health record systems. Lab results delivered as unstructured attachments may be printed or incorporated into patient records as PDF files. The system receives and logs acknowledgement messages to document the status of each transmitted lab result, and a graphical interface allows searching and review of this logged information. The described system is a fully implemented prototype that has been tested in a laboratory setting. Although this approach is promising, further work is required to pilot test the system in production settings with clinical laboratories and outpatient provider organizations. Copyright © 2015 Elsevier Inc. All rights reserved.
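
    The routing step described above can be sketched as follows. This is a hedged illustration, not the project's code: the ORC-12 field position follows HL7 v2 conventions, but the provider directory schema and all names are invented for the example.

```python
def ordering_provider(hl7_message):
    """Return the provider ID from ORC-12 of the first ORC segment."""
    for segment in hl7_message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "ORC" and len(fields) > 12:
            # ORC-12 components (ID^family^given) are separated by '^'
            return fields[12].split("^")[0]
    return None

def route(hl7_message, directory):
    """Map a result message to (DIRECT address, delivery format)."""
    entry = directory.get(ordering_provider(hl7_message))
    if entry is None:
        return None
    return entry["direct_address"], entry["format"]

# Hypothetical provider directory and a minimal two-segment ORU message.
directory = {"1234": {"direct_address": "dr.smith@direct.example.org",
                      "format": "structured"}}
msg = ("MSH|^~\\&|LAB||CLINIC||20150401||ORU^R01|1|P|2.5.1\r"
       "ORC|RE" + "|" * 11 + "1234^SMITH^JOHN")
print(route(msg, directory))
```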

  8. Securing the AliEn File Catalogue - Enforcing authorization with accountable file operations

    NASA Astrophysics Data System (ADS)

    Schreiner, Steffen; Bagnasco, Stefano; Sankar Banerjee, Subho; Betev, Latchezar; Carminati, Federico; Vladimirovna Datskova, Olga; Furano, Fabrizio; Grigoras, Alina; Grigoras, Costin; Mendez Lorenzo, Patricia; Peters, Andreas Joachim; Saiz, Pablo; Zhu, Jianlin

    2011-12-01

    The AliEn Grid Services, as operated by the ALICE Collaboration in its global physics analysis grid framework, is based on a central File Catalogue together with a distributed set of storage systems and the possibility to register links to external data resources. This paper describes several identified vulnerabilities in the AliEn File Catalogue access protocol regarding fraud and unauthorized file alteration and presents a more secure and revised design: a new mechanism, called LFN Booking Table, is introduced in order to keep track of access authorization in the transient state of files entering or leaving the File Catalogue. Due to a simplification of the original Access Envelope mechanism for xrootd-protocol-based storage systems, fundamental computational improvements of the mechanism were achieved as well as an up to 50% reduction of the credential's size. By extending the access protocol with signed status messages from the underlying storage system, the File Catalogue receives trusted information about a file's size and checksum and the protocol is no longer dependent on client trust. Altogether, the revised design complies with atomic and consistent transactions and allows for accountable, authentic, and traceable file operations. This paper describes these changes as part and beyond the development of AliEn version 2.19.
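
    The signed-status-message idea can be illustrated with a minimal sketch: the storage system signs its (file, size, checksum) report so the catalogue can trust it without trusting the client. HMAC over a shared secret stands in here for whatever signature scheme AliEn actually employs; the payload fields, key, and values are invented for the example.

```python
import hashlib
import hmac
import json

SECRET = b"storage-element-key"  # assumed shared secret, for illustration

def sign_status(lfn, size, checksum):
    """Storage side: sign a (file, size, checksum) status report."""
    payload = json.dumps({"lfn": lfn, "size": size, "md5": checksum},
                         sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload, sig

def verify_status(payload, sig):
    """Catalogue side: accept the report only if the signature matches."""
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

payload, sig = sign_status("/alice/data/run1.root", 1048576, "d41d8c")
print(verify_status(payload, sig))          # True
print(verify_status(payload + b"x", sig))   # False: tampered report
```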

  9. ChemEngine: harvesting 3D chemical structures of supplementary data from PDF files.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2016-01-01

    Digital access to chemical journals has resulted in a vast array of molecular information that is now available in supplementary material files in PDF format. However, extracting this molecular information from a PDF document is a daunting task. Here we present an approach to harvest 3D molecular data from the supporting information of scientific research articles that is normally available from publishers' resources. In order to demonstrate the feasibility of extracting truly computable molecules from PDF file formats in a fast and efficient manner, we have developed a Java-based application, namely ChemEngine. This program recognizes textual patterns from the supplementary data and generates standard molecular structure data (bond matrix, atomic coordinates) that can be subjected to a multitude of computational processes automatically. The methodology has been demonstrated via several case studies on different formats of coordinate data stored in supplementary information files, wherein ChemEngine selectively harvested the atomic coordinates and interpreted them as molecules with high accuracy. The reusability of the extracted molecular coordinate data was demonstrated by computing single point energies that were in close agreement with the original computed data provided with the articles. It is envisaged that the methodology will enable large scale conversion of molecular information from supplementary files available in the PDF format into a collection of ready-to-compute molecular data, creating an automated workflow for advanced computational processes. Software along with source code and instructions is available at https://sourceforge.net/projects/chemengine/files/?source=navbar.
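
    The pattern-recognition step can be approximated with a simple sketch: pull "element x y z" lines out of free text into an atom list. The regular expression below is a deliberately simplified assumption; ChemEngine's actual recognition rules are far richer.

```python
import re

# Matches lines like "O   0.000000   0.000000   0.117300"
ATOM_LINE = re.compile(
    r"^\s*([A-Z][a-z]?)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s*$")

def harvest_atoms(text):
    """Extract (symbol, x, y, z) tuples from free-form coordinate text."""
    atoms = []
    for line in text.splitlines():
        m = ATOM_LINE.match(line)
        if m:
            sym, x, y, z = m.groups()
            atoms.append((sym, float(x), float(y), float(z)))
    return atoms

sample = """Optimized geometry (Angstrom):
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200
Total energy: -76.0267 a.u."""
print(harvest_atoms(sample))
```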

  10. Text File Comparator

    NASA Technical Reports Server (NTRS)

    Kotler, R. S.

    1983-01-01

    The file comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts two text files as input and produces a listing of their differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.
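
    IFCOMP itself is IBM OS/VS software, but the idea of a listing of differences in pseudo-update form can be sketched with Python's difflib: changes are expressed as the deletions and insertions needed to turn the old file into the new one. The listing format below is an invented approximation of IFCOMP's output, not a reproduction of it.

```python
import difflib

def pseudo_update(old_lines, new_lines):
    """List the edits that turn old_lines into new_lines."""
    out = []
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("replace", "delete"):
            out.append(f"DELETE lines {i1 + 1}-{i2} of old file")
        if op in ("replace", "insert"):
            out.extend(f"INSERT after line {i1}: {line}"
                       for line in new_lines[j1:j2])
    return out

old = ["MOVE A TO B", "ADD 1 TO X", "DISPLAY X"]
new = ["MOVE A TO B", "ADD 2 TO X", "DISPLAY X"]
for change in pseudo_update(old, new):
    print(change)
```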

  11. The incidence of root microcracks caused by 3 different single-file systems versus the ProTaper system.

    PubMed

    Liu, Rui; Hou, Ben Xiang; Wesselink, Paul R; Wu, Min-Kai; Shemesh, Hagay

    2013-08-01

    The aim of this study was to compare the incidence of root cracks observed at the apical root surface and/or in the canal wall after canal instrumentation with 3 single-file systems and the ProTaper system (Dentsply Maillefer, Ballaigues, Switzerland). One hundred mandibular incisors were selected. Twenty control teeth were coronally flared with Gates-Glidden drills (Dentsply Maillefer). No further preparation was made. The other 80 teeth were mounted in resin blocks with simulated periodontal ligaments, and the apex was exposed. They were divided into 4 experimental groups (n = 20); the root canals were first coronally flared with Gates-Glidden drills and then instrumented to the full working length with the ProTaper, OneShape (Micro-Mega, Besancon, France), Reciproc (VDW, Munich, Germany), or the Self-Adjusting File (ReDent-Nova, Ra'anana, Israel). The apical root surface and horizontal sections 2, 4, and 6 mm from the apex were observed under a microscope. The presence of cracks was noted. The chi-square test was performed to compare the appearance of cracked roots between the experimental groups. No cracks were found in the control teeth and teeth instrumented with the Self-Adjusting File. Cracks were found in 10 of 20 (50%), 7 of 20 (35%), and 1 of 20 (5%) teeth after canal instrumentation with the ProTaper, OneShape, and Reciproc files, respectively. The difference between the experimental groups was statistically significant (P < .001). Nickel-titanium instruments may cause cracks on the apical root surface or in the canal wall; the Self-Adjusting File and Reciproc files caused fewer cracks than the ProTaper and OneShape files. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  12. Ubiquitous Learning Project Using Life-Logging Technology in Japan

    ERIC Educational Resources Information Center

    Ogata, Hiroaki; Hou, Bin; Li, Mengmeng; Uosaki, Noriko; Mouri, Kosuke; Liu, Songran

    2014-01-01

    A Ubiquitous Learning Log (ULL) is defined as a digital record of what a learner has learned in daily life using ubiquitous computing technologies. In this paper, a project which developed a system called SCROLL (System for Capturing and Reusing Of Learning Log) is presented. The aim of developing SCROLL is to help learners record, organize,…

  13. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.
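
    Not GFS code, but the core journaling idea can be reduced to a few lines: an update is committed to the journal before the in-place data is touched, so replaying the journal after a crash restores consistency. All names here are illustrative.

```python
class JournaledStore:
    """Toy write-ahead journal over an in-place key-value 'disk'."""

    def __init__(self):
        self.journal = []   # sequence of committed (key, value) records
        self.disk = {}      # in-place data, may lag behind the journal

    def write(self, key, value):
        self.journal.append((key, value))  # 1. commit intent durably
        self.disk[key] = value             # 2. then update in place

    def crash_recover(self):
        # Replay the journal; idempotent, so partial replay is safe.
        for key, value in self.journal:
            self.disk[key] = value

store = JournaledStore()
store.write("inode7", "blockmap-v1")
# Simulate a crash between journal commit and the in-place update:
store.journal.append(("inode7", "blockmap-v2"))
store.crash_recover()
print(store.disk["inode7"])  # blockmap-v2
```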

  14. Using electronic document management systems to manage highway project files.

    DOT National Transportation Integrated Search

    2011-12-12

    "WisDOTs Bureau of Technical Services is interested in learning about the practices of other state departments of : transportation in developing and implementing an electronic document management system to manage highway : project files"

  15. Performance of the engineering analysis and data system 2 common file system

    NASA Technical Reports Server (NTRS)

    Debrunner, Linda S.

    1993-01-01

    The Engineering Analysis and Data System (EADS) was used from April 1986 to July 1993 to support large scale scientific and engineering computation (e.g. computational fluid dynamics) at Marshall Space Flight Center. The need for an updated system resulted in an RFP in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed in February 1993, and by July 1993 most users had migrated. EADS II is a network of heterogeneous computer systems supporting scientific and engineering applications. The Common File System (CFS) is a key component of this system. The CFS provides a seamless, integrated environment to the users of EADS II, including both disk and tape storage. UniTree software is used to implement this hierarchical storage management system. The performance of the CFS suffered during the early months of the production system. Several of the performance problems were traced to software bugs, which have been corrected. Other problems were associated with hardware. However, the use of NFS in the UniTree UCFM software limits the performance of the system. The performance issues related to the CFS have led to a need to develop a greater understanding of the CFS organization. This paper will first describe EADS II with emphasis on the CFS. Then, a discussion of mass storage systems will be presented, and methods of measuring the performance of the Common File System will be outlined. Finally, areas for further study will be identified and conclusions will be drawn.

  16. Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.

    2016-12-01

    Long-running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructures, forcing more complex tool development to encompass all possible storage architectures used for the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) is an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures a seamless integration of all the different storage architectures, from standard block-based POSIX-compliant storage disks, to object-based architectures such as the S3-compliant HGST Active Archive System, to Seagate Kinetic disks utilizing the Kinetic Protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools. Most file systems are designed as a single application responsible for organizing data into a tree using metadata, determining where data is stored, and providing a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system that bridges the underlying hybrid architecture.
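
    The loose coupling described above can be sketched abstractly: the namespace (the metadata tree) is one component, and interchangeable storage backends are another, so a logical path can resolve to a POSIX file, an S3 object, or a Kinetic key without the analysis tools changing. The interfaces and names below are illustrative, not LVFS internals.

```python
class DictBackend:
    """Stand-in backend; real ones would talk to POSIX, S3, or Kinetic."""

    def __init__(self):
        self.blobs = {}

    def put(self, key, data):
        self.blobs[key] = data

    def get(self, key):
        return self.blobs[key]

class Namespace:
    """Maps logical paths to (backend, key) pairs, independent of storage."""

    def __init__(self):
        self.index = {}

    def bind(self, path, backend, key):
        self.index[path] = (backend, key)

    def read(self, path):
        backend, key = self.index[path]
        return backend.get(key)

posix_like = DictBackend()
object_like = DictBackend()
posix_like.put("/disk3/granule.hdf", b"legacy bytes")
object_like.put("bucket/granule2", b"object bytes")

ns = Namespace()
ns.bind("/laads/granule.hdf", posix_like, "/disk3/granule.hdf")
ns.bind("/laads/granule2.hdf", object_like, "bucket/granule2")
print(ns.read("/laads/granule2.hdf"))  # b'object bytes'
```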

  17. Log Analysis Using Splunk Hadoop Connect

    DTIC Science & Technology

    2017-06-01

    Excerpts from the report: running a logging service puts a performance tax on the system and may cause degradation of performance, and more thorough logging increases that cost. A single fault can affect several nodes; for example, a disk failure would affect all the tasks running on a particular node and generate an alert message not only for the disk. The analysis also recovered the commands that were executed from the "Run" command; the keylogger installation did not create any registry keys for the program itself.

  18. The design and implementation of the HY-1B Product Archive System

    NASA Astrophysics Data System (ADS)

    Liu, Shibin; Liu, Wei; Peng, Hailong

    2010-11-01

    Product Archive System (PAS), a background system, is the core part of the Product Archive and Distribution System (PADS), which is the center for data management of the Ground Application System of the HY-1B satellite hosted by the National Satellite Ocean Application Service of China. PAS integrates a series of up-to-date methods and technologies, such as a suitable data transmittal mode, flexible configuration files, and log information, giving the system several desirable characteristics, such as ease of maintenance, stability, and minimal complexity. This paper describes the seven major components of the PAS (Network Communicator module, File Collector module, File Copy module, Task Collector module, Metadata Extractor module, Product data Archive module, and Metadata catalogue import module) and some of the unique features of the system, as well as the technical problems encountered and resolved.

  19. Assessing spatial uncertainty in reservoir characterization for carbon sequestration planning using public well-log data: A case study

    USGS Publications Warehouse

    Venteris, E.R.; Carter, K.M.

    2009-01-01

    Mapping and characterization of potential geologic reservoirs are key components in planning carbon dioxide (CO2) injection projects. The geometry of target and confining layers is vital to ensure that the injected CO2 remains in a supercritical state and is confined to the target layer. Also, maps of injection volume (porosity) are necessary to estimate sequestration capacity at undrilled locations. Our study uses publicly filed geophysical logs and geostatistical modeling methods to investigate the reliability of spatial prediction for oil and gas plays in the Medina Group (sandstone and shale facies) in northwestern Pennsylvania. Specifically, the modeling focused on two targets: the Grimsby Formation and Whirlpool Sandstone. For each layer, thousands of data points were available to model structure and thickness but only hundreds were available to support volumetric modeling because of the rarity of density-porosity logs in the public records. Geostatistical analysis based on this data resulted in accurate structure models, less accurate isopach models, and inconsistent models of pore volume. Of the two layers studied, only the Whirlpool Sandstone data provided for a useful spatial model of pore volume. Where reliable models for spatial prediction are absent, the best predictor available for unsampled locations is the mean value of the data, and potential sequestration sites should be planned as close as possible to existing wells with volumetric data. ?? 2009. The American Association of Petroleum Geologists/Division of Environmental Geosciences. All rights reserved.

  20. Long-term recovery of a Mountain Stream from Clearcut Logging: The Effects of Forest Succession on Benthic Invertebrate Community Structure

    Treesearch

    Michael K. Stone; J. Bruce Wallace

    1998-01-01

    Summary1. Changes in benthic invertebrate community structure following 16 years of forest succession after logging were examined by estimating benthic invertebrate abundance, biomass and secondary production in streams draining a forested reference and a recovering clear-cut catchment. Benthic invertebrate abundance was three times higher,...

  1. A graphical automated detection system to locate hardwood log surface defects using high-resolution three-dimensional laser scan data

    Treesearch

    Liya Thomas; R. Edward Thomas

    2011-01-01

    We have developed an automated defect detection system and a state-of-the-art graphical user interface (GUI) for hardwood logs. The algorithm identifies defects at least 0.5 inch high and at least 3 inches in diameter on barked hardwood log and stem surfaces. To summarize defect features and to build a knowledge base, hundreds of defects were measured, photographed, and...

  2. A new method of evaluating tight gas sands pore structure from nuclear magnetic resonance (NMR) logs

    NASA Astrophysics Data System (ADS)

    Xiao, Liang; Mao, Zhi-qiang; Xie, Xiu-hong

    2016-04-01

    Tight gas sands typically display ultra-low porosity and permeability, high irreducible water saturation, low resistivity contrast, complicated pore structure, and strong heterogeneity; these characteristics render conventional evaluation methods invalid. Many effective gas-bearing formations are misjudged as dry zones or water-saturated layers and so cannot be identified and exploited. The best way to improve tight gas sands evaluation is to quantitatively characterize rock pore structure. Mercury injection capillary pressure (MICP) curves are advantageous in predicting formation pore structure. However, MICP measurements are limited by environmental and economic factors, so formation pore structure cannot be evaluated continuously from them alone. Nuclear magnetic resonance (NMR) logs are considered promising for evaluating rock pore structure. Generally, the best way to quantitatively and continuously evaluate the pore structure of tight gas sands is to construct pseudo capillary pressure (Pc) curves from NMR logs. In this paper, based on the analysis of lab results for 20 core samples, which were drilled from tight gas sandstone reservoirs of the Sichuan basin and subjected to both MICP and NMR measurements, piecewise power-function relationships between the NMR transverse relaxation time T2 and the pore-throat radius Rc are established. A novel method that transforms the NMR reverse cumulative curve into a pseudo capillary pressure (Pc) curve is proposed, and the corresponding model is established based on formation classification. Using this model, formation pseudo Pc curves can be synthesized continuously. The pore-throat radius distribution and pore structure evaluation parameters, such as the average pore-throat radius (Rm), the threshold pressure (Pd), and the maximum pore-throat radius (Rmax), can also be precisely extracted. After this method is extended into field applications, several tight gas
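
    The shape of such a transform can be sketched numerically: a piecewise power function converts T2 to pore-throat radius Rc, and the Washburn relation Pc = C/Rc then yields a capillary pressure. The coefficients, exponents, and break point below are invented for illustration (and not matched at the break); the paper calibrates its relationships from the 20 core samples.

```python
def t2_to_rc(t2_ms, break_ms=33.0):
    """Pore-throat radius (um) from T2 (ms), piecewise power law."""
    if t2_ms <= break_ms:
        return 0.01 * t2_ms ** 1.2   # short-T2 (small pore) branch
    return 0.05 * t2_ms ** 0.8       # long-T2 (large pore) branch

def pseudo_pc(t2_ms, c_hg=0.735):
    """Capillary pressure (MPa) via Washburn: Pc = C / Rc, C for mercury."""
    return c_hg / t2_to_rc(t2_ms)

for t2 in (1.0, 10.0, 100.0):
    print(f"T2={t2:6.1f} ms  Rc={t2_to_rc(t2):.4f} um  "
          f"Pc={pseudo_pc(t2):.2f} MPa")
```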

  3. Apically extruded debris with reciprocating single-file and full-sequence rotary instrumentation systems.

    PubMed

    Bürklein, Sebastian; Schäfer, Edgar

    2012-06-01

    The purpose of this in vitro study was to assess the amount of apically extruded debris using rotary and reciprocating nickel-titanium instrumentation systems. Eighty human mandibular central incisors were randomly assigned to 4 groups (n = 20 teeth per group). The root canals were instrumented according to the manufacturers' instructions using the 2 reciprocating single-file systems Reciproc (VDW, Munich, Germany) and WaveOne (Dentsply Maillefer, Ballaigues, Switzerland) and the 2 full-sequence rotary Mtwo (VDW, Munich, Germany) and ProTaper (Dentsply Maillefer, Ballaigues, Switzerland) instruments. Bidistilled water was used as irrigant. The apically extruded debris was collected in preweighted glass vials using the Myers and Montgomery method. After drying, the mean weight of debris was assessed with a microbalance and statistically analyzed using analysis of variance and the post hoc Student-Newman-Keuls test. The time required to prepare the canals with the different instruments was also recorded. The reciprocating files produced significantly more debris compared with both rotary systems (P < .05). Although no statistically significant difference was obtained between the 2 rotary instruments (P > .05), the reciprocating single-file system Reciproc produced significantly more debris compared with all other instruments (P < .05). Instrumentation was significantly faster using Reciproc than with all other instruments (P < .05). Under the conditions of this study, all systems caused apical debris extrusion. Full-sequence rotary instrumentation was associated with less debris extrusion compared with the use of reciprocating single-file systems. Copyright © 2012 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  4. Research and development of a digital design system for hull structures

    NASA Astrophysics Data System (ADS)

    Zhan, Yi-Ting; Ji, Zhuo-Shang; Liu, Yin-Dong

    2007-06-01

    Methods used for digital ship design were studied and formed the basis of a proposed frame model suitable for ship construction modeling. Based on 3-D modeling software, a digital design system for hull structures was developed. Basic software systems for modeling, modifying, and assembly simulation were developed. The system has good compatibility, and models created by it can be saved in different 3-D file formats, and 2D engineering drawings can be output directly. The model can be modified dynamically, overcoming the necessity of repeated modifications during hull structural design. Through operations such as model construction, intervention inspection, and collision detection, problems can be identified and modified during the hull structural design stage. Technologies for centralized control of the system, database management, and 3-D digital design are integrated into this digital model in the preliminary design stage of shipbuilding.

  5. Water Log.

    ERIC Educational Resources Information Center

    Science Activities, 1995

    1995-01-01

    Presents a Project WET water education activity. Students use a Water Log (journal or portfolio) to write or illustrate their observations, feelings, and actions related to water. The log serves as an assessment tool to monitor changes over time in knowledge of and attitudes toward the water. (LZ)

  6. Integrating PCLIPS into ULowell's Lincoln Logs: Factory of the future

    NASA Technical Reports Server (NTRS)

    Mcgee, Brenda J.; Miller, Mark D.; Krolak, Patrick; Barr, Stanley J.

    1990-01-01

    We are attempting to show how independent but cooperating expert systems, executing within a parallel production system (PCLIPS), can operate and control a completely automated, fault tolerant prototype of a factory of the future (The Lincoln Logs Factory of the Future). The factory consists of a CAD system for designing the Lincoln Log Houses, two workcells, and a materials handling system. A workcell consists of two robots, part feeders, and a frame mounted vision system.

  7. Requirements-Driven Log Analysis Extended Abstract

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2012-01-01

    Imagine that you are tasked to help a project improve their testing effort. In a realistic scenario it will quickly become clear that having an impact is difficult. First of all, it will likely be a challenge to suggest an alternative approach which is significantly more automated and/or more effective than current practice. The reality is that an average software system has a complex input/output behavior. An automated testing approach will have to auto-generate test cases, each being a pair (i, o) consisting of a test input i and an oracle o. The test input i has to be somewhat meaningful, and the oracle o can be very complicated to compute. Second, even in the case where some testing technology has been developed that might improve current practice, it is likely difficult to completely change the current behavior of the testing team unless the technique is obviously superior and does everything already done by existing technology. So is there an easier way to incorporate formal methods-based approaches than the full-fledged test revolution? Fortunately the answer is affirmative. A relatively simple approach is to benefit from possibly already existing logging infrastructure, which after all is part of most systems put in production. A log is a sequence of events, generated by special log recording statements, most often manually inserted in the code by the programmers. An event can be considered as a data record: a mapping from field names to values. We can analyze such a log using formal methods, for example checking it against a formal specification. This separates running the system from analyzing its behavior. It is not meant as an alternative to testing since it does not address the important input generation problem. However, it offers a solution which testing teams might accept since it has low impact on the existing process. A single person might be assigned to perform such log analysis, compared to the entire testing team changing its behavior.
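
    A toy version of this style of log analysis fits in a few lines: events are field-name-to-value records, and the log is checked offline against a property. The property here, every 'open' of a resource is eventually followed by a 'close', is an invented example of a formal requirement, not one from the paper.

```python
def check_open_close(log):
    """Return the set of resources opened but never closed."""
    open_resources = set()
    for event in log:
        if event["op"] == "open":
            open_resources.add(event["resource"])
        elif event["op"] == "close":
            open_resources.discard(event["resource"])
    return open_resources

log = [
    {"time": 1, "op": "open", "resource": "f1"},
    {"time": 2, "op": "open", "resource": "f2"},
    {"time": 3, "op": "close", "resource": "f1"},
]
print(check_open_close(log))  # {'f2'}: violation, f2 never closed
```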

  8. 48 CFR 1404.802 - Contract files.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...

  9. 48 CFR 1404.802 - Contract files.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...

  10. 48 CFR 1404.802 - Contract files.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...

  11. 48 CFR 1404.802 - Contract files.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...

  12. 48 CFR 1404.802 - Contract files.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...

  13. A 3D visualization system for molecular structures

    NASA Technical Reports Server (NTRS)

    Green, Terry J.

    1989-01-01

    The properties of molecules derive in part from their structures. Because of the importance of understanding molecular structures, various methodologies, ranging from first principles to empirical techniques, were developed for computing the structure of molecules. For large molecules such as polymer model compounds, the structural information is difficult to comprehend by examining tabulated data. Therefore, a molecular graphics display system, called MOLDS, was developed to help interpret the data. MOLDS is a menu-driven program developed to run on the LADC SNS computer systems. This program can read a data file generated by the modeling programs, or data can be entered using the keyboard. MOLDS has the following capabilities: draws the 3-D representation of a molecule using a stick, ball-and-stick, or space-filled model from Cartesian coordinates; draws different perspective views of the molecule; rotates the molecule about the X, Y, or Z axis or about some arbitrary line in space; zooms in on a small area of the molecule in order to obtain a better view of a specific region; and makes hard-copy representations of molecules on a graphics printer. In addition, MOLDS can be easily updated and readily adapted to run on most computer systems.

  14. Utilization and cost of log production from animal logging operations

    Treesearch

    Suraj P. Shrestha; Bobby L. Lanford; Robert B. Rummer; Mark Dubois

    2006-01-01

    Forest harvesting with animals is a labor-intensive operation. It is expensive to use machines on smaller woodlots, which require frequent moves if mechanically logged. So, small logging systems using animals may be more cost-effective. In this study, work sampling was used for five animal logging operations in Alabama to measure productive and non-productive time...

  15. 75 FR 48629 - Electronic Tariff Filing System (ETFS)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ...In this document, the Federal Communications Commission (Commission) seeks comment on extending the electronic tariff filing requirement for incumbent local exchange carriers to all carriers that file tariffs and related documents. Additionally, the Commission seeks comment on the appropriate time frame for implementing this proposed requirement. The Commission also seeks comment on the proposal that the Chief of the Wireline Competition Bureau administer the adoption of this extended electronic filing requirement. Also, the Commission seeks comment on proposed rule changes to implement mandatory electronic tariff filing.

  16. 76 FR 9780 - Notification of Deletion of System of Records; EPA Parking Control Office File (EPA-10) and EPA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-22

    ... System of Records; EPA Parking Control Office File (EPA-10) and EPA Transit and Guaranteed Ride Home Program Files (EPA-35) AGENCY: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY: The Environmental Protection Agency (EPA) is deleting the systems of records for EPA Parking Control Office File...

  17. 78 FR 63159 - Amendment to Certification of Nebraska's Central Filing System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-23

    ... system for Nebraska to permit the conversion of all debtor social security and taxpayer identification... automatically convert social security numbers and taxpayer identification numbers into ten number unique... certified central filing systems is available through the Internet on the GIPSA Web site ( http://www.gipsa...

  18. 48 CFR 204.802 - Contract files.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...

  19. 48 CFR 204.802 - Contract files.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...

  20. 48 CFR 204.802 - Contract files.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...