Sample records for CMS Tier-1 computing

  1. CMS results in the Combined Computing Readiness Challenge CCRC'08

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Bauerdick, L.; CMS Collaboration

    2009-12-01

    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests, the Computing, Software and Analysis challenge (CSA'08), as well as CMS cosmic runs, ran at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data-taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction, the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at sufficient rate, regionally as well as inter-regionally, achieving all goals on about 90% of more than 200 links. Simultaneously, CMS ran a large Tier-2 analysis exercise in which realistic analysis jobs were submitted to a large set of Tier-2 sites by more than 400 analysis users in May, producing a chaotic workload across the systems. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks of up to 200k jobs/day. The results achieved in CCRC'08, focusing on the distributed workflows, are presented and discussed.
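
    A quick back-of-the-envelope check puts the quoted transfer rates in perspective. The sketch below is illustrative arithmetic only; the input numbers come from the abstract above, not from any CMS tool.

    ```python
    # Back-of-the-envelope check of the CCRC'08 transfer figures quoted above.
    DAILY_AVG_MBS = 600          # sustained CERN -> Tier-1 export, MB/s
    PEAK_GBS = 1.7               # hourly peak, GB/s
    DAYS_SUSTAINED = 7

    SECONDS_PER_DAY = 24 * 3600

    daily_volume_tb = DAILY_AVG_MBS * SECONDS_PER_DAY / 1e6   # MB -> TB
    week_volume_pb = daily_volume_tb * DAYS_SUSTAINED / 1e3   # TB -> PB

    print(f"~{daily_volume_tb:.1f} TB/day at the daily average")      # ~51.8 TB/day
    print(f"~{week_volume_pb:.2f} PB over {DAYS_SUSTAINED} days")     # ~0.36 PB
    print(f"peak is {PEAK_GBS * 1e3 / DAILY_AVG_MBS:.1f}x the daily average")
    ```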

  2. Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS

    NASA Astrophysics Data System (ADS)

    Letts, J.; Magini, N.

    2011-12-01

    Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage-out of job output led to the introduction of a local fallback stage-out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load on the Tier-1 sites in the original CMS Computing Model, and already represents an important component of CMS PhEDEx data transfer volume. The experience, challenges and methods used to debug and commission the thousands of data transfer links between CMS Tier-2 sites world-wide are explained and summarized. The resulting operational experience with Tier-2 to Tier-2 transfers is also presented.

  3. From the CMS Computing Experience in the WLCG STEP'09 Challenge to the First Data Taking of the LHC Era

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Gutsche, O.

    The Worldwide LHC Computing Grid (WLCG) project decided in March 2009 to perform scale tests of parts of its overall Grid infrastructure before the start of LHC data taking. The "Scale Test for the Experiment Program" (STEP'09) was performed mainly in June 2009, with more selected tests in September-October 2009, and emphasized the simultaneous testing of the computing systems of all 4 LHC experiments. CMS tested its Tier-0 tape writing and processing capabilities. The Tier-1 tape systems were stress-tested using the complete range of Tier-1 workflows: transfer from Tier-0 and custody of data on tape, processing and subsequent archival, redistribution of datasets amongst all Tier-1 sites, as well as burst transfers of datasets to Tier-2 sites. The Tier-2 analysis capacity was tested using bulk analysis job submissions to backfill normal user activity. In this talk, we report on the different tests performed and present their post-mortem analysis.

  4. CMS tier structure and operation of the experiment-specific tasks in Germany

    NASA Astrophysics Data System (ADS)

    Nowack, A.

    2008-07-01

    In Germany, several university institutes and research centres take part in the CMS experiment. Concerning data analysis, a number of computing centres at different Tier levels, ranging from Tier 1 to Tier 3, exist at these places. The German Tier 1 centre GridKa at the research centre in Karlsruhe serves all four LHC experiments as well as four non-LHC experiments. With respect to the CMS experiment, GridKa is mainly involved in central tasks. The Tier 2 centre in Germany consists of two sites, one at the research centre DESY in Hamburg and one at RWTH Aachen University, forming a federated Tier 2 centre. The two parts cover different aspects of a Tier 2 centre. The German Tier 3 centres are located at the research centre DESY in Hamburg, at RWTH Aachen University, and at the University of Karlsruhe. Furthermore, a German user analysis facility is planned. Since the CMS community in Germany is rather small, good cooperation between the different sites is essential. This cooperation covers physics topics as well as technical and operational issues. All available communication channels, such as email, phone, monthly video conferences, and regular personal meetings, are used. For example, the distribution of data sets is coordinated globally within Germany. The CMS-specific services, such as the data transfer tool PhEDEx and the Monte Carlo production, are also operated by people from different sites in order to spread the knowledge widely and increase redundancy in terms of operators.

  5. The diverse use of clouds by CMS

    DOE PAGES

    Andronis, Anastasios; Bauer, Daniela; Chaze, Olivier; ...

    2015-12-23

    The resources CMS is using are increasingly being offered as clouds. In Run 2 of the LHC the majority of CMS CERN resources, both in Meyrin and at the Wigner Computing Centre, will be presented as cloud resources on which CMS will have to build its own infrastructure. This infrastructure will need to run all of the CMS workflows including: Tier 0, production and user analysis. In addition, the CMS High Level Trigger will provide a compute resource comparable in scale to the total offered by the CMS Tier 1 sites when it is not running as part of the trigger system. During these periods a cloud infrastructure will be overlaid on this resource, making it accessible for general CMS use. CMS is also starting to utilise cloud resources offered by individual institutes and is gaining experience to facilitate the use of opportunistically available cloud resources. Lastly, we present a snapshot of this infrastructure and its operation at the time of the CHEP2015 conference.

  6. The CMS Tier0 goes cloud and grid for LHC Run 2

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. Furthermore, this contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  7. The CMS Tier0 goes Cloud and Grid for LHC Run 2

    NASA Astrophysics Data System (ADS)

    Hufnagel, Dirk

    2015-12-01

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. This contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.
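
    The multi-threaded framework mentioned above is enabled through the job configuration. Below is a minimal sketch of the kind of CMSSW configuration fragment involved, assuming a CMSSW environment; the thread and stream counts are illustrative, not the actual Tier-0 settings.

    ```python
    # Minimal sketch of enabling multi-threaded processing in a CMSSW job
    # configuration; counts are illustrative only.
    import FWCore.ParameterSet.Config as cms

    process = cms.Process("RECO")

    process.options = cms.untracked.PSet(
        numberOfThreads = cms.untracked.uint32(8),  # worker threads per job
        numberOfStreams = cms.untracked.uint32(0),  # 0 = one stream per thread
    )
    ```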

  8. Exploiting analytics techniques in CMS computing monitoring

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Kuznetsov, V.; Magini, N.; Repečka, A.; Vaandering, E.

    2017-10-01

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining of all this information has rarely been attempted, but is of crucial importance for a better understanding of how CMS operated successfully, and for reaching an adequate and adaptive modelling of CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.
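
    A MapReduce application of the kind described (e.g. counting disk replicas per dataset) can be sketched as a Hadoop Streaming job. The tab-separated "dataset, site" input format below is a hypothetical stand-in for the real monitoring records.

    ```python
    #!/usr/bin/env python
    # Sketch of a Hadoop Streaming job counting disk replicas per dataset.
    # Invoke as: hadoop ... -mapper "replicas.py map" -reducer "replicas.py reduce"
    import sys

    def mapper():
        for line in sys.stdin:
            dataset = line.rstrip("\n").split("\t")[0]
            print(f"{dataset}\t1")                   # one count per replica row

    def reducer():
        current, count = None, 0
        for line in sys.stdin:                       # input arrives sorted by key
            dataset, value = line.rstrip("\n").split("\t")
            if dataset != current:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = dataset, 0
            count += int(value)
        if current is not None:
            print(f"{current}\t{count}")

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()
    ```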

  9. Scaling up a CMS tier-3 site with campus resources and a 100 Gb/s network connection: what could go wrong?

    NASA Astrophysics Data System (ADS)

    Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Tovar, Benjamin; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2017-10-01

    The University of Notre Dame (ND) CMS group operates a modest-sized Tier-3 site suitable for local, final-stage analysis of CMS data. However, through the ND Center for Research Computing (CRC), Notre Dame researchers have opportunistic access to roughly 25k CPU cores of computing and a 100 Gb/s WAN network link. To understand the limits of what might be possible in this scenario, we undertook to use these resources for a wide range of CMS computing tasks from user analysis through large-scale Monte Carlo production (including both detector simulation and data reconstruction.) We will discuss the challenges inherent in effectively utilizing CRC resources for these tasks and the solutions deployed to overcome them.

  10. Exploiting Analytics Techniques in CMS Computing Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, D.; Kuznetsov, V.; Magini, N.

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining of all this information has rarely been attempted, but is of crucial importance for a better understanding of how CMS operated successfully, and for reaching an adequate and adaptive modelling of CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.

  11. Pooling the resources of the CMS Tier-1 sites

    DOE PAGES

    Apyan, A.; Badillo, J.; Diaz Cruz, J.; ...

    2015-12-23

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  12. Pooling the resources of the CMS Tier-1 sites

    NASA Astrophysics Data System (ADS)

    Apyan, A.; Badillo, J.; Diaz Cruz, J.; Gadrat, S.; Gutsche, O.; Holzman, B.; Lahiff, A.; Magini, N.; Mason, D.; Perez, A.; Stober, F.; Taneja, S.; Taze, M.; Wissing, C.

    2015-12-01

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Finally, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.
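
    The flexibility described here can be pictured with a toy model: once input data can move freely between Tier-1 disk endpoints, a workflow can be assigned to whichever site has free CPU instead of being pinned to the site archiving its input. This is a sketch only, not CMS's actual workflow scheduler; site names and numbers are invented.

    ```python
    # Toy illustration: greedy assignment of workflows to the pooled Tier-1s.
    free_cores = {"T1_DE_KIT": 1200, "T1_US_FNAL": 4500, "T1_IT_CNAF": 300}

    def assign(workflows):
        """Send each workflow to the Tier-1 with the most free cores."""
        plan = {}
        for wf, cores_needed in workflows:
            site = max(free_cores, key=free_cores.get)   # data can follow the CPU
            plan[wf] = site
            free_cores[site] -= cores_needed
        return plan

    print(assign([("reprocessing-A", 2000), ("skim-B", 800), ("mc-C", 1500)]))
    ```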

  13. CMS Readiness for Multi-Core Workload Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  14. CMS readiness for multi-core workload scheduling

    NASA Astrophysics Data System (ADS)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.

    2017-10-01

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
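
    From the job's side, the partitionable-pilot scheme boils down to declaring a core count and letting the slot be carved up dynamically. A minimal sketch, assuming the htcondor Python bindings (recent API) are installed and a schedd is reachable; the wrapper script and resource figures are invented.

    ```python
    # Sketch of submitting a multi-core job with the HTCondor Python bindings;
    # inside a partitionable slot, request_cpus drives the dynamic carve-out.
    import htcondor

    sub = htcondor.Submit({
        "executable": "run_cmssw.sh",   # hypothetical wrapper script
        "request_cpus": "8",            # multi-core payload
        "request_memory": "16000",      # MB; ~2 GB/core keeps the footprint low
        "output": "job.out",
        "error": "job.err",
        "log": "job.log",
    })

    schedd = htcondor.Schedd()
    schedd.submit(sub)                  # queue one job in the local schedd
    ```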

  15. Stability and Scalability of the CMS Global Pool: Pushing HTCondor and GlideinWMS to New Limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcas, J.; Bockelman, B.; Hufnagel, D.

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.

  16. Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Aftab Khan, F.; Larson, K.; Letts, J.; Marra da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.

  17. The Status of the CMS Experiment

    NASA Astrophysics Data System (ADS)

    Green, Dan

    The CMS experiment was completely assembled in the fall of 2008 after a decade of design, construction and installation. During the last two years, cosmic ray data were taken on a regular basis. These data have enabled CMS to align the detector components, both spatially and temporally. Initial use of muons has also established the relative alignment of the CMS tracking and muon systems. In addition, the CMS calorimetry has been crosschecked with test beam data, thus providing an initial energy calibration of CMS calorimetry to about 5%. The CMS magnet has been powered and field mapped. The trigger and data acquisition systems have been installed and run at full speed. The tiered data analysis system has been exercised at full design bandwidth for Tier0, Tier1 and Tier2 sites. Monte Carlo simulation of the CMS detector has been constructed at a detailed geometric level and has been tuned to test beam and other production data to provide a realistic model of the CMS detector prior to first collisions.

  18. Pooling the resources of the CMS Tier-1 sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apyan, A.; Badillo, J.; Diaz Cruz, J.

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  19. Understanding the T2 traffic in CMS during Run-1

    NASA Astrophysics Data System (ADS)

    Wildish, T.

    2015-12-01

    In the run-up to Run 1, CMS was operating its facilities according to the MONARC model, where data transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era where the network was expected to be a major source of errors. By the end of Run 1, wide-area networks were more capable and stable than originally anticipated. The original data-placement model was relaxed, and traffic was allowed between Tier-2 nodes. Tier-2 to Tier-2 traffic in 2012 already exceeded the amount of Tier-2 to Tier-1 traffic, so it clearly has the potential to become important in the future. Moreover, while Tier-2 to Tier-1 traffic is mostly upload of Monte Carlo data, the Tier-2 to Tier-2 traffic represents data moved in direct response to requests from the physics analysis community. As such, problems or delays there are more likely to have a direct impact on the user community. Tier-2 to Tier-2 traffic may also traverse parts of the WAN that are at the 'edge' of our network, with limited network capacity or reliability compared to, say, the Tier-0 to Tier-1 traffic, which goes over the LHCOPN network. CMS is looking to exploit technologies that allow us to interact with the network fabric so that it can manage our traffic better for us; we hope to achieve this before the end of Run 2. Tier-2 to Tier-2 traffic would be the most interesting use case for such traffic management, precisely because it is close to the users' analysis and far from the 'core' network infrastructure. As such, a better understanding of our Tier-2 to Tier-2 traffic is important. Knowing the characteristics of our data flows can help us place our data more intelligently. Knowing how widely the data moves can help us anticipate the requirements for network capacity, and inform the dynamic data placement algorithms we expect to have in place for Run 2. This paper presents an analysis of the CMS Tier-2 traffic during Run 1.

  20. First Experiences with CMS Data Storage on the GEMSS System at the INFN-CNAF Tier-1

    NASA Astrophysics Data System (ADS)

    Andreotti, D.; Bonacorsi, D.; Cavalli, A.; Dal Pra, S.; Dell'Agnello, L.; Forti, Alberto; Grandi, C.; Gregori, D.; Li Gioi, L.; Martelli, B.; Prosperini, A.; Ricci, P. P.; Ronchieri, Elisabetta; Sapunenko, V.; Sartirana, A.; Vagnoni, V.; Zappi, Riccardo

    A brand new Mass Storage System solution called "Grid-Enabled Mass Storage System" (GEMSS), based on the Storage Resource Manager (StoRM) developed by INFN, on the General Parallel File System by IBM and on the Tivoli Storage Manager by IBM, has been tested and deployed at the INFN-CNAF Tier-1 Computing Centre in Italy. After a successful stress test phase, the solution is now being used in production for the data custodiality of the CMS experiment at CNAF. All data previously recorded on the CASTOR system have been transferred to GEMSS. As a final validation of the GEMSS system, some of the computing tests done in the context of the WLCG "Scale Test for the Experiment Program" (STEP'09) challenge were repeated in September-October 2009 and compared with the results previously obtained with CASTOR in June 2009. In this paper, the GEMSS system basics, the stress test activity and the deployment phase, as well as the reliability and performance of the system, are reviewed. The experiences in the use of GEMSS at CNAF in preparing for the first months of data taking of the CMS experiment at the Large Hadron Collider are also presented.

  1. Storage element performance optimization for CMS analysis jobs

    NASA Astrophysics Data System (ADS)

    Behrmann, G.; Dahlblom, J.; Guldmyr, J.; Happonen, K.; Lindén, T.

    2012-12-01

    Tier-2 computing sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data from the Large Hadron Collider (LHC) experiments that needs to be processed requires good and efficient use of the available resources. Achieving good CPU efficiency for end users' analysis jobs requires that the performance of the storage system is able to scale with the I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on the work on improving the SE performance at the Helsinki Institute of Physics (HIP) Tier-2 used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows for easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework CMS used the JobRobot, which sent 100 analysis jobs to each site every four hours; CMS also uses the HammerCloud tool for site monitoring and stress testing, which has since replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the effect of site configuration changes, since the analysis workflow is kept the same for all sites and for months at a time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50% to more than 90%, by tuning the SE and through improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done at other CMS Tier sites, and on average the CPU efficiency for CMSSW jobs has increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. The next storage upgrade at HIP consists of SAS disk enclosures which can be stress tested on demand with HammerCloud workflows, to make sure that the I/O performance is good.
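
    The CPU-efficiency figure tracked here is simply the ratio of CPU time to wall-clock time, aggregated over jobs. A minimal sketch with invented job records:

    ```python
    # Sketch of the CPU-efficiency metric monitored via JobRobot/HammerCloud jobs.
    jobs = [
        {"cpu_s": 3400, "wall_s": 3600},   # well-fed job, ~94% efficient
        {"cpu_s": 1100, "wall_s": 3600},   # I/O-starved job, ~31% efficient
    ]

    def cpu_efficiency(job):
        return job["cpu_s"] / job["wall_s"]

    site_eff = sum(j["cpu_s"] for j in jobs) / sum(j["wall_s"] for j in jobs)
    print([f"{cpu_efficiency(j):.0%}" for j in jobs])
    print(f"site average: {site_eff:.0%}")   # the quantity SE tuning aims to raise
    ```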

  2. Grid site availability evaluation and monitoring at CMS

    DOE PAGES

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; ...

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  3. Grid site availability evaluation and monitoring at CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  4. Grid site availability evaluation and monitoring at CMS

    NASA Astrophysics Data System (ADS)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.
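
    The core of such an evaluation can be pictured as a simple fraction: passed test slots over counted test slots, with scheduled downtimes excluded. A toy computation with invented data, not the actual CMS metric definition:

    ```python
    # Toy site-availability computation: fraction of evaluation intervals in
    # which the site passed its functional tests, ignoring scheduled downtimes.
    slots = [  # (test_passed, in_scheduled_downtime) per evaluation interval
        (True, False), (True, False), (False, True),
        (False, False), (True, False), (True, False),
    ]

    counted = [ok for ok, downtime in slots if not downtime]
    availability = sum(counted) / len(counted)
    print(f"availability: {availability:.0%}")   # 4/5 = 80% in this example
    ```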

  5. Challenging data and workload management in CMS Computing with network-aware systems

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Wildish, T.

    2014-06-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering an operational review phase in order to concretely assess areas of possible improvement and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations, as from the MONARC model, in terms of performance, stability and reliability. The low-latency transfer of Petabytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of Intelligent Network Services, including bandwidth-on-demand concepts. In this paper, we review the work done in CMS on this, and the next steps.

  6. Evolution of CMS workload management towards multicore job support

    NASA Astrophysics Data System (ADS)

    Pérez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.; Letts, J.; Majewski, K.; Rodrigues, A. M.; McCrea, A.; Vaandering, E.

    2015-12-01

    The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of LHC Run 2. High-pileup complex-collision events represent a challenge for traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single- and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible in 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and to ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.

  7. Evolution of CMS Workload Management Towards Multicore Job Support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.

    The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of LHC Run 2. High-pileup complex-collision events represent a challenge for traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single- and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible in 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and to ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.

  8. The Legnaro-Padova distributed Tier-2: challenges and results

    NASA Astrophysics Data System (ADS)

    Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola

    2014-06-01

    The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread over two different sites about 15 km apart, the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (soon to be upgraded to 20 Gbps). Nevertheless these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites in terms of reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. These computing nodes are managed by the LSF local resource management system, and are accessible using a Grid-based interface implemented through multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the coming months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general purpose traffic. In this paper we discuss the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, and the implementation choices. We also present the tools used for the daily management operations. These include DOCET, a Java-based web tool designed, implemented and maintained at the Legnaro-Padova Tier-2, and deployed also at other sites, such as the Italian LHC T1. DOCET provides a uniform interface to manage all the information about the physical resources of a computing centre. It is also used as a documentation repository available to the Tier-2 operations team. Finally, we discuss the foreseen developments of the existing infrastructure, in particular the evolution from a Grid-based resource towards a Cloud-based computing facility.

  9. CMS Connect

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores and another 50K to 100K CPU cores from opportunistic resources for these kinds of tasks, and even though production and event processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting the final-stage, Condor-like analysis jobs familiar to Tier-3 or local computing facility users into these distributed resources in a way that is friendly and integrated with other CMS services. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS Physics community, focusing on this kind of Condor analysis job. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideinWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform in order to integrate the service with CMS-specific needs, including site-specific submission, accounting of jobs and automated reporting to standard CMS monitoring resources in an effortless way for its users.

  10. Monitoring data transfer latency in CMS computing operations

    DOE PAGES

    Bonacorsi, Daniele; Diotalevi, Tommaso; Magini, Nicolo; ...

    2015-12-23

    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different factors that contribute to transfer completion time. As case studies, we analyze several typical CMS transfer workflows, such as distribution of collision event data from CERN or upload of simulated event data from the Tier-2 centres to the archival Tier-1 centres. For each workflow, we present the typical patterns of transfer latencies that have been identified with the latency monitor. We identify the areas in PhEDEx where a development effort can reduce the latency, and we show how we are able to detect stuck transfers which need operator intervention. Lastly, we propose a set of metrics to alert about stuck subscriptions and prompt for manual intervention, with the aim of improving transfer completion times.
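
    Stuck-transfer detection in this spirit can be sketched as a progress-staleness check: a file is flagged when its transferred volume has not advanced for longer than a threshold. The record layout below is a hypothetical stand-in for the PhEDEx latency data, not its actual schema.

    ```python
    # Sketch of stuck-transfer detection: flag incomplete files whose transfer
    # has made no progress for longer than a threshold.
    import time

    STUCK_AFTER_S = 6 * 3600   # no progress for 6 hours => operator attention

    def find_stuck(files, now=None):
        now = now or time.time()
        return [f["name"] for f in files
                if f["done_bytes"] < f["size_bytes"]
                and now - f["last_progress_ts"] > STUCK_AFTER_S]

    files = [
        {"name": "a.root", "size_bytes": 4e9, "done_bytes": 4e9,
         "last_progress_ts": time.time() - 9e4},          # finished, ignored
        {"name": "b.root", "size_bytes": 4e9, "done_bytes": 1e9,
         "last_progress_ts": time.time() - 8 * 3600},     # stuck for 8h, flagged
    ]
    print(find_stuck(files))   # -> ['b.root']
    ```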

  11. Monitoring data transfer latency in CMS computing operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, Daniele; Diotalevi, Tommaso; Magini, Nicolo

    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different factors that contribute to transfer completion time. As case studies, we analyze several typical CMS transfer workflows, such as distribution of collision event data from CERN or upload of simulated event data from the Tier-2 centres to the archival Tier-1 centres. For each workflow, we present the typical patterns of transfer latencies that have been identified with the latency monitor. We identify the areas in PhEDEx where a development effort can reduce the latency, and we show how we are able to detect stuck transfers which need operator intervention. Lastly, we propose a set of metrics to alert about stuck subscriptions and prompt for manual intervention, with the aim of improving transfer completion times.

  12. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    NASA Astrophysics Data System (ADS)

    Cinquilli, M.; Spiga, D.; Grandi, C.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Riahi, H.; Vaandering, E.

    2012-12-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results, as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long-term maintainability, as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we highlight the impact of the new design on daily operations.

  13. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    NASA Astrophysics Data System (ADS)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run 1 and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run 2 is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run 1, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS, along the lines followed by other LHC experiments, is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the "Cloud Bursting" of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage. The amount of allocated resources can thus be elastically modelled to cope with the needs of the CMS experiment and local users. Moreover, a direct access/integration of OpenStack resources into the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.
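
    The burst step can be sketched with the openstacksdk client: boot instances from a pre-built worker-node image, passing a boot script that registers the node with the site batch system (LSF in the paper). Cloud name, image, flavor, and the registration hook are all placeholders, not the configuration actually used.

    ```python
    # Sketch of cloud bursting: boot extra worker nodes on OpenStack, each
    # self-registering with the batch system via a cloud-init boot script.
    import openstack

    REGISTER_SCRIPT = """#!/bin/bash
    # hypothetical hook: join the LSF cluster as a dynamic worker node
    /opt/site/bin/register_lsf_worker.sh
    """

    conn = openstack.connect(cloud="external-partner")   # entry in clouds.yaml

    for i in range(10):                                  # burst 10 nodes
        conn.create_server(
            name=f"cms-wn-{i:03d}",
            image="cms-worker-node",                     # pre-built WN image
            flavor="m1.xlarge",
            userdata=REGISTER_SCRIPT,                    # runs via cloud-init
            wait=False,
        )
    ```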

  14. Comprehensive monitoring for heterogeneous geographically distributed storage

    DOE PAGES

    Ratnikova, Natalia; Karavakis, E.; Lammel, S.; ...

    2015-12-23

    Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 Petabytes in 2014, and will be substantially increased during Run 2 data taking. The allocation of storage for the individual users' analysis data, which is not accounted as a centrally managed storage space, will be increased to up to 40%. For comprehensive tracking and monitoring of the storage utilization across all participating sites, CMS developed a space monitoring system, which provides a central view of the geographically dispersed heterogeneous storage systems. The first prototype was deployed at pilot sites in summer 2014, and has been substantially reworked since then. In this study, we discuss the functionality and our experience of system deployment and operation on the full CMS scale.

  15. Opportunistic Resource Usage in CMS

    NASA Astrophysics Data System (ADS)

    Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D.; Gutsche, O.; Tadel, M.; Sfiligoi, I.; Letts, J.; Wuerthwein, F.; McCrea, A.; Bockelman, B.; Fajardo, E.; Linares, L.; Wagner, R.; Konstantinov, P.; Blumenfeld, B.; Bradley, D.; Cms Collaboration

    2014-06-01

    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and prepare them especially for CMS to run the experiment's applications. But there are more resources available opportunistically, both on the GRID and in local university and research clusters, which can be used for CMS applications. We will present CMS' strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the GRID or through EC2-compliant cloud interfaces. Even resources that can be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideinWMS submission infrastructure, which is the basis of CMS' opportunistic resource usage strategy. Technologies like Parrot to mount the software distribution via CVMFS and xrootd for access to data and simulation samples via the WAN are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.

  16. VIPRAM_L1CMS: a 2-Tier 3D Architecture for Pattern Recognition for Track Finding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoff, J. R.; Joshi, S.; Liu, ...

    In HEP tracking trigger applications, flagging an individual detector hit is not important. Rather, the path of a charged particle through many detector layers is what must be found. Moreover, given the increased luminosity projected for future LHC experiments, this type of track finding will be required within the Level 1 Trigger system. This means that future LHC experiments require not just a chip capable of high-speed track finding but also one with a high-speed readout architecture. VIPRAM_L1CMS is a 2-Tier Vertically Integrated chip designed to fulfill these requirements. It is a complete pipelined Pattern Recognition Associative Memory (PRAM) architecture including pattern recognition, result sparsification, and readout for Level 1 trigger applications in CMS, with 15-bit wide detector addresses and eight detector layers included in the track finding. Pattern recognition is based on classic Content Addressable Memories with a Current Race Scheme to reduce timing complexity and a 4-bit Selective Precharge to minimize power consumption. VIPRAM_L1CMS uses a pipelined set of priority-encoded binary readout structures to sparsify and read out active road flags at frequencies of at least 100 MHz. VIPRAM_L1CMS is designed to work directly with the Pulsar2b Architecture.
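
    To make the pattern-recognition step concrete, here is a toy software model of an associative-memory road search, using the widths quoted in the abstract (15-bit addresses, eight layers) but invented patterns; the real chip evaluates all stored roads in parallel and then sparsifies the fired-road flags for readout.

      # Toy model: a road "fires" when every layer shows its stored address.
      N_LAYERS = 8

      patterns = [  # each road: one 15-bit coarse hit address per layer
          (12, 40, 77, 103, 150, 201, 260, 333),
          (13, 41, 78, 104, 151, 202, 261, 334),
      ]

      def fired_roads(event_hits):
          """event_hits: one set of hit addresses per detector layer."""
          return [road_id for road_id, road in enumerate(patterns)
                  if all(road[l] in event_hits[l] for l in range(N_LAYERS))]

      hits = [{12, 99}, {40}, {77}, {103}, {150}, {7, 201}, {260}, {333}]
      print(fired_roads(hits))  # -> [0]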

  17. User and group storage management at the CMS CERN T2 centre

    NASA Astrophysics Data System (ADS)

    Cerminara, G.; Franzoni, G.; Pfeiffer, A.

    2015-12-01

    A wide range of detector commissioning, calibration and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionalities of the EOS disk-only storage technology, the optimal exploitation of the CMS user/group resources has required the introduction of policies for data access management, data protection, cleanup campaigns based on access patterns, and long-term tape archival. Resource management has been organised around the definition of working groups and the delegation of each group's composition to an identified responsible person. In this paper we illustrate the user/group storage management, and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.

  18. Opportunistic Resource Usage in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D.

    2014-01-01

    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and prepare them especially for CMS to run the experiment's applications. But there are more resources available opportunistically, both on the GRID and in local university and research clusters, which can be used for CMS applications. We will present CMS' strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the GRID or through EC2-compliant cloud interfaces. Even resources that can be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideIn WMS submission infrastructure, which is the basis of CMS' opportunistic resource usage strategy. Technologies like Parrot, to mount the software distribution via CVMFS, and xrootd, for access to data and simulation samples via the WAN, are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.

  19. An Xrootd Italian Federation

    NASA Astrophysics Data System (ADS)

    Boccali, T.; Donvito, G.; Diacono, D.; Marzulli, G.; Pompili, A.; Della Ricca, G.; Mazzoni, E.; Argiro, S.; Gregori, D.; Grandi, C.; Bonacorsi, D.; Lista, L.; Fabozzi, F.; Barone, L. M.; Santocchia, A.; Riahi, H.; Tricomi, A.; Sgaravatto, M.; Maron, G.

    2014-06-01

    The Italian community in CMS has built a geographically distributed network in which all the data stored in the Italian region are available to all the users for their everyday work. This activity involves at different levels all the CMS centers: the Tier1 at CNAF, all four Tier2s (Bari, Rome, Legnaro and Pisa), and a few Tier3s (Trieste, Perugia, Torino, Catania, Napoli, ...). The federation uses the new network connections provided by GARR, our NREN (National Research and Education Network), which provides a minimum of 10 Gbit/s to all the sites via the GARR-X[2] project. The federation is currently based on Xrootd[1] technology, and on a Redirector aimed at seamlessly connecting all the sites, giving the logical view of a single entity. A special configuration has been put in place for the Tier1, CNAF, where ad-hoc Xrootd changes have been implemented in order to protect the tape system from excessive stress, by not allowing WAN connections to access tape-only files, on a file-by-file basis. In order to improve the overall performance while reading files, both in terms of bandwidth and latency, a hierarchy of xrootd redirectors has been implemented. The solution implemented provides a dedicated Redirector where all the INFN sites are registered, without considering their status (T1, T2, or T3 sites). An interesting use case we were able to cover via the federation is disk-less Tier3s. The caching solution makes it possible to operate a local storage with minimal human intervention: transfers are automatically done on a single-file basis, and the cache is kept operational by automatic removal of old files.

  20. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    NASA Astrophysics Data System (ADS)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    In this work the testing activities that were carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center are shown. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues, or on other resources in general, are then described. A particular SLURM feature we also verified is event triggers, useful for configuring specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post-execution scripts, and controlled handling of the failure of such scripts. This feature is heavily used, for example, at the INFN-Tier1 in order to check the health status of a worker node before the execution of each job. Pre- and post-execution scripts are also important to let WNoDeS, the IaaS Cloud solution developed at INFN, use SLURM as its resource manager. WNoDeS has already been supporting the LSF and Torque batch systems for some time; in this work we show the work done so that WNoDeS supports SLURM as well. Finally, we show several performance tests that we carried out to verify SLURM scalability and reliability, detailing scalability tests both in terms of managed nodes and of queued jobs.
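
    The scheduling features listed above map onto a handful of SLURM configuration options; the fragment below uses real slurm.conf parameter names, but the values and hook paths are illustrative only, not the INFN production settings.

      # Illustrative slurm.conf fragment covering the tested features.
      fragment = """
      # multifactor priority: fair share, job age and job size all contribute
      PriorityType=priority/multifactor
      PriorityDecayHalfLife=7-0
      PriorityWeightFairshare=100000
      PriorityWeightAge=1000
      PriorityWeightJobSize=1000
      # enforce per-user/group limits and Quality of Service
      AccountingStorageEnforce=associations,limits,qos
      # consumable resources: schedule CPUs and memory, not only whole nodes
      SelectType=select/cons_res
      SelectTypeParameters=CR_CPU_Memory
      # pre-/post-execution hooks, e.g. worker-node health checks
      Prolog=/usr/local/sbin/wn_health_check.sh
      Epilog=/usr/local/sbin/wn_cleanup.sh
      """
      print(fragment)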

  1. Multi-core processing and scheduling performance in CMS

    NASA Astrophysics Data System (ADS)

    Hernández, J. M.; Evans, D.; Foulkes, S.

    2012-12-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in a much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model in computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present the evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.
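
    A back-of-the-envelope illustration of the memory saving claimed above: N independent single-core jobs each carry a private copy of the shared read-only state (libraries, geometry, conditions), whereas one N-core job loads it once. The sizes below are invented for illustration.

      shared_mb = 1200   # code libraries + geometry + conditions (read-only)
      private_mb = 800   # per-core event-processing state
      cores = 8

      single_core_total = cores * (shared_mb + private_mb)  # 8 separate jobs
      multi_core_total = shared_mb + cores * private_mb     # one 8-core job
      print(single_core_total, "MB vs", multi_core_total, "MB")  # 16000 vs 7600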

  2. 26 CFR 31.3221-2 - Rates and computation of employer tax.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...-2 Rates and computation of employer tax. (a) Rates—(1)(i) Tier 1 tax. The Tier 1 employer tax rate... disability insurance, and section 3111(b), relating to hospital insurance. The Tier 1 employer tax rate is... Federal Insurance Contributions Act. (ii) Example. The rule in paragraph (a)(1)(i) of this section is...

  3. 26 CFR 31.3201-2 - Rates and computation of employee tax.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...-2 Rates and computation of employee tax. (a) Rates—(1)(i) Tier 1 tax. The Tier 1 employee tax rate... disability insurance, and section 3101(b), relating to hospital insurance. The Tier 1 employee tax rate is... Federal Insurance Contributions Act. (ii) Example. The rule in paragraph (a)(1)(i) of this section is...

  4. The CMS dataset bookkeeping service

    NASA Astrophysics Data System (ADS)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
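
    A hypothetical sketch of a client talking to a DBS-like multi-tier web service over HTTPS with grid-certificate authentication, as described above; the endpoint URL, query parameters and response shape are invented, not the actual DBS API.

      # Query a DBS-like catalog service with a grid certificate (sketch).
      import requests

      resp = requests.get(
          "https://cmsdbs.example.org/dbs/datasets",     # hypothetical endpoint
          params={"pattern": "/Cosmics/*/RECO"},
          cert=("/home/user/.globus/usercert.pem",
                "/home/user/.globus/userkey.pem"),
          verify="/etc/grid-security/certificates",      # Grid CA bundle
      )
      resp.raise_for_status()
      for dataset in resp.json():
          print(dataset["name"])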

  5. Multi-core processing and scheduling performance in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, J. M.; Evans, D.; Foulkes, S.

    2012-01-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in a much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model in computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present the evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.

  6. How the center for Medicare and Medicaid innovation should test accountable care organizations.

    PubMed

    Shortell, Stephen M; Casalino, Lawrence P; Fisher, Elliott S

    2010-07-01

    The Patient Protection and Affordable Care Act establishes a national voluntary program for accountable care organizations (ACOs) by January 2012 under the auspices of the Centers for Medicare and Medicaid Services (CMS). The act also creates a Center for Medicare and Medicaid Innovation in the CMS. We propose that the CMS allow flexibility and tiers in ACOs based on their specific circumstances, such as the degree to which they are or are not fully integrated systems. Further, we propose that the CMS assume responsibility for ACO provisions and develop an ordered system for learning how to create and sustain ACOs. Key steps would include setting specific performance goals, developing skills and tools that facilitate change, establishing measurement and accountability mechanisms, and supporting leadership development.

  7. Building a Prototype of LHC Analysis Oriented Computing Centers

    NASA Astrophysics Data System (ADS)

    Bagliesi, G.; Boccali, T.; Della Ricca, G.; Donvito, G.; Paganoni, M.

    2012-12-01

    A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype Analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well established concept, with years of running experience, sites specialized towards end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user not expert in computing. On the storage side, we are experimenting with storage techniques allowing for remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for exhaustive monitoring of their processes at the site, and for an efficient support system in case of problems. We will report on the results of the tests executed on the different subsystems and give a description of the layout of the infrastructure in place at the sites participating in the consortium.

  8. The CMS dataset bookkeeping service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afaq, Anzar; Dolgert, Andrew; ...

    2007-10-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  9. Named Data Networking in Climate Research and HEP Applications

    NASA Astrophysics Data System (ADS)

    Shannigrahi, Susmit; Papadopoulos, Christos; Yeh, Edmund; Newman, Harvey; Barczyk, Artur Jerzy; Liu, Ran; Sim, Alex; Mughal, Azher; Monga, Inder; Vlimant, Jean-Roch; Wu, John

    2015-12-01

    The Computing Models of the LHC experiments continue to evolve from the simple hierarchical MONARC[2] model towards more agile models where data is exchanged among many Tier2 and Tier3 sites, relying on both large scale file transfers with strategic data placement, and an increased use of remote access to object collections with caching through CMS's AAA, ATLAS' FAX and ALICE's AliEn projects, for example. The challenges presented by expanding needs for CPU, storage and network capacity as well as rapid handling of large datasets of file and object collections have pointed the way towards future more agile pervasive models that make best use of highly distributed heterogeneous resources. In this paper, we explore the use of Named Data Networking (NDN), a new Internet architecture focusing on content rather than the location of the data collections. As NDN has shown considerable promise in another data intensive field, Climate Science, we discuss the similarities and differences between the Climate and HEP use cases, along with specific issues HEP faces and will face during LHC Run2 and beyond, which NDN could address.

  10. 26 CFR 31.3211-2 - Rates and computation of employee representative tax.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) Rates—(1)(i) Tier 1 tax. The Tier 1 employee representative tax rate equals the sum of the tax rates in... employer tax for hospital insurance. The Tier 1 employee representative tax rate is applied to compensation... Insurance Contributions Act. (ii) Example. The rule in paragraph (a)(1)(i) of this section is illustrated by...

  11. Distributed storage and cloud computing: a test case

    NASA Astrophysics Data System (ADS)

    Piano, S.; Della Ricca, G.

    2014-06-01

    Since 2003 the computing farm hosted by the INFN Tier3 facility in Trieste has supported the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that normally the requirements of the different computational communities are not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism. In fact, the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions using two approaches: sharing the data among all the participants, taking full advantage of the GARR-X wide area network (10 Gb/s), and integrating the resources dedicated to batch analysis with those reserved for dynamic interactive analysis, through modern solutions such as cloud computing.

  12. Cost-Benefit Analysis for ECIA Chapter 1 and State DPPF Programs Comparing Groups Receiving Regular Program Instruction and Groups Receiving Computer Assisted Instruction/Computer Management System (CAI/CMS). 1986-87.

    ERIC Educational Resources Information Center

    Chamberlain, Ed

    A cost benefit study was conducted to determine the effectiveness of a computer assisted instruction/computer management system (CAI/CMS) as an alternative to conventional methods of teaching reading within Chapter 1 and DPPF funded programs of the Columbus (Ohio) Public Schools. The Chapter 1 funded Compensatory Language Experiences and Reading…

  13. LHCNet: Wide Area Networking and Collaborative Systems for HEP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, H. B.

    2007-08-20

    This proposal presents the status and progress in 2006-7, and the technical and financial plans for 2008-2010, for the US LHCNet transatlantic network supporting U.S. participation in the LHC physics program. US LHCNet provides transatlantic connections of the Tier1 computing facilities at Fermilab and Brookhaven with the Tier0 and Tier1 facilities at CERN, as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2, the GEANT pan-European network, and NSF’s UltraLight project, US LHCNet also supports connections between the Tier2 centers (where most of the analysis of the data will take place, starting this year) and the Tier1s as needed.

  14. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.

    2015-12-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collision data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, and transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, and the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for how to tune the initial distribution of data in anticipation of how it will be used in Run-2 and beyond.
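
    A minimal sketch of the kind of popularity aggregation discussed above: a stream of access records is folded into per-dataset counters and ranked, so heavily used datasets can get extra replicas and cold ones can be cleaned up. The record format and names are invented.

      # Rank datasets by number of accesses (sketch).
      from collections import Counter

      accesses = [  # (dataset, user, site)
          ("/DoubleMu/Run2012A/AOD", "alice", "T2_IT_Pisa"),
          ("/DoubleMu/Run2012A/AOD", "bob", "T2_US_UCSD"),
          ("/MinBias/Summer12/AODSIM", "carol", "T1_DE_KIT"),
      ]

      popularity = Counter(dataset for dataset, _, _ in accesses)
      for dataset, naccess in popularity.most_common():
          print(f"{naccess:6d}  {dataset}")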

  15. NASA's Carbon Monitoring System (CMS) Applications and Application Readiness Levels (ARLs)-An assessment of how all CMS ARLs provide societal benefit.

    NASA Astrophysics Data System (ADS)

    Escobar, V. M.; Sepulveda Carlo, E.; Delgado Arias, S.

    2016-12-01

    During the past six years, the NASA Carbon Monitoring System (CMS) Applications effort has been engaging with stakeholders in an effort to make the 52 CMS projects user friendly and policy relevant. Congressionally directed, the CMS initiative is a NASA endeavor providing carbon data products that help characterize and understand carbon sources and sinks at local and international scales. All data are freely available, and scaled for local, state, regional, national and international-level resource management. To facilitate user feedback during development, as well as understanding of the type of use and application the CMS data products can provide, the Applications project utilizes the NASA Applied Sciences Program nine-step Application Readiness Level (ARL) indices. These are used to track and manage the progression and distribution of funded projects. ARLs are an adaptation of NASA's technology readiness levels (TRLs), used for managing technology and risk, and reflect the three main tiers of a project: research, development and deployment. The ARLs are scaled from 1 to 9, from research and development (ARL 1) to operational and/or decision-making-ready products (ARL 9). The ARLs can be broken up into three phases: Phase 1, discovery and feasibility (ARL 1-3); Phase 2, development, testing and validation (ARL 4-6); and Phase 3, integration into partners' systems (ARL 7-9). The ARLs are designed to inform both scientist and end user of the product maturity and application capability. The CMS initiative has products that range across all ARLs, providing societal benefit at multiple scales. Lower ARLs contribute to formal documents such as the IPCC reports, while others at higher levels provide decision support quantifying the value of carbon data for greenhouse gas (GHG) reduction planning. Most CMS products have an ARL of 5 (validation of a product in a relevant environment), meaning the CMS carbon science is actively in a state of science-user engagement. For the user community, ARLs are a litmus test for knowing the type of user feedback and advocacy that can be implemented into the product design. For the scientist, ARLs help communicate (1) the maturity of their science to users who would like to use it for decision making and (2) the intended use of the product.

  16. Status and Trends in Networking at LHC Tier1 Facilities

    NASA Astrophysics Data System (ADS)

    Bobyshev, A.; DeMar, P.; Grigaliunas, V.; Bigrow, J.; Hoeft, B.; Reymund, A.

    2012-12-01

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, and extrapolate where we anticipate networking technology is heading. Our analysis includes examination of the following areas: • Evolution of Tier1 centers to their current state • Evolving data center networking models and how they apply to Tier1 centers • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers • Trends in WAN data movement and emergence of software-defined WAN network capabilities • Network virtualization

  17. A distributed Tier-1

    NASA Astrophysics Data System (ADS)

    Fischer, L.; Grønager, M.; Kleist, J.; Smirnova, O.

    2008-07-01

    The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: firstly, it is not located at one or a few premises, but instead is distributed throughout the Nordic countries; secondly, it is not under the governance of a single organization but instead is a meta-center built of resources under the control of a number of different national organizations. We present some technical implications of these aspects as well as the high-level design of this distributed Tier-1. The focus will be on computing services, storage and monitoring.

  18. Status and Trends in Networking at LHC Tier1 Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobyshev, A.; DeMar, P.; Grigaliunas, V.

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, and extrapolate where we anticipate networking technology is heading. Our analysis includes examination of the following areas: • Evolution of Tier1 centers to their current state • Evolving data center networking models and how they apply to Tier1 centers • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers • Trends in WAN data movement and emergence of software-defined WAN network capabilities • Network virtualization

  19. Research Activities at Fermilab for Big Data Movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhashilkar, Parag; Wu, Wenji; Kim, Hyun W.

    2013-01-01

    Adaptation of 100GE Networking Infrastructure is the next step towards management of Big Data. Being the US Tier-1 Center for the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment and the central data center for several other large-scale research collaborations, Fermilab has to constantly deal with the scaling and wide-area distribution challenges of big data. In this paper, we will describe some of the challenges involved in the movement of big data over 100GE infrastructure and the research activities at Fermilab to address these challenges.

  20. 20 CFR 226.33 - Spouse regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Spouse regular annuity rate. 226.33 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.33 Spouse regular annuity rate. The final tier I and tier II rates, from §§ 226.30 and 226.32, are...

  1. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    NASA Astrophysics Data System (ADS)

    Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia

    2014-06-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.

  2. 20 CFR 225.21 - Survivor Tier I PIA.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Survivor Annuities and the Amount of the Residual Lump-Sum Payable § 225.21 Survivor Tier I PIA. The Survivor Tier I PIA is used in computing the tier I... Security Act using the deceased employee's combined railroad and social security earnings after 1950 (or...

  3. 26 CFR 1.902-2 - Treatment of deficits in post-1986 undistributed earnings and pre-1987 accumulated profits of a...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... earnings and pre-1987 accumulated profits of a first- or lower-tier corporation for purposes of computing... earnings and pre-1987 accumulated profits of a first- or lower-tier corporation for purposes of computing... would be a dividend if there were current or accumulated earnings and profits, then the post-1986...

  4. Cedar Middle School's Response to Intervention Journey: A Systematic, Multi-Tier, Problem-Solving Approach to Program Implementation

    ERIC Educational Resources Information Center

    Dulaney, Shannon Kay

    2010-01-01

    The purpose of the present study was to record Cedar Middle School's (CMS) response to intervention implementation journey. It is a qualitative case study that examines one school's efforts to bring school improvements under the response to inventory (RtI) umbrella in order to achieve a more systematic approach to providing high-quality…

  5. Patch-Clamp Recording from Human Induced Pluripotent Stem Cell-Derived Cardiomyocytes: Improving Action Potential Characteristics through Dynamic Clamp

    PubMed Central

    Veerman, Christiaan C.; Zegers, Jan G.; Mengarelli, Isabella; Bezzina, Connie R.

    2017-01-01

    Human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) hold great promise for studying inherited cardiac arrhythmias and developing drug therapies to treat such arrhythmias. Unfortunately, until now, action potential (AP) measurements in hiPSC-CMs have been hampered by the virtual absence of the inward rectifier potassium current (IK1) in hiPSC-CMs, resulting in spontaneous activity and altered function of various depolarising and repolarising membrane currents. We assessed whether AP measurements in “ventricular-like” and “atrial-like” hiPSC-CMs could be improved through a simple, highly reproducible dynamic clamp approach to provide these cells with a substantial IK1 (computed in real time according to the actual membrane potential and injected through the patch-clamp pipette). APs were measured at 1 Hz using perforated patch-clamp methodology, both in control cells and in cells treated with all-trans retinoic acid (RA) during the differentiation process to increase the number of cells with atrial-like APs. RA-treated hiPSC-CMs displayed shorter APs than control hiPSC-CMs and this phenotype became more prominent upon addition of synthetic IK1 through dynamic clamp. Furthermore, the variability of several AP parameters decreased upon IK1 injection. Computer simulations with models of ventricular-like and atrial-like hiPSC-CMs demonstrated the importance of selecting an appropriate synthetic IK1. In conclusion, the dynamic clamp-based approach of IK1 injection has broad applicability for detailed AP measurements in hiPSC-CMs. PMID:28867785
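
    Conceptually, the dynamic clamp loop computes a synthetic IK1 from the measured membrane potential at every cycle and injects the corresponding current through the pipette. The sketch below uses a generic saturating inward-rectifier shape with made-up parameter values; it is not the formulation or the parameters used in the paper.

      import math

      g_K1 = 0.5    # nS/pF, synthetic IK1 conductance (placeholder)
      E_K = -85.0   # mV, potassium reversal potential

      def synthetic_IK1(vm_mV):
          """Generic inward rectifier: large below E_K, small when depolarized."""
          rectification = 1.0 / (1.0 + math.exp((vm_mV - E_K - 15.0) / 12.0))
          return g_K1 * rectification * (vm_mV - E_K)   # pA/pF

      # real-time loop (pseudo-hardware): read Vm each cycle and inject the
      # computed current, with the sign convention of the acquisition system:
      # for vm in amplifier.stream_vm():
      #     amplifier.inject_current(synthetic_IK1(vm) * cell_capacitance)
      print(synthetic_IK1(-90.0), synthetic_IK1(0.0))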

  6. 20 CFR 228.10 - Computation of the tier I annuity component for a widow(er), disabled widow(er), remarried widow...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computation of the tier I annuity component for a widow(er), disabled widow(er), remarried widow(er), and a surviving divorced spouse. 228.10... component for a widow(er), disabled widow(er), remarried widow(er), and a surviving divorced spouse. The...

  7. 20 CFR 228.10 - Computation of the tier I annuity component for a widow(er), disabled widow(er), remarried widow...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Computation of the tier I annuity component for a widow(er), disabled widow(er), remarried widow(er), and a surviving divorced spouse. 228.10... component for a widow(er), disabled widow(er), remarried widow(er), and a surviving divorced spouse. The...

  8. 20 CFR 228.10 - Computation of the tier I annuity component for a widow(er), disabled widow(er), remarried widow...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Computation of the tier I annuity component for a widow(er), disabled widow(er), remarried widow(er), and a surviving divorced spouse. 228.10... component for a widow(er), disabled widow(er), remarried widow(er), and a surviving divorced spouse. The...

  9. 20 CFR 228.10 - Computation of the tier I annuity component for a widow(er), disabled widow(er), remarried widow...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Computation of the tier I annuity component for a widow(er), disabled widow(er), remarried widow(er), and a surviving divorced spouse. 228.10... component for a widow(er), disabled widow(er), remarried widow(er), and a surviving divorced spouse. The...

  10. 20 CFR 228.10 - Computation of the tier I annuity component for a widow(er), disabled widow(er), remarried widow...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Computation of the tier I annuity component for a widow(er), disabled widow(er), remarried widow(er), and a surviving divorced spouse. 228.10... component for a widow(er), disabled widow(er), remarried widow(er), and a surviving divorced spouse. The...

  11. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dirk

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  12. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dirk

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  13. Enabling opportunistic resources for CMS Computing Operations

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  14. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi

    The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short and long term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.

  15. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

    DOE PAGES

    Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi

    2018-03-19

    The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short and long term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.

  16. US LHCNet: Transatlantic Networking for the LHC and the U.S. HEP Community

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Harvey B; Barczyk, Artur J

    2013-04-05

    US LHCNet provides the transatlantic connectivity between the Tier1 computing facilities at the Fermilab and Brookhaven National Labs and the Tier0 and Tier1 facilities at CERN, as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2, and other R&E Networks participating in the LHCONE initiative, US LHCNet also supports transatlantic connections between the Tier2 centers (where most of the data analysis is taking place) and the Tier1s as needed. Given the key roles of the US and European Tier1 centers as well as Tier2 centers on both continents, the largest data flows are across the Atlantic, where US LHCNet has the major role. US LHCNet manages and operates the transatlantic network infrastructure, including four Points of Presence (PoPs) and currently six transatlantic OC-192 (10Gbps) leased links. Operating at the optical layer, the network provides a highly resilient fabric for data movement, with a target service availability level in excess of 99.95%. This level of resilience and seamless operation is achieved through careful design, including path diversity on both submarine and terrestrial segments, use of carrier-grade equipment with built-in high-availability and redundancy features, deployment of robust failover mechanisms based on SONET protection schemes, as well as the design of facility-diverse paths between the LHC computing sites. The US LHCNet network provides services at Layer 1 (optical), Layer 2 (Ethernet) and Layer 3 (IPv4 and IPv6). The flexible design of the network, including modular equipment, a talented and agile team, and flexible circuit lease management, allows US LHCNet to react quickly to changing requirements from the LHC community. Network capacity is provisioned just-in-time to meet the needs, as demonstrated in past years during the changing LHC start-up plans.

  17. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is a part of WLCG and will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. The delegation of even a fraction of these supercomputing resources to LHC Computing will notably increase the total capacity. In 2014 the development of a portal combining a Tier-1 and a supercomputer at the Kurchatov Institute was started, to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences like biology, with genome sequencing analysis, and astrophysics, with cosmic ray analysis, antimatter and dark matter searches, etc.

  18. A four-tier classification system of pulmonary artery metrics on computed tomography for the diagnosis and prognosis of pulmonary hypertension.

    PubMed

    Truong, Quynh A; Bhatia, Harpreet Singh; Szymonifka, Jackie; Zhou, Qing; Lavender, Zachary; Waxman, Aaron B; Semigran, Marc J; Malhotra, Rajeev

    We aimed to develop a severity classification system of the main pulmonary artery diameter (mPA) and its ratio to the ascending aorta diameter (ratio PA) for the diagnosis and prognosis of pulmonary hypertension (PH) on computed tomography (CT) scans. In 228 patients (136 with PH) undergoing right heart catheterization (RHC) and CT for dyspnea, we measured mPA and ratio PA. In a derivation cohort (n = 114), we determined cutpoints for a four-tier severity grading system that would maximize sensitivity and specificity, and validated it in a separate cohort (n = 114). Cutpoints for mPA were defined with ≤27 mm(F) and ≤29 mm(M) as the normal reference range; mild as >27 to <31 mm(F) and >29 to <31 mm(M); moderate ≥31-34 mm; and severe >34 mm. Cutpoints for ratio PA were defined as normal ≤0.9; mild >0.9 to 1.0; moderate >1.0 to 1.1; and severe >1.1. Sensitivities for the normal tier were 99% for mPA and 93% for ratio PA, while specificities for the severe tier were 98% for mPA >34 mm and 100% for ratio PA >1.1. C-statistics for four-tier mPA and ratio PA were both 0.90 (derivation) and both 0.85 (validation). Severity of mPA and ratio PA corresponded to hemodynamics by RHC and echocardiography (both p < 0.001). Moderate-severe mPA values of ≥31 mm and ratio PA >1.1 had worse survival than normal values (all p ≤ 0.01). A CT-based four-tier severity classification system of PA diameter and its ratio to the aortic diameter has high accuracy for PH diagnosis, with increased mortality in patients with moderate-severe severity grades. These results may support clinical utilization on chest and cardiac CT reports.
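
    The four-tier cutpoints quoted above transcribe directly into a pair of helper functions; the thresholds are exactly as stated in the abstract, and the code itself is only an illustration.

      def grade_mpa(mpa_mm, sex):
          """Grade main pulmonary artery diameter (mm); sex is 'M' or 'F'."""
          normal_max = 29 if sex == "M" else 27
          if mpa_mm <= normal_max:
              return "normal"
          if mpa_mm < 31:
              return "mild"
          if mpa_mm <= 34:
              return "moderate"
          return "severe"

      def grade_ratio_pa(ratio):
          """Grade the ratio of PA diameter to ascending aorta diameter."""
          if ratio <= 0.9:
              return "normal"
          if ratio <= 1.0:
              return "mild"
          if ratio <= 1.1:
              return "moderate"
          return "severe"

      print(grade_mpa(32, "F"), grade_ratio_pa(1.05))  # moderate moderate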

  19. Optimising LAN access to grid enabled storage elements

    NASA Astrophysics Data System (ADS)

    Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.

    2008-07-01

    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.

  20. BNL ATLAS Grid Computing

    ScienceCinema

    Michael Ernst

    2017-12-09

    As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

  1. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    NASA Astrophysics Data System (ADS)

    Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.

    2011-12-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
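
    A sketch of the "bursting" usage highlighted above: start a batch of short-lived EC2 worker nodes when a usage spike arrives. The boto3 calls are the standard AWS API, but the AMI (a pre-built worker-node image) and the instance parameters are placeholders.

      # Launch burst worker nodes on EC2 (sketch).
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")
      resp = ec2.run_instances(
          ImageId="ami-0123456789abcdef0",   # hypothetical worker-node AMI
          InstanceType="c5.2xlarge",
          MinCount=1,
          MaxCount=50,                       # burst capacity, sized to the spike
      )
      for inst in resp["Instances"]:
          print(inst["InstanceId"], inst["State"]["Name"])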

  2. Site in a box: Improving the Tier 3 experience

    NASA Astrophysics Data System (ADS)

    Dost, J. M.; Fajardo, E. M.; Jones, T. R.; Martin, T.; Tadel, A.; Tadel, M.; Würthwein, F.

    2017-10-01

    The Pacific Research Platform is an initiative to interconnect Science DMZs between campuses across the West Coast of the United States over a 100 Gbps network. The LHC @ UC is a proof of concept pilot project that focuses on interconnecting 6 University of California campuses. It is spearheaded by computing specialists from the UCSD Tier 2 Center in collaboration with the San Diego Supercomputer Center. A machine has been shipped to each campus, extending the concept of the Data Transfer Node to a cluster in a box that is fully integrated into the local compute, storage, and networking infrastructure. The node contains a full HTCondor batch system, and also an XRootD proxy cache. User jobs routed to the DTN can run on 40 additional slots provided by the machine, and can also flock to a common GlideinWMS pilot pool, which sends jobs out to any of the participating UCs, as well as to Comet, the new supercomputer at SDSC. In addition, a common XRootD federation has been created to interconnect the UCs and give the ability to arbitrarily export data from the home university, to make it available wherever the jobs run. The UC-level federation also statically redirects to either the ATLAS FAX or CMS AAA federation, respectively, to make globally published datasets available, depending on end user VO membership credentials. XRootD read operations from the federation transfer through the nearest DTN proxy cache located at the site where the jobs run. This reduces wide area network overhead for subsequent accesses, and improves overall read performance. Details on the technical implementation, challenges faced and overcome in setting up the infrastructure, and an analysis of usage patterns and system scalability will be presented.

  3. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making an AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocation of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, to integrate them into the dynamic Torque cluster, and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
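
    The abstract describes custom scripts that boot VMs and attach them to a dynamic Torque cluster. A hedged sketch of that pattern is shown below using the openstacksdk client and Torque's qmgr as stand-ins; the cloud entry, image, flavor, and network names are placeholders, and the original work used Puppet for in-VM configuration rather than anything shown here.

    ```python
    # Sketch of dynamically adding an OpenStack VM to a Torque cluster.
    # Cloud/image/flavor/network names are hypothetical placeholders.
    import subprocess
    import openstack  # openstacksdk

    conn = openstack.connect(cloud="research-cloud")  # assumed clouds.yaml entry

    def launch_worker(name: str) -> str:
        """Boot a Scientific Linux worker VM and register it with Torque."""
        server = conn.compute.create_server(
            name=name,
            image_id=conn.compute.find_image("sl6-worker").id,    # assumed image
            flavor_id=conn.compute.find_flavor("m1.large").id,    # assumed flavor
            networks=[{"uuid": conn.network.find_network("cloud-net").id}],
        )
        server = conn.compute.wait_for_server(server)
        # Register the new node with the Torque server (run on the head node).
        subprocess.run(["qmgr", "-c", f"create node {name}"], check=True)
        return server.access_ipv4

    print(launch_worker("vm-worker-001"))
    ```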

  4. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
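
    The load generators mentioned above can be reduced, in spirit, to a rate-controlled submission loop. The sketch below is a minimal illustration; `submit_job` is a stand-in for the experiment's real submission client, not the actual PADA tooling.

    ```python
    # Minimal sketch of a load generator that submits jobs at a fixed rate
    # to stress a workload-management system; `submit_job` is a placeholder.
    import subprocess
    import time

    def submit_job(i: int) -> None:
        # Placeholder: a real test would invoke the Grid submission tool here.
        subprocess.run(["echo", f"submitting job {i}"], check=True)

    def generate_load(rate_hz: float, total_jobs: int) -> None:
        """Submit `total_jobs` jobs at approximately `rate_hz` jobs per second."""
        interval = 1.0 / rate_hz
        for i in range(total_jobs):
            t0 = time.monotonic()
            submit_job(i)
            # Sleep off whatever is left of this submission slot.
            time.sleep(max(0.0, interval - (time.monotonic() - t0)))

    generate_load(rate_hz=2.0, total_jobs=10)
    ```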

  5. 20 CFR 228.19 - Reduction for a social security benefit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for a social security benefit. 228... RETIREMENT ACT COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.19 Reduction for a social security benefit. The tier I annuity component is reduced for the amount of any social security benefit to...

  6. 20 CFR 228.19 - Reduction for a social security benefit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for a social security benefit. 228... RETIREMENT ACT COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.19 Reduction for a social security benefit. The tier I annuity component is reduced for the amount of any social security benefit to...

  7. 20 CFR 228.19 - Reduction for a social security benefit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for a social security benefit. 228... RETIREMENT ACT COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.19 Reduction for a social security benefit. The tier I annuity component is reduced for the amount of any social security benefit to...

  8. 20 CFR 228.19 - Reduction for a social security benefit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for a social security benefit. 228... RETIREMENT ACT COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.19 Reduction for a social security benefit. The tier I annuity component is reduced for the amount of any social security benefit to...

  9. 20 CFR 228.19 - Reduction for a social security benefit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for a social security benefit. 228... RETIREMENT ACT COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.19 Reduction for a social security benefit. The tier I annuity component is reduced for the amount of any social security benefit to...

  10. CMS Distributed Computing Integration in the LHC sustained operations era

    NASA Astrophysics Data System (ADS)

    Grandi, C.; Bockelman, B.; Bonacorsi, D.; Fisk, I.; González Caballero, I.; Farina, F.; Hernández, J. M.; Padhi, S.; Sarkar, S.; Sciabà, A.; Sfiligoi, I.; Spiga, F.; Úbeda García, M.; Van Der Ster, D. C.; Zvada, M.

    2011-12-01

    After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless, it is this same need for stability and smooth operations that requires the introduction of features that were not considered strategic in the previous phases. Examples are: adequate authorization to control and prioritize the access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks in the infrastructure; increased automation to reduce the manpower needed for operations; and an effective process for deploying new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity, which is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months as well as the requirements put to Grid and Cloud software developers for the future.

  11. Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2016-01-01

    This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…

  12. 20 CFR 225.10 - General.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... DETERMINATIONS PIA's Used in Computing Employee, Spouse and Divorced Spouse Annuities § 225.10 General. This subpart contains information about the PIA's that can be used in computing most employee, spouse and divorced spouse annuities. The Tier I PIA is used in computing the tier I component of an employee, spouse...

  13. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services. This is partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range; the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting promptly to any unexpected error, ensuring that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.
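
    The "out of range" alarming described above amounts to comparing sampled metrics against configured bounds. The toy sketch below illustrates the idea only; the metric names and thresholds are invented and are not the actual Lemon or SLS configuration.

    ```python
    # Toy version of threshold-based alarming: compare sampled metrics
    # against configured bounds and report violations. Names and bounds
    # are illustrative assumptions.

    THRESHOLDS = {
        "disk_used_fraction": (0.0, 0.90),
        "load_per_core":      (0.0, 4.0),
        "transfer_quality":   (0.80, 1.0),
    }

    def check_metrics(sample: dict) -> list:
        """Return alarm messages for every metric outside its allowed range."""
        alarms = []
        for name, value in sample.items():
            low, high = THRESHOLDS[name]
            if not low <= value <= high:
                alarms.append(f"ALARM: {name}={value} outside [{low}, {high}]")
        return alarms

    print(check_metrics({"disk_used_fraction": 0.95,
                         "load_per_core": 1.2,
                         "transfer_quality": 0.99}))
    ```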

  14. UPR/Mayaguez High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendez, Hector

    This year the University of Puerto Rico at Mayaguez (UPRM) High Energy Physics (HEP) group continued with the ongoing research program outlined in the grant proposal. The program is centered on the Compact Muon Solenoid (CMS) experiment studying proton-proton (pp) collisions at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. The main research focus is on data analysis and on the preparation for the High Luminosity (HL) LHC upgrade of the experiment's detector. The physics data analysis included a Higgs doublet search and measurements of (1) the Λb0 branching fraction, (2) the B meson mass, and (3) the b-hyperon lifetime. The detector upgrade work included preparations for Silicon Sensor Testing of the Forward Pixel (FPIX) detector in a production run at Fermilab. In addition, the group has taken responsibilities in Software Release management through our former research associate Dr. Eric Brownson, who acted until last December as a Level Two Offline Manager for the CMS Upgrade. In support of the CMS data analysis activities carried out locally, the UPRM group has built and maintains an excellent Tier3 analysis center in Mayaguez. This allowed us to analyze large data samples and to continue the development of algorithms for upgrade tracking robustness, which we started several years ago and plan to resume in the near future. This project involves computer simulation of the radiation damage to be suffered at the higher luminosities of the upgraded LHC. This year we continued to serve as a source of outstanding students for the field of high energy physics. Three of our graduate students finished their MS work in May 2014; their thesis research was on data analysis of heavy-quark b-physics. All of them are currently enrolled in Ph.D. physics programs across the nation: Hector Moreno at New Mexico University, Sandra Santiesteban at the University of New Hampshire, and Carlos Malca at the University of Puerto Rico-Rio Piedras. The students H. Moreno and C. Malca have been directly supervised by Dr. Mendez, and S. Santiesteban by Dr. Ramirez. During the last 13 years our group has graduated 23 MS students in experimental High Energy Physics data analysis and applied hardware techniques. Most of the students have been supported by DOE grants, including this grant. Since 2001, Dr. Mendez has directly supervised eleven students, Dr. Ramirez three students, and the former PI (Dr. Lopez) nine students. These theses are fully documented on the group web page (http://charma.uprm.edu). The High Energy Physics group at Mayaguez is small and presently consists of three Physics faculty members: the Senior Investigators Dr. Hector Mendez (Professor) and Dr. Juan Eduardo Ramirez (Professor), and Dr. Sudhir Malik, who was just hired in July 2014. Dr. Ramirez is in charge of the UPRM Tier-3 computing and will be building the network bandwidth infrastructure for the campus, while Dr. Mendez will continue his effort in finishing the heavy-quark physics data analysis and moving on to a SUSY analysis for the 2015 data. Our last grant application in 2012 was awarded only for 2013-2014; as a result, our postdoc position was lost last March. Since then, we have hired Dr. Malik as a new faculty member in order to reinforce the group and to continue our efforts with the CMS experiment. Our plan is to hire another junior faculty member in the next two years to strengthen the HEP group even further. Dr. Mendez continues with QuarkNet activities involving an ever larger group of high school physics teachers from all around Puerto Rico.

  15. 75 FR 30839 - Privacy Act of 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-02

    ... of the Matching Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L.... 100-503, the Computer Matching and Privacy Protection Act (CMPPA) of 1988), the Office of Management... 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer Match No. 1048, IRS...

  16. 26 CFR 1.902-2 - Treatment of deficits in post-1986 undistributed earnings and pre-1987 accumulated profits of a...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... earnings and pre-1987 accumulated profits of a first- or lower-tier corporation for purposes of computing... undistributed earnings and pre-1987 accumulated profits of a first- or lower-tier corporation for purposes of... would be a dividend if there were current or accumulated earnings and profits, then the post-1986...

  17. 78 FR 73195 - Privacy Act of 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-05

    .... Description of the Matching Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub... 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching Program Match No. 1312...). ACTION: Notice of Computer Matching Program (CMP). SUMMARY: In accordance with the requirements of the...

  18. Software Description for the O’Hare Runway Configuration Management System. Volume I. Technical Description,

    DTIC Science & Technology

    1982-10-01

    spent in preparing this document. EXECUTIVE SUMMARY: The O'Hare Runway Configuration Management System (CMS) is an interactive multi-user computer ... MITRE Washington's Computer Center. Currently, CMS is housed in an IBM 4341 computer with the VM/SP operating system. CMS employs IBM's Display ... At O'Hare, it will operate on a dedicated mini-computer which permits multi-tasking (that is, multiple users ...

  19. Implementation of NASTRAN on the IBM/370 CMS operating system

    NASA Technical Reports Server (NTRS)

    Britten, S. S.; Schumacker, B.

    1980-01-01

    The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.

  20. How to Compute a Slot Marker - Calculation of Controller Managed Spacing Tools for Efficient Descents with Precision Scheduling

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas

    2012-01-01

    This paper describes the underlying principles and algorithms for computing the primary controller managed spacing (CMS) tools developed at NASA for precisely spacing aircraft along efficient descent paths. The trajectory-based CMS tools include slot markers, delay indications and speed advisories. These tools are one of three core NASA technologies integrated in NASA's ATM Technology Demonstration-1 (ATD-1), which will operationally demonstrate the feasibility of fuel-efficient, high-throughput arrival operations using Automatic Dependent Surveillance Broadcast (ADS-B) and ground-based and airborne NASA technologies for precision scheduling and spacing.
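
    One plausible reading of the slot-marker idea is an interpolation along a precomputed trajectory: the marker sits where an exactly on-schedule aircraft would be at the current time. The sketch below illustrates that reading only; the trajectory points and the function shape are invented, not the ATD-1 algorithm itself.

    ```python
    # Illustrative slot-marker computation: given a time -> along-path
    # distance trajectory, find the on-schedule position at time `now`.
    # Trajectory points are invented for the example.
    import bisect

    def slot_marker_distance(trajectory, now, delay=0.0):
        """Linearly interpolate along-path distance at time `now - delay`.

        `trajectory` is a list of (time_s, distance_m) pairs sorted by time;
        `delay` is the current schedule deviation in seconds.
        """
        t = now - delay
        times = [p[0] for p in trajectory]
        i = bisect.bisect_right(times, t)
        if i == 0:
            return trajectory[0][1]
        if i == len(trajectory):
            return trajectory[-1][1]
        (t0, d0), (t1, d1) = trajectory[i - 1], trajectory[i]
        return d0 + (d1 - d0) * (t - t0) / (t1 - t0)

    path = [(0, 0.0), (60, 7000.0), (120, 13500.0)]  # hypothetical descent profile
    print(slot_marker_distance(path, now=90))        # -> 10250.0 m along path
    ```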

  1. 42 CFR 414.68 - Imaging accreditation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Computed tomography. (iii) Nuclear medicine. (iv) Positron emission tomography. CMS-approved accreditation... if CMS takes an adverse action based on accreditation findings. (vi) Notify CMS, in writing... organization must permit its surveyors to serve as witnesses if CMS takes an adverse action based on...

  2. 42 CFR 414.68 - Imaging accreditation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Computed tomography. (iii) Nuclear medicine. (iv) Positron emission tomography. CMS-approved accreditation... if CMS takes an adverse action based on accreditation findings. (vi) Notify CMS, in writing... organization must permit its surveyors to serve as witnesses if CMS takes an adverse action based on...

  3. 42 CFR 414.68 - Imaging accreditation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Computed tomography. (iii) Nuclear medicine. (iv) Positron emission tomography. CMS-approved accreditation... if CMS takes an adverse action based on accreditation findings. (vi) Notify CMS, in writing... organization must permit its surveyors to serve as witnesses if CMS takes an adverse action based on...

  4. Screening Chemicals for Estrogen Receptor Bioactivity Using a Computational Model.

    PubMed

    Browne, Patience; Judson, Richard S; Casey, Warren M; Kleinstreuer, Nicole C; Thomas, Russell S

    2015-07-21

    The U.S. Environmental Protection Agency (EPA) is considering high-throughput and computational methods to evaluate the endocrine bioactivity of environmental chemicals. Here we describe a multistep, performance-based validation of new methods and demonstrate that these new tools are sufficiently robust to be used in the Endocrine Disruptor Screening Program (EDSP). Results from 18 estrogen receptor (ER) ToxCast high-throughput screening assays were integrated into a computational model that can discriminate bioactivity from assay-specific interference and cytotoxicity. Model scores range from 0 (no activity) to 1 (bioactivity of 17β-estradiol). ToxCast ER model performance was evaluated against reference chemicals, as well as against the results of EDSP Tier 1 screening assays in current practice. The ToxCast ER model achieved 86% to 93% accuracy when compared to reference chemicals, and predicted the results of EDSP Tier 1 guideline and other uterotrophic studies with 84% to 100% accuracy. The performance of the high-throughput assays and ToxCast ER model predictions demonstrates that these methods correctly identify active and inactive reference chemicals, provide a measure of relative ER bioactivity, and rapidly identify chemicals with potential endocrine bioactivities for additional screening and testing. EPA is accepting ToxCast ER model data for 1812 chemicals as alternatives for EDSP Tier 1 ER binding, ER transactivation, and uterotrophic assays.

  5. [Personal computer-based computer monitoring system of the anesthesiologist (2-year experience in development and use)].

    PubMed

    Buniatian, A A; Sablin, I N; Flerov, E V; Mierbekov, E M; Broĭtman, O G; Shevchenko, V V; Shitikov, I I

    1995-01-01

    Creation of computer monitoring systems (CMS) for operating rooms is one of the most important spheres of personal computer employment in anesthesiology. The authors developed a PC RS/AT-based CMS and used it effectively for more than 2 years. This system permits comprehensive monitoring in cardiosurgical operations by processing in real time the values of arterial and central venous pressure, pressure in the pulmonary artery, bioelectrical activity of the brain, and two temperature values. Use of this CMS helped appreciably improve patients' safety during surgery. The possibility of assessing brain function by computer monitoring of the EEG simultaneously with central hemodynamics and body temperature permits the anesthesiologist to objectively assess the depth of anesthesia and to diagnose cerebral hypoxia. The automated anesthesiological chart issued by the CMS after surgery reliably reflects the patient's status and the measures taken by the anesthesiologist.

  6. SiteDB: Marshalling people and resources available to CMS

    NASA Astrophysics Data System (ADS)

    Metson, S.; Bonacorsi, D.; Dias Ferreira, M.; Egeland, R.

    2010-04-01

    In a collaboration the size of CMS (approx. 3000 users, and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in coordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track sites available to the collaboration, the allocation to CMS of resources available at those sites, and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated to the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site, and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports for other CMS tools to use to access the information it contains, for instance enabling CRAB to use "user friendly" names when blacklisting or whitelisting CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in the external ticketing systems used daily by CMS Computing operations.
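
    A client tool consuming a SiteDB-style API might look like the hedged sketch below. The endpoint path, parameters, and response layout are invented for illustration and are not the real SiteDB schema; only the general pattern (REST query mapping a site name to its resources) follows the abstract.

    ```python
    # Hedged sketch of a client querying a SiteDB-style REST API to map a
    # "user friendly" CMS site name to its CEs. Endpoint and JSON layout
    # are assumptions, not the actual SiteDB interface.
    import requests

    SITEDB_URL = "https://cmsweb.example.cern.ch/sitedb/data"  # placeholder base URL

    def site_ces(site_name: str) -> list:
        """Return the list of CE hostnames registered for `site_name`."""
        resp = requests.get(f"{SITEDB_URL}/site-resources",
                            params={"site": site_name}, timeout=30)
        resp.raise_for_status()
        # Assumed response shape: {"resources": [{"type": "CE", "fqdn": "..."}]}
        return [r["fqdn"] for r in resp.json()["resources"] if r["type"] == "CE"]

    print(site_ces("T2_US_UCSD"))
    ```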

  7. CMS Centres Worldwide - a New Collaborative Infrastructure

    NASA Astrophysics Data System (ADS)

    Taylor, Lucas

    2011-12-01

    The CMS Experiment at the LHC has established a network of more than fifty inter-connected "CMS Centres" at CERN and in institutes in the Americas, Asia, Australasia, and Europe. These facilities are used by people doing CMS detector and computing grid operations, remote shifts, data quality monitoring and analysis, as well as education and outreach. We present the computing, software, and collaborative tools and videoconferencing systems. These include permanently running "telepresence" video links (hardware-based H.323, EVO and Vidyo), Webcasts, and generic Web tools such as CMS-TV for broadcasting live monitoring and outreach information. Being Web-based and experiment-independent, these systems could easily be extended to other organizations. We describe the experiences of using CMS Centres Worldwide in the CMS data-taking operations as well as for major media events with several hundred TV channels, radio stations, and many more press journalists simultaneously around the world.

  8. Muons in the CMS High Level Trigger System

    NASA Astrophysics Data System (ADS)

    Verwilligen, Piet; CMS Collaboration

    2016-04-01

    The trigger systems of LHC detectors play a fundamental role in defining the physics capabilities of the experiments. A reduction of several orders of magnitude in the rate of collected events, with respect to the proton-proton bunch crossing rate generated by the LHC, is mandatory to cope with the limits imposed by the readout and storage system. An accurate and efficient online selection mechanism is thus required to fulfill this task while keeping the acceptance for physics signals maximal. The CMS experiment operates using a two-level trigger system. First, a Level-1 Trigger (L1T) system, implemented using custom-designed electronics, is designed to reduce the event rate to a limit compatible with the CMS Data Acquisition (DAQ) capabilities. A High Level Trigger System (HLT) follows, aimed at further reducing the rate of collected events finally stored for analysis purposes. The latter consists of a streamlined version of the CMS offline reconstruction software and operates on a computer farm. It runs algorithms optimized to make a trade-off between computational complexity, rate reduction and high selection efficiency. With the computing power available in 2012 the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. An efficient selection of muons at HLT, as well as an accurate measurement of their properties, such as transverse momentum and isolation, is fundamental for the CMS physics programme. The performance of the muon HLT for single and double muon triggers achieved in Run I will be presented. Results from new developments, aimed at improving the performance of the algorithms for the harsher scenarios of collisions per event (pile-up) and luminosity expected for Run II, will also be discussed.
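
    The rate and timing figures quoted above fix the scale of the HLT farm; the arithmetic can be made explicit using only the numbers from the abstract.

    ```python
    # At a 100 kHz Level-1 accept rate and ~200 ms of HLT processing per
    # event, the farm must hold rate * time_per_event events in flight.
    l1_rate_hz = 100_000
    hlt_time_per_event_s = 0.200
    events_in_flight = l1_rate_hz * hlt_time_per_event_s
    print(events_in_flight)  # 20000.0 -> roughly 20k concurrently busy cores
    ```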

  9. The Cloud Area Padovana: from pilot to production

    NASA Astrophysics Data System (ADS)

    Andreetto, P.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Sgaravatto, M.; Traldi, S.; Verlato, M.; Zangrando, L.

    2017-10-01

    The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones: currently it provides about 1100 cores. Some in-house developments were also integrated into the OpenStack dashboard, such as a tool for user and project registration with direct support for the INFN-AAI Identity Provider as a new option for user authentication. In collaboration with the EU-funded Indigo DataCloud project, integration with Docker-based containers has been experimented with and will be available in production soon. This computing facility now satisfies the computational and storage demands of more than 70 users affiliated with about 20 research projects. We present here the architecture of this cloud infrastructure and the tools and procedures used to operate it. We also focus on the lessons learnt in these two years, describing the problems that were found and the corrective actions that had to be applied. We discuss the chosen strategy for upgrades, which combines the need to promptly integrate new OpenStack developments, the demand to reduce the downtimes of the infrastructure, and the need to limit the effort required for such updates. We also discuss how this cloud infrastructure is being used, focusing in particular on two big physics experiments which are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission in the Grid environment/local batch queues or for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of the resources, an elastic cluster, based on CernVM, has been configured: it allows virtual machines to be created and deleted automatically according to user needs. SPES, using a client-server system called TraceWin, exploits INFN's virtual resources, performing a very large number of simulations on about a thousand elastically managed nodes.
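
    The elastic cluster described above grows and shrinks with demand. A toy version of the scaling decision is sketched below; the thresholds and the policy itself are invented for illustration, not the actual CernVM-based mechanism.

    ```python
    # Toy decision rule for an elastic cluster: grow when jobs are queued,
    # shrink when workers sit idle. The policy is an invented illustration.

    def scaling_decision(queued_jobs: int, idle_workers: int,
                         max_workers: int, current_workers: int) -> int:
        """Return the change in worker count (+n to boot VMs, -n to delete)."""
        if queued_jobs > 0 and current_workers < max_workers:
            return min(queued_jobs, max_workers - current_workers)
        if queued_jobs == 0 and idle_workers > 0:
            return -idle_workers  # delete idle VMs to free cloud resources
        return 0

    print(scaling_decision(queued_jobs=12, idle_workers=0,
                           max_workers=50, current_workers=45))  # -> +5
    print(scaling_decision(queued_jobs=0, idle_workers=7,
                           max_workers=50, current_workers=30))  # -> -7
    ```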

  10. Deployment and Operational Experiences with CernVM-FS at the GridKa Tier-1 Center

    NASA Astrophysics Data System (ADS)

    Alef, Manfred; Jäger, Axel; Petzold, Andreas; Verstege, Bernhard

    2012-12-01

    In 2012 the GridKa Tier-1 computing center hosts 130 kHS06 of computing resources and 14 PB of disk and 17 PB of tape space. These resources are shared between the four LHC VOs and a number of national and international VOs from high energy physics and other sciences. CernVM-FS has been deployed at GridKa to supplement the existing NFS-based system for accessing VO software on the worker nodes. It provides a solution tailored to the requirements of the LHC VOs. We will focus on the first operational experiences and on the monitoring of CernVM-FS on the worker nodes and the Squid caches.

  11. Extending the farm on external sites: the INFN Tier-1 experience

    NASA Astrophysics Data System (ADS)

    Boccali, T.; Cavalli, A.; Chiarelli, L.; Chierici, A.; Cesini, D.; Ciaschini, V.; Dal Pra, S.; dell'Agnello, L.; De Girolamo, D.; Falabella, A.; Fattibene, E.; Maron, G.; Prosperini, A.; Sapunenko, V.; Virgilio, S.; Zani, S.

    2017-10-01

    The Tier-1 at CNAF is the main INFN computing facility, offering computing and storage resources to more than 30 different scientific collaborations including the 4 experiments at the LHC. A huge increase in computing needs is foreseen in the coming years, mainly driven by the experiments at the LHC (especially starting with Run 3 from 2021) but also by other upcoming experiments such as CTA [1]. While we are considering the upgrade of the infrastructure of our data center, we are also evaluating the possibility of using CPU resources available in other data centres or even leased from commercial cloud providers. Hence, at INFN Tier-1, besides participating in the EU project HNSciCloud, we have also pledged a small amount of computing resources (~2000 cores) located at the Bari ReCaS data center [2] to the WLCG experiments for 2016, and we are testing the use of resources provided by a commercial cloud provider. While the Bari ReCaS data center is directly connected to the GARR network [3], with the obvious advantage of a low-latency and high-bandwidth connection, in the case of the commercial provider we rely only on the General Purpose Network. In this paper we describe the set-up phase and the first results of these installations, started in the last quarter of 2015, focusing on the issues that we have had to cope with and discussing the measured results in terms of efficiency.

  12. 76 FR 14669 - Privacy Act of 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-17

    ... (CMS); and Department of Defense (DoD), Manpower Data Center (DMDC), Defense Enrollment and Eligibility... the results of the computer match and provide the information to TMA for use in its matching program... under TRICARE. DEERS will receive the results of the computer match and provide the information provided...

  13. ATLAS WORLD-cloud and networking in PanDA

    NASA Astrophysics Data System (ADS)

    Barreiro Megino, F.; De, K.; Di Girolamo, A.; Maeno, T.; Walker, R.; ATLAS Collaboration

    2017-10-01

    The ATLAS computing model was originally designed as static clouds (usually national or geographical groupings of sites) around the Tier 1 centres, which confined tasks and most of the data traffic. Since those early days, the sites' network bandwidth has increased by O(1000) and the difference in functionality between Tier 1s and Tier 2s has been reduced. After years of manual, intermediate solutions, we have now ramped up to full usage of World-cloud, the latest step in the PanDA Workload Management System to increase resource utilization on the ATLAS Grid, for all workflows (MC production, data (re)processing, etc.). We have based the development on two new site concepts. Nuclei sites are the Tier 1s and large Tier 2s, where tasks will be assigned and the output aggregated, and satellites are the sites that will execute the jobs and send the output to their nucleus. PanDA dynamically pairs nuclei and satellite sites for each task based on the input data availability, capability matching, site load and network connectivity. This contribution will introduce the conceptual changes for World-cloud, the development necessary in PanDA, an insight into the network model and the first half-year of operational experience.
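
    The pairing criteria named above (data availability, capability, load, network) can be pictured as a weighted score over candidate satellites. The sketch below is purely illustrative: the weights, field names, and numbers are invented, not PanDA's actual brokering logic.

    ```python
    # Illustrative scoring for pairing a task's nucleus with satellite
    # sites, combining the criteria named in the abstract. All weights
    # and inputs are invented for the sketch.

    def satellite_score(site: dict, nucleus: str) -> float:
        """Higher is better; crude weighted sum of the pairing criteria."""
        return (2.0 * site["input_data_fraction"]        # locally available input
                + 1.0 * site["free_slot_fraction"]       # spare CPU capacity
                + 1.5 * site["network_mbps_to"][nucleus] / 10_000.0)

    sites = [
        {"name": "SAT_A", "input_data_fraction": 0.8, "free_slot_fraction": 0.2,
         "network_mbps_to": {"NUCLEUS_1": 9_000}},
        {"name": "SAT_B", "input_data_fraction": 0.1, "free_slot_fraction": 0.9,
         "network_mbps_to": {"NUCLEUS_1": 2_000}},
    ]
    best = max(sites, key=lambda s: satellite_score(s, "NUCLEUS_1"))
    print(best["name"])  # SAT_A: data locality and bandwidth outweigh free slots
    ```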

  14. 20 CFR 225.20 - General.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... DETERMINATIONS PIA's Used in Computing Survivor Annuities and the Amount of the Residual Lump-Sum Payable § 225.20 General. The Survivor Tier I PIA and the Employee RIB PIA are used in computing the tier I component of a survivor annuity. The Combined Earnings PIA, Social Security Earnings PIA and Railroad...

  15. Experimental High Energy Physics Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohlmann, Marcus

    This final report summarizes activities of the Florida Tech High Energy Physics group supported by DOE under grant #DE-SC0008024 during the period June 2012 – March 2015. We focused on one of the main HEP research thrusts at the Energy Frontier by participating in the CMS experiment. We exploited the tremendous physics opportunities at the Large Hadron Collider (LHC) and prepared for physics at its planned extension, the High-Luminosity LHC. The effort comprised a physics component, with analysis of data from the first LHC run, and contributions to the CMS Phase-2 upgrades in the muon endcap system (EMU) for the High-Luminosity LHC. The emphasis of our hardware work was the development of large-area Gas Electron Multipliers (GEMs) for the CMS forward muon upgrade. We built a production and testing site for such detectors at Florida Tech to complement future chamber production at CERN. The first full-scale CMS GE1/1 chamber prototype ever built outside of CERN was constructed at Florida Tech in summer 2013. We conducted two beam tests with GEM prototype chambers, at CERN in 2012 and at FNAL in 2013, and reported the results at conferences and in publications. Principal Investigator Hohlmann served as chair of the collaboration board of the CMS GEM collaboration and as co-coordinator of the GEM detector working group. He edited and authored sections of the detector chapter of the Technical Design Report (TDR) for the GEM muon upgrade, which was approved by the LHCC and the CERN Research Board in 2015. During the course of the TDR approval process, the GEM project was also established as an official subsystem of the muon system by the CMS muon institution board. On the physics side, graduate student Kalakhety performed a Z' search in the dimuon channel with the 2011 and 2012 CMS datasets, which utilized 20.6 fb⁻¹ of p-p collisions at √s = 8 TeV. For the dimuon channel alone, the 95% CL lower limit obtained on the mass of a Z' resonance is 2770 GeV for a Z' with the same standard-model couplings as the Z boson. Our student team operated a Tier-3 cluster on the Open Science Grid (OSG) to support local CMS physics analysis and remote OSG activity. As a service to the HEP community, Hohlmann participated in the Snowmass effort over the course of 2013. Specifically, he acted as a liaison for gaseous detectors between the Instrumentation Frontier and the Energy Frontier and contributed to five papers and reports submitted to the summer study.

  16. Storageless and caching Tier-2 models in the UK context

    NASA Astrophysics Data System (ADS)

    Cadellin Skipsey, Samuel; Dewhurst, Alastair; Crooks, David; MacMahon, Ewan; Roy, Gareth; Smith, Oliver; Mohammed, Kashif; Brew, Chris; Britton, David

    2017-10-01

    Operational and other pressures have led to WLCG experiments moving increasingly to a stratified model for Tier-2 resources, where "fat" Tier-2s ("T2Ds") and "thin" Tier-2s ("T2Cs") provide different levels of service. In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and potential implementation, of such a distinction in our resources. In particular, we present the results of testing the T2C storage model, where the "thin" nature is expressed by the site having either no local data storage or only a thin caching layer; data is streamed or copied from a "nearby" T2D when needed by jobs. In OSG, this model has been adopted successfully for CMS AAA sites, but the network topology and capacity in the USA is significantly different to that in the UK (and much of Europe). We present the results of several operational tests: the in-production University College London (UCL) site, which runs ATLAS workloads using storage at the Queen Mary University of London (QMUL) site; the Oxford site, which has had scaling tests performed against T2Ds in various locations in the UK (to test network effects); and the Durham site, which has been testing the specific ATLAS caching solution of "Rucio Cache" integration with ARC's caching layer.

  17. LHCb experience with LFC replication

    NASA Astrophysics Data System (ADS)

    Bonifazi, F.; Carbone, A.; Perez, E. D.; D'Apice, A.; dell'Agnello, L.; Duellmann, D.; Girone, M.; Re, G. L.; Martelli, B.; Peco, G.; Ricci, P. P.; Sapunenko, V.; Vagnoni, V.; Vitlacil, D.

    2008-07-01

    Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.
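
    The propagation-delay measurement described above can be reduced to a heartbeat probe: insert a timestamped row at the master catalog database and poll the replica until it appears. The sketch below illustrates the idea only; `master` and `replica` are assumed DB-API connections, and the table and bind style are invented, not the LCG 3D test harness.

    ```python
    # Sketch of a replication-lag probe: write at the master, poll the
    # replica. Connections, table, and columns are hypothetical.
    import time

    def replication_lag(master, replica, probe_id: int) -> float:
        """Return seconds between insert at master and visibility at replica."""
        t0 = time.time()
        with master.cursor() as cur:
            cur.execute("INSERT INTO lfc_heartbeat (id, ts) VALUES (:1, :2)",
                        (probe_id, t0))
        master.commit()
        while True:
            with replica.cursor() as cur:
                cur.execute("SELECT 1 FROM lfc_heartbeat WHERE id = :1",
                            (probe_id,))
                if cur.fetchone():
                    return time.time() - t0
            time.sleep(0.5)  # poll interval; lag resolution is ~this value
    ```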

  18. ATLAS Distributed Computing Experience and Performance During the LHC Run-2

    NASA Astrophysics Data System (ADS)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the new model was demonstrated through the delivery of analysis datasets to users just one week after data taking, by completing the calibration loop, Tier-0 processing and train production steps promptly. The great flexibility of the new system also makes it possible to execute part of the Tier-0 processing on the grid when Tier-0 resources experience a backlog during high data-taking periods. The introduction of the data lifetime model, where each dataset is assigned a finite lifetime (with extensions possible for frequently accessed data), was made possible by Rucio. Thanks to this the storage crises experienced in Run-1 have not reappeared during Run-2. In addition, the distinction between Tier-1 and Tier-2 disk storage, now largely artificial given the quality of Tier-2 resources and their networking, has been removed through the introduction of dynamic ATLAS clouds that group the storage endpoint nucleus and its close-by execution satellite sites. All stable ATLAS sites are now able to store unique or primary copies of the datasets. ATLAS Distributed Computing is further evolving to speed up request processing by introducing network awareness, using machine learning and optimisation of the latencies during the execution of the full chain of tasks. The Event Service, a new workflow and job execution engine, is designed around check-pointing at the level of event processing to use opportunistic resources more efficiently. ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the significance of the Monte-Carlo samples to deliver better physics results. The exploitation of opportunistic resources was at an early stage throughout 2015, at the level of 10% of the total ATLAS computing power, but in the next few years it is expected to deliver much more. In addition, demonstrating the ability to use an opportunistic resource can lead to securing ATLAS allocations on the facility, hence the importance of this work goes beyond merely the initial CPU cycles gained. In this paper, we give an overview and compare the performance, development effort, flexibility and robustness of the various approaches.

  19. The JINR Tier1 Site Simulation for Research and Development Purposes

    NASA Astrophysics Data System (ADS)

    Korenkov, V.; Nechaevskiy, A.; Ososkov, G.; Pryahina, D.; Trofimov, V.; Uzhinskiy, A.; Voytishin, N.

    2016-02-01

    Distributed complex computing systems for data storage and processing are in common use in the majority of modern scientific centers. The design of such systems is usually based on recommendations obtained via a preliminary simulation model that is built and executed only once. However, big experiments last for years and decades, and the development of their computing systems goes on, not only quantitatively but also qualitatively. Even with the substantial efforts invested in the design phase to understand the system's configuration, it would be hard to develop a system without additional research into its future evolution. The developers and operators face the problem of predicting the system's behaviour after planned modifications. A system for grid and cloud service simulation has been developed at LIT (JINR, Dubna). This simulation system is focused on improving the efficiency of grid/cloud structure development by using the work-quality indicators of a real system. The development of this kind of software is very important for building new grid/cloud infrastructures for big scientific experiments such as the JINR Tier1 site for WLCG. The simulation of some processes of the Tier1 site is considered as an example of applying our approach.

  20. Developing an Educational Computer Game for Migratory Bird Identification Based on a Two-Tier Test Approach

    ERIC Educational Resources Information Center

    Chu, Hui-Chun; Chang, Shao-Chen

    2014-01-01

    Although educational computer games have been recognized as being a promising approach, previous studies have indicated that, without supportive models, students might only show temporary interest during the game-based learning process, and their learning performance is often not as good as expected. Therefore, in this paper, a two-tier test…

  1. Exploiting volatile opportunistic computing resources with Lobster

    NASA Astrophysics Data System (ADS)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.
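
    Lobster builds on the Work Queue system from CCTools for task management. A minimal Work Queue master in that spirit is sketched below, assuming the CCTools `work_queue` Python bindings are installed; the command and file names are placeholders, and this is not Lobster's actual task handling.

    ```python
    # Minimal Work Queue master, assuming the CCTools `work_queue` Python
    # bindings. Workers attach to the port below without needing root.
    import work_queue as wq

    q = wq.WorkQueue(port=9123)
    print(f"listening on port {q.port}")

    for i in range(10):
        t = wq.Task(f"./analyze.sh input_{i}.dat > output_{i}.txt")  # placeholder command
        t.specify_input_file(f"input_{i}.dat")
        t.specify_output_file(f"output_{i}.txt")
        q.submit(t)

    while not q.empty():
        t = q.wait(5)  # returns a finished task, or None after the timeout
        if t:
            print(f"task {t.id} finished with return status {t.return_status}")
    ```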

  2. The ATLAS Tier-0: Overview and operational experience

    NASA Astrophysics Data System (ADS)

    Elsing, Markus; Goossens, Luc; Nairz, Armin; Negri, Guido

    2010-04-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for prompt processing of the raw data coming from the online DAQ system, to archive the raw and derived data on tape, to register the data with the relevant catalogues and to distribute them to the associated Tier-1 centers. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several "Full Dress Rehearsals" (FDRs) in the course of 2008. The transition from an expert to a shifter-based system was successfully established in July 2008. This article will give an overview of the Tier-0 system, its data and work flows, and operations model. It will review the operational experience gained in cosmic, commissioning, and FDR exercises during the past year. And it will give an outlook on planned developments and the evolution of the system towards first collision data taking expected now in late Autumn 2009.

  3. HTTP as a Data Access Protocol: Trials with XrootD in CMS’s AAA Project

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B. P.; Kcira, D.; Newman, H.; Vlimant, J.; Hendricks, T. W.; CMS Collaboration

    2017-10-01

    The main goal of the project is to demonstrate the ability to use HTTP data federations in a manner analogous to the existing AAA infrastructure of the CMS experiment. An initial testbed at Caltech has been built, and changes in the CMS software (CMSSW) are being implemented in order to improve HTTP support. The testbed consists of a set of machines at the Caltech Tier2 that improve the support infrastructure for data federations at CMS. As a first step, we are building systems that produce and ingest network data transfers of up to 80 Gbps. In collaboration with AAA, HTTP support is enabled at the US redirector and the Caltech testbed. A plugin for CMSSW is being developed for HTTP access based on the DaviX software; it will replace the present fork/exec or curl for HTTP access. In addition, extensions to the XRootD HTTP implementation are being developed to add functionality to it, such as client-based monitoring identifiers. In the future, patches will be developed to better integrate HTTP-over-XRootD with the Open Science Grid (OSG) distribution. First results of the transfer tests using HTTP are presented in this paper, together with details about the initial setup.
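
    The essential primitive behind HTTP-based data access is the ranged GET, which lets a client fetch only the bytes a job needs. The sketch below shows that primitive with the standard `requests` library; the URL is a placeholder, and the project's actual client path goes through DaviX inside CMSSW rather than Python.

    ```python
    # Reading a byte range over HTTP: the building block of HTTP data
    # access for partial-object reads. URL is a hypothetical placeholder.
    import requests

    def read_bytes(url: str, offset: int, length: int) -> bytes:
        """Fetch `length` bytes starting at `offset` via an HTTP Range request."""
        headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
        resp = requests.get(url, headers=headers, timeout=60)
        resp.raise_for_status()
        # 206 Partial Content confirms the server honoured the range.
        assert resp.status_code == 206, "server must support partial content"
        return resp.content

    header = read_bytes("https://xrootd-http.example.org/store/file.root", 0, 300)
    print(len(header))
    ```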

  4. 76 FR 33380 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-08

    ... Two New Pricing Tiers, Investor Tier 1 and Investor Tier 2 June 3, 2011. Pursuant to Section 19(b)(1... Services (the ``Schedule'') to introduce two new pricing tiers, Investor Tier 1 and Investor Tier 2. The... proposes to introduce two new pricing tier levels, Investor Tier 1 and Investor Tier 2. Investor Tier 1...

  5. Tier2 Submit Software

    EPA Pesticide Factsheets

    Download this tool for Windows or Mac, which helps facilities prepare a Tier II electronic chemical inventory report. The data can also be exported into the CAMEOfm (Computer-Aided Management of Emergency Operations) emergency planning software.

  6. Grid Computing at GSI for ALICE and FAIR - present and future

    NASA Astrophysics Data System (ADS)

    Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten

    2012-12-01

    The future FAIR experiments CBM and PANDA have computing requirements that cannot currently be satisfied by a single computing centre. A larger, distributed computing infrastructure is needed to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a tier2 centre for ALICE@CERN. The central component of the GSI computing facility, and hence the core of the ALICE tier2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments, and accessible both locally and soon also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware ‘AliEn’, the Grid infrastructure for PANDA and CBM is being built. Besides a tier0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE tier2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.

  7. Comparison of the accuracy of maxillary position between conventional model surgery and virtual surgical planning.

    PubMed

    Ritto, F G; Schmitt, A R M; Pimentel, T; Canellas, J V; Medeiros, P J

    2018-02-01

    The aim of this study was to determine whether virtual surgical planning (VSP) is an accurate method for positioning the maxilla when compared to conventional articulator model surgery (CMS), through the superimposition of computed tomography (CT) images. This retrospective study included the records of 30 adult patients who underwent bimaxillary orthognathic surgery. Two groups were created according to the treatment planning performed: CMS and VSP. The treatment planning protocol was the same for all patients. Pre- and postoperative CT images were superimposed and the linear distances between upper jaw reference points were measured. Measurements were then compared to the treatment plan, and the difference in accuracy between CMS and VSP was determined using the t-test for independent samples. The success criterion adopted was a mean linear difference of < 2 mm. The mean linear difference between planned and obtained movements was 1.27 ± 1.05 mm for CMS and 1.20 ± 1.08 mm for VSP. With CMS, 80% of overlapping reference points had a difference of < 2 mm, while for VSP this value was 83.6%. There was no statistically significant difference between the two techniques regarding accuracy (P > 0.05). Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  8. Changing the batch system in a Tier 1 computing center: why and how

    NASA Astrophysics Data System (ADS)

    Chierici, Andrea; Dal Pra, Stefano

    2014-06-01

    At the Italian Tier1 Center at CNAF we are evaluating the possibility of changing the current production batch system. This activity is motivated mainly by the search for a more flexible licensing model as well as by the desire to avoid vendor lock-in. We performed a technology-tracking exercise and, among many possible solutions, chose to evaluate Grid Engine as an alternative, because its adoption is increasing in the HEPiX community and because it is supported by the EMI middleware that we currently use on our computing farm. Another INFN site evaluated Slurm, and we will compare our results in order to understand the pros and cons of the two solutions. We will present the results of our evaluation of Grid Engine, in order to understand whether it can fit the requirements of a Tier 1 center, compared to the solution we adopted long ago. We performed a survey and a critical re-evaluation of our farming infrastructure: many production software tools (accounting and monitoring above all) rely on our current solution, and changing it required us to write new wrappers and adapt the infrastructure to the new system. We believe the results of this investigation can be very useful to other Tier-1 and Tier-2 centers in a similar situation, where the effort of switching may appear too hard to sustain. We will provide guidelines to help understand how difficult this operation can be and how long the change may take.

  9. 76 FR 40974 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-12

    ... Two New Pricing Tiers, Step-Up Tier 1 and Step-Up Tier 2 July 6, 2011. Pursuant to Section 19(b)(1) of... Services (the ``Schedule'') to introduce two new pricing tiers, Step-Up Tier 1 and Step-Up Tier 2. The text... Arca proposes to introduce two new pricing tier levels, Step-Up Tier 1 and Step-Up Tier 2. Step-Up Tier...

  10. A transient response analysis of the space shuttle vehicle during liftoff

    NASA Technical Reports Server (NTRS)

    Brunty, J. A.

    1990-01-01

    A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step. These coefficients are obtained by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods: the Lanczos method, and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method. However, when a large number of Lanczos vectors was computed, input/output time increased, raising the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.
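
    The power-series treatment of the interface forces can be written compactly. The equations below are a hedged reconstruction from the abstract's description, in our own notation rather than the paper's: the interface force is expanded over one integration step, each structure is integrated separately under that loading, and the coefficients are fixed by interface compatibility at the end of the step.

    ```latex
    % Sketch (our notation, not the paper's): interface force expanded in
    % a power series over one step, coefficients fixed by compatibility
    % between the shuttle and the mobile launch platform (MLP).
    \begin{align}
      f(t) &\approx \sum_{k=0}^{n} a_k\, t^k, \\
      M_s \ddot{x}_s + C_s \dot{x}_s + K_s x_s &= P_s(t) + B_s f(t),
          \qquad s \in \{\text{shuttle},\, \text{MLP}\}, \\
      B_{\text{shuttle}}^{\mathsf T} x_{\text{shuttle}}(t_{i+1})
          &= B_{\text{MLP}}^{\mathsf T} x_{\text{MLP}}(t_{i+1}).
    \end{align}
    ```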

  11. 20 CFR 228.16 - Adjustments in the age reduction factor (ARF).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Adjustments in the age reduction factor (ARF... RETIREMENT ACT COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.16 Adjustments in the age reduction factor (ARF). Upon the attainment of retirement age, the previously-computed age reduction factor...

  12. 20 CFR 228.16 - Adjustments in the age reduction factor (ARF).

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Adjustments in the age reduction factor (ARF... RETIREMENT ACT COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.16 Adjustments in the age reduction factor (ARF). Upon the attainment of retirement age, the previously-computed age reduction factor...

  13. 20 CFR 228.16 - Adjustments in the age reduction factor (ARF).

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Adjustments in the age reduction factor (ARF... RETIREMENT ACT COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.16 Adjustments in the age reduction factor (ARF). Upon the attainment of retirement age, the previously-computed age reduction factor...

  14. 20 CFR 228.16 - Adjustments in the age reduction factor (ARF).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Adjustments in the age reduction factor (ARF... RETIREMENT ACT COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.16 Adjustments in the age reduction factor (ARF). Upon the attainment of retirement age, the previously-computed age reduction factor...

  15. 20 CFR 228.16 - Adjustments in the age reduction factor (ARF).

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Adjustments in the age reduction factor (ARF... RETIREMENT ACT COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.16 Adjustments in the age reduction factor (ARF). Upon the attainment of retirement age, the previously-computed age reduction factor...

  16. 20 CFR 226.14 - Employee regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  17. Benchmarking high performance computing architectures with CMS’ skeleton framework

    NASA Astrophysics Data System (ADS)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1 and 2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.
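
    The measurement idea behind such a skeleton suite is to run an artificial graph of cheap stand-in "modules" and compare the parallel wall time against the serial baseline, so that any shortfall from ideal scaling exposes scheduling overhead. The sketch below is an illustrative Python stand-in for that idea only; the actual CMS suite is C++ and was used to compare TBB against other threading technologies:

        import time
        from concurrent.futures import ProcessPoolExecutor

        def dummy_module(n):
            # Stand-in for one framework module: a small, fixed CPU burn.
            s = 0
            for i in range(n):
                s += i * i
            return s

        def run(n_tasks, n_workers, work=20000):
            start = time.perf_counter()
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                list(pool.map(dummy_module, [work] * n_tasks))
            return time.perf_counter() - start

        if __name__ == "__main__":
            serial = run(n_tasks=400, n_workers=1)
            for w in (2, 4, 8):
                # Speedup below w reveals scheduling/communication overhead.
                print(f"{w} workers: speedup {serial / run(400, w):.2f}")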

  18. Future Approach to tier-0 extension

    NASA Astrophysics Data System (ADS)

    Jones, B.; McCance, G.; Cordeiro, C.; Giordano, D.; Traylen, S.; Moreno García, D.

    2017-10-01

    The current tier-0 processing at CERN is done on two managed sites, the CERN computer centre and the Wigner computer centre. With the proliferation of public cloud resources at increasingly competitive prices, we have been investigating how to transparently increase our compute capacity to include these providers. The approach taken has been to integrate these resources using our existing deployment and computer management tools and to provide them in a way that exposes them to users as part of the same site. The paper will describe the architecture, the toolset and the current production experiences of this model.

  19. Tracking at High Level Trigger in CMS

    NASA Astrophysics Data System (ADS)

    Tosi, M.

    2016-04-01

    The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude in the event rate is needed to reach values compatible with detector readout, offline storage, and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of physics objects as well as in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We will present the performance of the HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments made towards the next data taking in 2015.
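
    The figures quoted above imply a simple sizing rule for a software trigger farm: the number of cores kept busy is roughly the input rate times the per-event time budget. A back-of-the-envelope check (illustrative, not an official CMS capacity figure):

        l1_rate_hz = 100e3   # nominal L1T output rate (100 kHz)
        hlt_time_s = 0.200   # ~200 ms average HLT budget per event (2012)

        # Each core finishes 1 / hlt_time_s events per second, so:
        cores_needed = l1_rate_hz * hlt_time_s
        print(f"~{cores_needed:.0f} cores kept busy")   # ~20000 cores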

  20. Analyzing the Use of Concept Maps in Computer Science: A Systematic Mapping Study

    ERIC Educational Resources Information Center

    dos Santos, Vinicius; de Souza, Érica F.; Felizardo, Katia R; Vijaykumar, Nandamudi L.

    2017-01-01

    Context: Concept Maps (CMs) enable the creation of a schematic representation of domain knowledge. For this reason, CMs have been applied in different research areas, including Computer Science. Objective: the objective of this paper is to present the results of a systematic mapping study conducted to collect and evaluate existing research on…

  1. Investigating Gender and Racial/Ethnic Invariance in Use of a Course Management System in Higher Education

    ERIC Educational Resources Information Center

    Li, Yi; Wang, Qiu; Campbell, John

    2015-01-01

    This study focused on learning equity in colleges and universities where teaching and learning depend heavily on computer technologies. The study used Structural Equation Modeling (SEM) to investigate gender and racial/ethnic heterogeneity in the use of a computer-based course management system (CMS). Two latent variables (CMS usage and…

  2. Validation and Application of a Real-time PCR Protocol for the Specific Detection and Quantification of Clavibacter michiganensis subsp. sepedonicus in Potato.

    PubMed

    Cho, Min Seok; Park, Duck Hwan; Namgung, Min; Ahn, Tae-Young; Park, Dong Suk

    2015-06-01

    Clavibacter michiganensis subsp. sepedonicus (Cms) multiplies very rapidly, passing through the vascular strands and into the stems and petioles of a diseased potato. Therefore, the rapid and specific detection of this pathogen is highly important for its effective control. Although several PCR assays have been developed for detection, they do not afford specific detection of Cms. Therefore, in this study, a computational genome analysis was performed to compare the sequenced genomes of the C. michiganensis subspecies and to identify an appropriate gene for the development of a subspecies-specific PCR primer set (Cms89F/R). The specificity of the primer set, based on a putative phage-related protein, was evaluated using genomic DNA from seven isolates of Cms and 27 other reference strains. The Cms89F/R primer set was more specific and sensitive than the existing assays in detecting Cms in vitro using Cms cells and their genomic DNA. This assay was able to detect at least 1.47×10² copies/μl of cloned-amplified target DNA, 5 fg of genomic DNA, or a 10⁻⁶ dilution of a calibrated cell suspension (OD600 of 0.12) per reaction.
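
    For context on how detection limits such as the 1.47×10² copies/μl above are established, real-time PCR quantification interpolates a sample's quantification cycle (Cq) on a log-linear standard curve. A generic sketch; the curve parameters below are invented for illustration and are not those of this assay:

        # Hypothetical standard curve: Cq = slope * log10(copies) + intercept
        slope, intercept = -3.32, 38.0   # slope near -3.32 corresponds to ~100% efficiency

        def copies_from_cq(cq):
            return 10 ** ((cq - intercept) / slope)

        efficiency = 10 ** (-1 / slope) - 1
        print(f"PCR efficiency: {efficiency:.1%}")                   # ~100%
        print(f"Cq 31.0 -> {copies_from_cq(31.0):.0f} copies/ul")    # ~128 copies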

  3. Validation and Application of a Real-time PCR Protocol for the Specific Detection and Quantification of Clavibacter michiganensis subsp. sepedonicus in Potato

    PubMed Central

    Cho, Min Seok; Park, Duck Hwan; Namgung, Min; Ahn, Tae-Young; Park, Dong Suk

    2015-01-01

    Clavibacter michiganensis subsp. sepedonicus (Cms) multiplies very rapidly, passing through the vascular strands and into the stems and petioles of a diseased potato. Therefore, the rapid and specific detection of this pathogen is highly important for its effective control. Although several PCR assays have been developed for detection, they do not afford specific detection of Cms. Therefore, in this study, a computational genome analysis was performed to compare the sequenced genomes of the C. michiganensis subspecies and to identify an appropriate gene for the development of a subspecies-specific PCR primer set (Cms89F/R). The specificity of the primer set, based on a putative phage-related protein, was evaluated using genomic DNA from seven isolates of Cms and 27 other reference strains. The Cms89F/R primer set was more specific and sensitive than the existing assays in detecting Cms in vitro using Cms cells and their genomic DNA. This assay was able to detect at least 1.47×10² copies/μl of cloned-amplified target DNA, 5 fg of genomic DNA, or a 10⁻⁶ dilution of a calibrated cell suspension (OD600 of 0.12) per reaction. PMID:26060431

  4. A search for a heavy Majorana neutrino and a radiation damage simulation for the HF detector

    NASA Astrophysics Data System (ADS)

    Wetzel, James William

    A search for heavy Majorana neutrinos is performed using an event signature defined by two same-sign muons accompanied by two jets. This search is an extension of previous searches (L3, DELPHI, CMS, ATLAS), using 19.7 fb⁻¹ of data from the 2012 Large Hadron Collider experimental run collected by the Compact Muon Solenoid experiment. A mass window of 40-500 GeV/c² is explored. No excess of events above Standard Model backgrounds is observed, and limits are set on the mixing element squared, |VμN|², as a function of the Majorana neutrino mass. The Hadronic Forward (HF) detector's performance will degrade as a function of the number of particles delivered to the detector over time, a quantity referred to as integrated luminosity and measured in inverse femtobarns (fb⁻¹). In order to better plan detector upgrades, the CMS Forward Calorimetry Task Force (FCAL) group and the CMS Hadronic Calorimeter (HCAL) group have requested that radiation damage be simulated and the subsequent performance of the HF subdetector be studied. The simulation was implemented in both the CMS FastSim and CMS FullSim simulation packages. Standard calorimetry performance metrics were computed and are reported. The HF detector can expect to perform well through the planned delivery of 3000 fb⁻¹.

  5. Tiered Human Integrated Sequence Search Databases for Shotgun Proteomics.

    PubMed

    Deutsch, Eric W; Sun, Zhi; Campbell, David S; Binz, Pierre-Alain; Farrah, Terry; Shteynberg, David; Mendoza, Luis; Omenn, Gilbert S; Moritz, Robert L

    2016-11-04

    The results of analysis of shotgun proteomics mass spectrometry data can be greatly affected by the selection of the reference protein sequence database against which the spectra are matched. For many species there are multiple sources from which somewhat different sequence sets can be obtained. This can lead to confusion about which database is best in which circumstances, a problem especially acute in human sample analysis. All sequence databases are genome-based, with sequences for the predicted genes and their protein translation products compiled. Our goal is to create a set of primary sequence databases that comprise the union of sequences from many of the different available sources and make the result easily available to the community. We have compiled a set of four sequence databases of varying sizes, from a small database consisting of only the ∼20,000 primary isoforms plus contaminants to a very large database that includes almost all nonredundant protein sequences from several sources. This set of tiered, increasingly complete human protein sequence databases suitable for mass spectrometry proteomics sequence database searching is called the Tiered Human Integrated Search Proteome set. In order to evaluate the utility of these databases, we have analyzed two different data sets, one from the HeLa cell line and the other from normal human liver tissue, with each of the four tiers of database complexity. The result is that approximately 0.8%, 1.1%, and 1.5% additional peptides can be identified for Tiers 2, 3, and 4, respectively, as compared with the Tier 1 database, at substantially increasing computational cost. This increase in computational cost may be worth bearing if the identification of sequence variants or the discovery of sequences that are not present in the reviewed knowledge base entries is an important goal of the study. We find that it is useful to search a data set against a simpler database, and then check the uniqueness of the discovered peptides against a more complex database. We have set up an automated system that downloads all the source databases on the first of each month and automatically generates a new set of search databases and makes them available for download at http://www.peptideatlas.org/thisp/.
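
    The two-stage strategy recommended above (search against a simpler tier, then check each discovered peptide's uniqueness against a more complex tier) reduces, at its core, to a membership test over enzymatically derived peptides. A toy sketch; the sequences and the simplified cleavage rule are illustrative only:

        import re

        def tryptic_peptides(protein):
            # Simplified trypsin rule: cleave after K or R, except before P.
            return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

        tier1 = {"MKWVTFISLLLLFSSAYSR", "GVFRR"}               # toy Tier 1 sequences
        tier4 = tier1 | {"MKWVTFISLLLLFSSAYSRGL", "AAAKPLR"}   # toy larger Tier 4

        def peptide_index(db):
            index = {}
            for prot in db:
                for pep in tryptic_peptides(prot):
                    index.setdefault(pep, set()).add(prot)
            return index

        idx4 = peptide_index(tier4)
        for pep in tryptic_peptides("MKWVTFISLLLLFSSAYSR"):
            hits = idx4[pep]
            # Peptides unique in Tier 1 may map to several Tier 4 proteins.
            print(pep, "unique" if len(hits) == 1 else f"shared by {len(hits)} proteins")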

  6. Tiered Human Integrated Sequence Search Databases for Shotgun Proteomics

    PubMed Central

    Deutsch, Eric W.; Sun, Zhi; Campbell, David S.; Binz, Pierre-Alain; Farrah, Terry; Shteynberg, David; Mendoza, Luis; Omenn, Gilbert S.; Moritz, Robert L.

    2016-01-01

    The results of analysis of shotgun proteomics mass spectrometry data can be greatly affected by the selection of the reference protein sequence database against which the spectra are matched. For many species there are multiple sources from which somewhat different sequence sets can be obtained. This can lead to confusion about which database is best in which circumstances – a problem especially acute in human sample analysis. All sequence databases are genome-based, with sequences for the predicted genes and their protein translation products compiled. Our goal is to create a set of primary sequence databases that comprise the union of sequences from many of the different available sources and make the result easily available to the community. We have compiled a set of four sequence databases of varying sizes, from a small database consisting of only the ~20,000 primary isoforms plus contaminants to a very large database that includes almost all non-redundant protein sequences from several sources. This set of tiered, increasingly complete human protein sequence databases suitable for mass spectrometry proteomics sequence database searching is called the Tiered Human Integrated Search Proteome set. In order to evaluate the utility of these databases, we have analyzed two different data sets, one from the HeLa cell line and the other from normal human liver tissue, with each of the four tiers of database complexity. The result is that approximately 0.8%, 1.1%, and 1.5% additional peptides can be identified for Tiers 2, 3, and 4, respectively, as compared with the Tier 1 database, at substantially increasing computational cost. This increase in computational cost may be worth bearing if the identification of sequence variants or the discovery of sequences that are not present in the reviewed knowledge base entries is an important goal of the study. We find that it is useful to search a data set against a simpler database, and then check the uniqueness of the discovered peptides against a more complex database. We have set up an automated system that downloads all the source databases on the first of each month and automatically generates a new set of search databases and makes them available for download at http://www.peptideatlas.org/thisp/. PMID:27577934

  7. 45 CFR 150.429 - Computation of time and extensions of time.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Computation of time and extensions of time. 150.429 Section 150.429 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS Administrative Hearings...

  8. 45 CFR 150.429 - Computation of time and extensions of time.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Computation of time and extensions of time. 150.429 Section 150.429 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS Administrative Hearings...

  9. The CMS High Level Trigger System: Experience and Future Development

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.

    2012-12-01

    The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the 2010/2011 collider run is reported. The current architecture of the CMS HLT and its integration with the CMS reconstruction framework and the CMS DAQ are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.

  10. Candidate Reference Genes Selection and Application for RT-qPCR Analysis in Kenaf with Cytoplasmic Male Sterility Background

    PubMed Central

    Zhou, Bujin; Chen, Peng; Khan, Aziz; Zhao, Yanhong; Chen, Lihong; Liu, Dongmei; Liao, Xiaofang; Kong, Xiangjun; Zhou, Ruiyang

    2017-01-01

    Cytoplasmic male sterility (CMS) is a maternally inherited trait that results in the production of dysfunctional pollen. Based on reliable reference gene-normalized real-time quantitative PCR (RT-qPCR) data, examining gene expression profiles can provide valuable information on the molecular mechanism of kenaf CMS. However, studies have not been conducted regarding the selection of reference genes for normalizing RT-qPCR data in the CMS and maintainer lines of the kenaf crop. Therefore, we studied 10 candidate reference genes (ACT3, ELF1A, G6PD, PEPKR1, TUB, TUA, CYP, GAPDH, H3, and 18S) to assess their expression stability at three stages of pollen development in CMS line 722A and maintainer line 722B of kenaf. Five computational statistical approaches (GeNorm, NormFinder, ΔCt, BestKeeper, and RefFinder) were used to evaluate the expression stability levels of these genes. According to RefFinder and GeNorm, the combination of TUB, CYP, and PEPKR1 was identified as an internal control for accurate normalization across all sample sets, which was further confirmed by validating the expression of HcPDIL5-2a. Furthermore, the combination of TUB, CYP, and PEPKR1 was used to differentiate the expression pattern of five mitochondrial F1F0-ATPase subunit genes (atp1, atp4, atp6, atp8, and atp9) by RT-qPCR during pollen development in CMS line 722A and maintainer line 722B. We found that atp1, atp6, and atp9 exhibited significantly different expression patterns during pollen development in line 722A compared with line 722B. This is the first systematic study of reference gene selection for CMS and will provide useful information for future research on gene expression and the molecular mechanisms underlying CMS in kenaf. PMID:28919905

  11. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. Finally, we will consider the changes in the working model of HEP computing in a domain where large-scale resources can be scheduled at peak times.

  12. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. Finally, we will consider the changes in the working model of HEP computing in a domain where large-scale resources can be scheduled at peak times.

  13. 78 FR 42080 - Privacy Act of 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-15

    ... with the Department of Defense (DoD), Defense Manpower Data Center (DMDC). We have provided background... & Medicaid Services and the Department of Defense, Defense Manpower Data Center for the Determination of...), Centers for Medicare & Medicaid Services (CMS), and Department of Defense (DoD), Defense Manpower Data...

  14. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE PAGES

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-11-23

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1 and 2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  15. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1 and 2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  16. 42 CFR 413.337 - Methodology for calculating the prospective payment rates.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...

  17. 42 CFR 413.337 - Methodology for calculating the prospective payment rates.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...

  18. 42 CFR 413.337 - Methodology for calculating the prospective payment rates.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...

  19. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo

    Here, computing systems for LHC experiments have developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and their recent evolution can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.

  20. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE PAGES

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo; ...

    2017-12-06

    Here, computing systems for LHC experiments have developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and their recent evolution can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.

  1. 42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... when it makes the determination. (2) Enrollment. CMS makes a further adjustment to remove the cost...) Age, sex, and disability status. CMS makes adjustments to reflect the age and sex distribution and the...

  2. 42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... when it makes the determination. (2) Enrollment. CMS makes a further adjustment to remove the cost...) Age, sex, and disability status. CMS makes adjustments to reflect the age and sex distribution and the...

  3. 12 CFR 567.0 - Scope.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...)(2) and 567.8 with tier 1 capital, as computed under sections 11 and 12 of Appendix C of this part... applies to all savings associations, except as described in paragraph (b) of this section. (b)(1) A... § 567.11, which supplement the reservations of authority at section 1 of Appendix C of this part. [72 FR...

  4. Effects of physician payment reform on provision of home dialysis.

    PubMed

    Erickson, Kevin F; Winkelmayer, Wolfgang C; Chertow, Glenn M; Bhattacharya, Jay

    2016-06-01

    Patients with end-stage renal disease can receive dialysis at home or in-center. In 2004, CMS reformed physician payment for in-center hemodialysis care from a capitated to a tiered fee-for-service model, augmenting physician payment for frequent in-center visits. We evaluated whether payment reform influenced dialysis modality assignment. Cohort study of patients starting dialysis in the United States in the 3 years before and the 3 years after payment reform. We conducted difference-in-difference analyses comparing patients with traditional Medicare coverage (who were affected by the policy) to others with Medicare Advantage (who were unaffected by the policy). We also examined whether the policy had a more pronounced influence on dialysis modality assignment in areas with lower costs of traveling to dialysis facilities. Patients with traditional Medicare coverage experienced a 0.7% (95% CI, 0.2%-1.1%; P = .003) reduction in the absolute probability of home dialysis use following payment reform compared with patients with Medicare Advantage. Patients living in areas with larger dialysis facilities (where payment reform made in-center hemodialysis comparatively more lucrative for physicians) experienced a 0.9% (95% CI, 0.5%-1.4%; P < .001) reduction in home dialysis use following payment reform compared with patients living in areas with smaller facilities (where payment reform made in-center hemodialysis comparatively less lucrative for physicians). The transition from a capitated to a tiered fee-for-service payment model for in-center hemodialysis care resulted in fewer patients receiving home dialysis. This area of policy failure highlights the importance of considering unintended consequences of future physician payment reform efforts.
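
    The difference-in-difference design above compares the change in home-dialysis uptake across the 2004 policy date between affected (traditional Medicare) and unaffected (Medicare Advantage) patients; the policy effect is the interaction term. A minimal sketch on synthetic data (variable names and the effect size are invented, not the study's data):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 20000
        df = pd.DataFrame({
            "post": rng.integers(0, 2, n),         # 1 = after the 2004 payment reform
            "traditional": rng.integers(0, 2, n),  # 1 = traditional Medicare (affected)
        })
        # Synthetic truth: a small negative interaction, as the study reports.
        p_home = 0.08 - 0.007 * df.post * df.traditional
        df["home"] = (rng.random(n) < p_home).astype(float)

        # Linear probability model; the DiD estimate is the interaction coefficient.
        fit = smf.ols("home ~ post * traditional", data=df).fit()
        print(fit.params["post:traditional"])   # recovers roughly -0.007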

  5. WLCG scale testing during CMS data challenges

    NASA Astrophysics Data System (ADS)

    Gutsche, O.; Hajdu, C.

    2008-07-01

    The CMS computing model to process and analyze LHC collision data follows a data-location-driven approach and uses the WLCG infrastructure to provide access to GRID resources. In preparation for data taking, CMS tests its computing model during dedicated data challenges. An important part of the challenges is the test of user analysis, which poses a special challenge for the infrastructure with its random, distributed access patterns. The CMS Remote Analysis Builder (CRAB) handles all interactions with the WLCG infrastructure transparently for the user. During the 2006 challenge, CMS set the goal of testing the infrastructure at a scale of 50,000 user jobs per day using CRAB. Both direct submissions by individual users and automated submissions by robots were used to achieve this goal. A report will be given about the outcome of the user analysis part of the challenge using both the EGEE and OSG parts of the WLCG. In particular, the difference in submission between the two GRID middlewares (resource broker vs. direct submission) will be discussed. Finally, an outlook for the 2007 data challenge is given.

  6. Multi-Dimensional Optimization for Cloud Based Multi-Tier Applications

    ERIC Educational Resources Information Center

    Jung, Gueyoung

    2010-01-01

    Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these…

  7. The Role of Computational Modeling and Simulation in the Total Product Life Cycle of Peripheral Vascular Devices

    PubMed Central

    Morrison, Tina M.; Dreher, Maureen L.; Nagaraja, Srinidhi; Angelone, Leonardo M.; Kainz, Wolfgang

    2018-01-01

    The total product life cycle (TPLC) of medical devices has been defined by four stages: discovery and ideation, regulatory decision, product launch, and postmarket monitoring. Manufacturers of medical devices intended for use in the peripheral vasculature, such as stents, inferior vena cava (IVC) filters, and stent-grafts, mainly use computational modeling and simulation (CM&S) to aid device development and design optimization, supplement bench testing for regulatory decisions, and assess postmarket changes or failures. For example, computational solid mechanics and fluid dynamics enable the investigation of design limitations in the ideation stage. To supplement bench data in regulatory submissions, manufacturers can evaluate the effects of anatomical characteristics and expected in vivo loading environment on device performance. Manufacturers might also harness CM&S to aid root-cause analyses that are necessary when failures occur postmarket, when the device is exposed to broad clinical use. Once identified, CM&S tools can then be used for redesign to address the failure mode and re-establish the performance profile with the appropriate models. The Center for Devices and Radiological Health (CDRH) wants to advance the use of CM&S for medical devices and supports the development of virtual physiological patients, clinical trial simulations, and personalized medicine. Thus, the purpose of this paper is to describe specific examples of how CM&S is currently used to support regulatory submissions at different phases of the TPLC and to present some of the stakeholder-led initiatives for advancing CM&S for regulatory decision-making. PMID:29479395

  8. The Role of Computational Modeling and Simulation in the Total Product Life Cycle of Peripheral Vascular Devices.

    PubMed

    Morrison, Tina M; Dreher, Maureen L; Nagaraja, Srinidhi; Angelone, Leonardo M; Kainz, Wolfgang

    2017-01-01

    The total product life cycle (TPLC) of medical devices has been defined by four stages: discovery and ideation, regulatory decision, product launch, and postmarket monitoring. Manufacturers of medical devices intended for use in the peripheral vasculature, such as stents, inferior vena cava (IVC) filters, and stent-grafts, mainly use computational modeling and simulation (CM&S) to aid device development and design optimization, supplement bench testing for regulatory decisions, and assess postmarket changes or failures. For example, computational solid mechanics and fluid dynamics enable the investigation of design limitations in the ideation stage. To supplement bench data in regulatory submissions, manufacturers can evaluate the effects of anatomical characteristics and expected in vivo loading environment on device performance. Manufacturers might also harness CM&S to aid root-cause analyses that are necessary when failures occur postmarket, when the device is exposed to broad clinical use. Once identified, CM&S tools can then be used for redesign to address the failure mode and re-establish the performance profile with the appropriate models. The Center for Devices and Radiological Health (CDRH) wants to advance the use of CM&S for medical devices and supports the development of virtual physiological patients, clinical trial simulations, and personalized medicine. Thus, the purpose of this paper is to describe specific examples of how CM&S is currently used to support regulatory submissions at different phases of the TPLC and to present some of the stakeholder-led initiatives for advancing CM&S for regulatory decision-making.

  9. Using the CMS threaded framework in a production environment

    DOE PAGES

    Jones, C. D.; Contreras, L.; Gartung, P.; ...

    2015-12-23

    During 2014, the CMS Offline and Computing Organization completed the necessary changes to use the CMS threaded framework in the full production environment. We will briefly discuss the design of the CMS Threaded Framework, in particular how the design affects scaling performance. We will then cover the effort involved in getting both the CMSSW application software and the workflow management system ready for using multiple threads for production. Finally, we will present metrics on the performance of the application and workflow system as well as the difficulties which were uncovered. We will end with CMS' plans for using the threaded framework to do production for LHC Run 2.
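
    A useful frame for the scaling discussion above is Amdahl's law: if a fraction p of the per-event work can be spread over n threads, the speedup is bounded by 1/((1-p) + p/n). A quick illustration with invented numbers:

        def amdahl_speedup(p, n):
            # p: parallelizable fraction of the work; n: number of threads
            return 1.0 / ((1.0 - p) + p / n)

        for n in (2, 4, 8, 16):
            print(n, round(amdahl_speedup(0.95, n), 2))
        # Even with 95% of the work parallel, 16 threads give only ~9.1x,
        # which is why residual serial sections dominate framework design.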

  10. A comparison of Tier 1 and Tier 3 medical homes under Oklahoma Medicaid program.

    PubMed

    Kumar, Jay I; Anthony, Melody; Crawford, Steven A; Arky, Ronald A; Bitton, Asaf; Splinter, Garth L

    2014-04-01

    The patient-centered medical home (PCMH) is a team-based model of care that seeks to improve quality of care and control costs. The Oklahoma Health Care Authority (OHCA) directs Oklahoma's Medicaid program and contracts with 861 medical home practices across the state in one of three tiers of operational capacity: Tier 1 (Basic), Tier 2 (Advanced) and Tier 3 (Optimal). Only 13.5% (n = 116) of homes are at the optimal level; the majority (59%, n = 508) are at the basic level. In this study, we sought to determine the barriers that prevented Tier 1 homes from advancing to the Tier 3 level and the incentives that would motivate providers to advance from Tier 1 to 3. Our hypotheses were that Tier 1 medical homes were located in smaller practices with limited resources and that the providers were not convinced that the expense of advancing from Tier 1 status to Tier 3 status was worth the added value. We analyzed OHCA records to compare the 508 Tier 1 (entry-level) with the 116 Tier 3 (optimal) medical homes for demographic differences with regard to location (urban or rural), duration as a medical home, percentage of contracts that were group contracts, number of providers per group contract, panel age range, panel size, and member-provider ratio. We surveyed all 508 Tier 1 homes with a mail-in survey, and with focused follow-up visits, to identify the barriers to, and incentives for, upgrading from Tier 1 to Tier 2 or 3. We found that Tier 1 homes were more likely to be in rural areas, run by solo practitioners, serve exclusively adult panels, have smaller panel sizes, and have higher member-to-provider ratios in comparison with Tier 3 homes. Our survey had a 35% response rate. Results showed that the most difficult changes for Tier 1 homes to implement were providing 4 hours of after-hours care and a dedicated program for mental illness and substance abuse. The results also showed that the most compelling incentives for encouraging Tier 1 homes to upgrade their tier status were less "red tape" with prior authorizations, higher pay, and help with panel member follow-up. Multiple interventions may help medical homes in Oklahoma advance from the basic to the optimal level, such as sharing of resources among nearby practices, expansion of OHCA online resources to help with preauthorizations and patient follow-up, and the generation and transmission of data on the benefits of medical homes.

  11. Impact of Reimbursement Cuts on the Sustainability and Accessibility of Dopamine Transporter Imaging.

    PubMed

    Covington, Matthew F; McMillan, Natalie A; Kuo, Phillip H

    2016-09-01

    Dopamine transporter single-photon emission computed tomography imaging utilizing iodine-123 ioflupane is accurate for differentiation of Parkinson disease from essential tremor. This study evaluates how reimbursement for I-123 ioflupane imaging changed between 2011 (year of FDA approval) and 2014 (year after loss of pass-through status for hospital-based outpatient imaging from CMS). I-123 ioflupane reimbursement data for our institution's hospital-based imaging were compared between two periods: (1) July 2011 to October 2012, and (2) 2014. For each time period separately and in combination, averages and ranges of reimbursement for private insurance and CMS were analyzed and compared. A model to ensure recouping of radiopharmaceutical costs was developed. Review yielded 247 studies from July 2011 to October 2012 and 94 studies from 2014. Average reimbursement per study fell from $2,469 (US dollars) in 2011 to 2012 to $1,657 in 2014. CMS reduced average reimbursement by $1,148 in 2014 because of loss of radiopharmaceutical pass-through status. Average reimbursements from CMS versus private payors markedly differed in 2011 to 2012 at $2,266 versus $2,861, respectively, and in 2014 at $1,118 versus $3,470, respectively. Between 2011 to 2012 and 2014, the CMS percentage increased from 54% to 78%. Assuming that I-123 ioflupane cost $2,000, our model based on 2014 data predicts a practice with greater than 60% CMS patients would no longer recover radiopharmaceutical costs. Reimbursement levels, payor mix, scanner location, and radiopharmaceutical costs are all critical, variable factors for modeling the financial viability of I-123 ioflupane imaging and, by extrapolation, future radiopharmaceuticals. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
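
    The >60% break-even figure above follows from simple payor-mix arithmetic: with 2014 average reimbursements of $1,118 (CMS) and $3,470 (private) and an assumed $2,000 radiopharmaceutical cost, the CMS fraction f at which average reimbursement just covers the dose solves f*1118 + (1-f)*3470 = 2000:

        cms, private, dose_cost = 1118.0, 3470.0, 2000.0
        f_breakeven = (private - dose_cost) / (private - cms)
        # Above this CMS share, average reimbursement no longer covers the dose.
        print(f"break-even CMS fraction: {f_breakeven:.1%}")   # ~62.5%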

  12. Cardiometabolic Syndrome in People With Spinal Cord Injury/Disease: Guideline-Derived and Nonguideline Risk Components in a Pooled Sample.

    PubMed

    Nash, Mark S; Tractenberg, Rochelle E; Mendez, Armando J; David, Maya; Ljungberg, Inger H; Tinsley, Emily A; Burns-Drecq, Patricia A; Betancourt, Luisa F; Groah, Suzanne L

    2016-10-01

    To assess cardiometabolic syndrome (CMS) risk definitions in spinal cord injury/disease (SCI/D). Cross-sectional analysis of a pooled sample. Two SCI/D academic medical and rehabilitation centers. Baseline data from subjects in 7 clinical studies were pooled; not all variables were collected in all studies; therefore, participant numbers varied from 119 to 389. The pooled sample included men (79%) and women (21%) with SCI/D >1 year at spinal cord levels spanning C3-T2 (American Spinal Injury Association Impairment Scale [AIS] grades A-D). Not applicable. We computed the prevalence of CMS using the American Heart Association/National Heart, Lung, and Blood Institute guideline (CMS diagnosis as sum of risks ≥3 method) for the following risk components: overweight/obesity, insulin resistance, hypertension, and dyslipidemia. We compared this prevalence with the risk calculated from 2 routinely used nonguideline CMS risk assessments: (1) key cut scores identifying insulin resistance derived from the homeostatic model 2 (HOMA2) method or quantitative insulin sensitivity check index (QUICKI), and (2) a cardioendocrine risk ratio based on an inflammation (C-reactive protein [CRP])-adjusted total cholesterol/high-density lipoprotein cholesterol ratio. After adjustment for multiple comparisons, injury level and AIS grade were unrelated to CMS or risk factors. Of the participants, 13% and 32.1% had CMS when using the sum of risks or HOMA2/QUICKI model, respectively. Overweight/obesity and (pre)hypertension were highly prevalent (83% and 62.1%, respectively), with risk for overweight/obesity being significantly associated with CMS diagnosis (sum of risks, χ²=10.105; adjusted P=.008). Insulin resistance was significantly associated with CMS when using the HOMA2/QUICKI model (χ²(2)=21.23, adjusted P<.001). Of the subjects, 76.4% were at moderate to high risk from elevated CRP, which was significantly associated with CMS determination (both methods; sum of risks, χ²(2)=10.198; adjusted P=.048 and HOMA2/QUICKI, χ²(2)=10.532; adjusted P=.04). As expected, guideline-derived CMS risk factors were prevalent in individuals with SCI/D. Overweight/obesity, hypertension, and elevated CRP were common in SCI/D and, because they may compound risks associated with CMS, should be considered population-specific risk determinants. Heightened surveillance for risk, and adoption of healthy living recommendations specifically directed toward weight reduction, hypertension management, and inflammation control, should be incorporated as a priority for disease prevention and management. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
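
    Computationally, the guideline "sum of risks ≥3" determination used above is a count over binary risk components; the clinical cut-points that set each flag live in the cited guideline and are omitted here. A schematic:

        def cms_by_sum_of_risks(components):
            # components: dict of binary risk flags derived from guideline cut-points
            return sum(bool(v) for v in components.values()) >= 3

        print(cms_by_sum_of_risks({
            "overweight_obesity": True,
            "insulin_resistance": False,
            "hypertension": True,
            "dyslipidemia": True,
        }))   # True: 3 of the 4 assessed components are present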

  13. A comparison between physicians and computer algorithms for form CMS-2728 data reporting.

    PubMed

    Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon

    2017-01-01

    Form CMS-2728 (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies questioned the validity of physician reporting on forms CMS-2728. We hypothesize that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion and is therefore more reflective of the underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had an incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical records systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing were used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to the presence or absence according to the algorithms. Computer algorithms had higher reporting of comorbidities compared to form completion by physicians. This remained true when decreasing the data span to one year and using only a single health center source. The algorithms' determinations were well accepted by a physician panel. Importantly, use of the algorithms significantly increased the expected deaths and lowered the standardized mortality ratios. Using computer algorithms showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adapting similar algorithms in available EMR systems may offer more thorough evaluation of comorbidities and improve quality reporting. © 2016 International Society for Hemodialysis.
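
    In its simplest form, the kind of algorithm described above reduces to code-set matching over collated billing data. A toy sketch; the code sets and comorbidity names are illustrative only and are not those used in the study:

        # Hypothetical, simplified comorbidity rules over billing codes.
        COMORBIDITY_CODES = {
            "diabetes":      {"250.00", "250.02"},
            "heart_failure": {"428.0"},
            "copd":          {"496"},
        }

        def comorbidities_from_claims(claim_codes):
            # Flag a comorbidity when any supporting code appears in the record.
            present = set(claim_codes)
            return sorted(name for name, codes in COMORBIDITY_CODES.items()
                          if codes & present)

        print(comorbidities_from_claims({"250.00", "401.9", "428.0"}))
        # ['diabetes', 'heart_failure']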

  14. Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program

    DOEpatents

    Hively, Lee M [Philadelphia, TN]

    2011-07-12

    The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

  15. A Non-Equilibrium Sediment Transport Model for Coastal Inlets and Navigation Channels

    DTIC Science & Technology

    2011-01-01

    exchange of water, sediment, and nutrients between estuaries and the ocean. Because of the multiple interacting forces (waves, wind, tide, river...in parallel using OpenMP. The CMS takes advantage of the Surface-water Modeling System (SMS) interface for grid generation and model setup, as well as for plotting and post-processing (Zundel, 2000). The circulation model in the CMS (called CMS-Flow) computes the unsteady water level and

  16. 24 CFR 203.605 - Loss mitigation performance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... performance. (1) HUD will measure and advise mortgagees of their loss mitigation performance through the Tier... mitigation attempts, defaults, and claims. Based on the ratios, HUD will group mortgagees in four tiers (Tiers 1, 2, 3, and 4), with Tier 1 representing the highest or best ranking mortgagees and Tier 4...

  17. The complications and the position of the Codman MicroSensor™ ICP device: an analysis of 549 patients and 650 Sensors.

    PubMed

    Koskinen, Lars-Owe D; Grayson, David; Olivecrona, Magnus

    2013-11-01

    Complications and insertion depth of the Codman MicroSensor ICP monitoring device (CMS) are not well studied. Our aim was to study complications and the insertion depth of the CMS in a clinical setting. We identified all patients who had their intracranial pressure (ICP) monitored using a CMS device between 2002 and 2010. The medical records and post-implantation computed tomography (CT) scans were analyzed for the occurrence of infection, hemorrhage, and insertion depth. In all, 549 patients were monitored using 650 CMS devices. Mean monitoring time was 7.0 ± 4.9 days. The mean implantation depth was 21.3 ± 11.1 mm (0-88 mm). In 27 of the patients, a haematoma was identified; 26 of these were less than 1 ml, and one was 8 ml. No clinically significant bleeding was found. There was no statistically significant increase in the number of hemorrhages in presumed coagulopathic patients. The infection rate was 0.6% and the calculated infection rate per 1,000 catheter days was 0.8. The risk for hemorrhagic and infectious complications when using the CMS for ICP monitoring is low. The depth of insertion varies considerably and should be taken into account if patients are treated with head elevation, since the pressure is measured at the tip of the sensor. To meet the need for ICP monitoring, an intraparenchymal ICP monitoring device should be preferred to the use of an external ventricular drainage (EVD).
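
    The two infection figures above are linked by the monitored time: the rate per 1,000 catheter-days is the infection count divided by the total monitoring days, times 1,000. Reproducing the order of magnitude with an inferred count (the exact number of infections is not stated in the abstract):

        sensors, mean_days = 650, 7.0
        infections = 4                        # inferred from ~0.6% of 650 sensors
        catheter_days = sensors * mean_days   # 4550 monitored days

        print(f"{infections / sensors:.1%} of sensors infected")          # ~0.6%
        print(f"{infections / catheter_days * 1000:.1f} per 1,000 days")  # ~0.9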

  18. 40 CFR 1043.50 - Approval of methods to meet Tier 1 retrofit NOX standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Approval of methods to meet Tier 1... SUBJECT TO THE MARPOL PROTOCOL § 1043.50 Approval of methods to meet Tier 1 retrofit NOX standards... enable Pre-Tier 1 engines to meet the Tier 1 NOX standard of regulation 13 of Annex VI. Any person may...

  19. 40 CFR 1033.102 - Transition to the standards of this part.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Tier 0 and Tier 1 standards of § 1033.101 apply for new locomotives beginning January 1, 2010, except as specified in § 1033.150(a). The Tier 0 and Tier 1 standards of 40 CFR part 92 apply for earlier... locomotives beginning January 1, 2013. The Tier 2 standards of 40 CFR part 92 apply for earlier model years...

  20. Lessons learned from the ATLAS performance studies of the Iberian Cloud for the first LHC running period

    NASA Astrophysics Data System (ADS)

    Sánchez-Martínez, V.; Borges, G.; Borrego, C.; del Peso, J.; Delfino, M.; Gomes, J.; González de la Hoz, S.; Pacheco Pages, A.; Salt, J.; Sedov, A.; Villaplana, M.; Wolters, H.

    2014-06-01

    In this contribution we describe the performance of the Iberian (Spain and Portugal) ATLAS cloud during the first LHC running period (March 2010-January 2013) in the context of the GRID Computing and Data Distribution Model. The evolution of the resources for CPU, disk and tape in the Iberian Tier-1 and Tier-2s is summarized. The data distribution over all ATLAS destinations is shown, focusing on the number of files transferred and the size of the data. The status and distribution of simulation and analysis jobs within the cloud are discussed. The Distributed Analysis tools used to perform physics analysis are explained as well. Cloud performance in terms of the availability and reliability of its sites is discussed. The effect of the changes in the ATLAS Computing Model on the cloud is analyzed. Finally, the readiness of the Iberian Cloud towards the first Long Shutdown (LS1) is evaluated and an outline of the foreseen actions to take in the coming years is given. The shutdown will be a good opportunity to improve and evolve the ATLAS Distributed Computing system to prepare for the future challenges of the LHC operation.

  1. 40 CFR 92.305 - Credit generation and use calculation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... and Tier 1 PM line-haul credits; Std=0.59 g/kW-hr, for Tier 0 and Tier 1 PM switch credits; and Std.... For Tier 1 and Tier 2 engine families, the FEL may not exceed the limit established in § 92.304(k) for...). Consistent units are to be used throughout the calculation. (1) When useful life is expressed in terms of...

  2. Genetic and economic analyses of female replacement rates in the dam-daughter pathway of a hierarchical swine breeding structure.

    PubMed

    Faust, M A; Robison, O W; Tess, M W

    1992-07-01

    A stochastic life-cycle swine production model was used to study the effect of female replacement rates in the dam-daughter pathway for a tiered breeding structure on genetic change and returns to the breeder. Genetic, environmental, and economic parameters were used to simulate characteristics of individual pigs in a system producing F1 female replacements. Evaluated were maximum culling ages for nucleus and multiplier tier sows. System combinations included one- and five-parity alternatives for both levels and 10-parity options for the multiplier tier. Yearly changes and average phenotypic levels were computed for performance and economic measures. Generally, at the nucleus level, responses to 10 yr of selection for sow and pig performance in five-parity herds were 70 to 85% of response in one-parity herds. Similarly, the highest selection responses in multiplier herds were from systems with one-parity nucleus tiers. Responses in these were typically greater than 115% of the response for systems with the smallest yearly change, namely, the five-parity nucleus and five- and 10-parity multiplier levels. In contrast, the most profitable multiplier tiers (10-parity) had the lowest replacement costs. Within a multiplier culling strategy, rapid genetic change was desirable. Differences between systems that culled after five or 10 parities were smaller than differences between five- and one-parity multiplier options. To recover production costs, systems with the lowest returns required 140% of market hog value for gilts available to commercial tiers, whereas more economically efficient systems required no premium.

  3. GPUs for statistical data analysis in HEP: a performance study of GooFit on GPUs vs. RooFit on CPUs

    NASA Astrophysics Data System (ADS)

    Pompili, Alexis; Di Florio, Adriano; CMS Collaboration

    2016-10-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented both in ROOT/RooFit and GooFit frameworks with the purpose to estimate the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open data analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performances with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service and a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks Theorem may or may not apply because its regularity conditions are not satisfied.
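
    The toy Monte Carlo procedure referred to here is standard: generate pseudo-experiments under the background-only hypothesis, fit each with and without a signal term, histogram the likelihood-ratio statistic, and compare the observed value against both the toy distribution and the Wilks asymptotic expectation. A minimal CPU-only sketch in plain NumPy/SciPy (not GooFit/CUDA; the model, yields and the "observed" value are invented placeholders) follows:

```python
# Toy-MC significance estimation: uniform background plus a Gaussian signal.
# q = 2*(lnL at best-fit signal fraction - lnL at f_sig = 0), compared with
# the half-chi-square (Wilks, parameter on the boundary) expectation.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, chi2

rng = np.random.default_rng(1)
LO, HI, MU, SIGMA = 0.0, 1.0, 0.5, 0.05      # fit window, fixed signal shape

def nll(f_sig, x):
    pdf = (1 - f_sig) / (HI - LO) + f_sig * norm.pdf(x, MU, SIGMA)
    return -np.sum(np.log(pdf))

def q_statistic(x):
    fit = minimize_scalar(nll, args=(x,), bounds=(0.0, 0.5), method='bounded')
    return 2.0 * (nll(0.0, x) - fit.fun)

# Background-only pseudo-experiments map the null distribution of q.
toys = [q_statistic(rng.uniform(LO, HI, size=500)) for _ in range(200)]
q_obs = 9.0                                  # placeholder "observed" value
p_toy = np.mean(np.array(toys) >= q_obs)     # toy-MC p-value
p_wilks = 0.5 * chi2.sf(q_obs, df=1)         # asymptotic p-value
print(f"toy p-value ~ {p_toy:.3f}, Wilks p-value ~ {p_wilks:.4f}")
```

    Probing the small p-values relevant for a discovery claim requires millions of such toys rather than the 200 above, which is precisely why the GPU parallelisation discussed in these papers pays off.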

  4. Statistical significance estimation of a signal within the GooFit framework on GPUs

    NASA Astrophysics Data System (ADS)

    Cristella, Leonardo; Di Florio, Adriano; Pompili, Alexis

    2017-03-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented both in ROOT/RooFit and GooFit frameworks with the purpose to estimate the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open data analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performances with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service and a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks Theorem may or may not apply because its regularity conditions are not satisfied.

  5. Performance studies of GooFit on GPUs vs RooFit on CPUs while estimating the statistical significance of a new physical signal

    NASA Astrophysics Data System (ADS)

    Di Florio, Adriano

    2017-10-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented both in ROOT/RooFit and GooFit frameworks with the purpose to estimate the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open data analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performances with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service and a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks Theorem may or may not apply because its regularity conditions are not satisfied.

  6. 20 CFR 226.16 - Supplemental annuity.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Supplemental annuity. 226.16 Section 226.16... EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.16 Supplemental annuity. A supplemental annuity is payable in addition to tiers I and II and the vested dual benefit to an...

  7. A gene expression biomarker accurately predicts estrogen receptor α modulation in a human gene expression compendium

    EPA Science Inventory

    The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1...

  8. Electron-Ion Recombination Rate Coefficient Measurements in a Flowing Afterglow Plasma

    NASA Technical Reports Server (NTRS)

    Gougousi, Theodosia; Golde, Michael F.; Johnsen, Rainer

    1996-01-01

    The flowing-afterglow technique in conjunction with computer modeling of the flowing plasma has been used to determine accurate dissociative-recombination rate coefficients alpha for the ions O2(+), HCO(+), CH5(+), C2H5(+), H3O(+), CO2(+), HCO2(+), HN2O(+), and N2O(+) at 295 K. We find that the simple form of data analysis that was employed in earlier experiments was adequate, and we largely confirm earlier results. In the case of HCO(+) ions, published coefficients range from 1.1 × 10⁻⁷ to 2.8 × 10⁻⁷ cm³/s, while our measurements give a value of 1.9 × 10⁻⁷ cm³/s.
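
    As a reasoning aid (not the paper's actual analysis chain, which includes full plasma modeling): in a quasi-neutral, recombination-dominated afterglow the electron density obeys dn_e/dt = -α n_e², so 1/n_e grows linearly with time and α is the slope. A minimal sketch with synthetic data is given below; the density scale and noise level are assumptions.

```python
# Extracting a recombination coefficient alpha from a simulated afterglow
# decay: dn/dt = -alpha*n^2  =>  1/n(t) = 1/n0 + alpha*t (linear in t).
import numpy as np

rng = np.random.default_rng(0)
ALPHA_TRUE = 1.9e-7                      # cm^3/s, the HCO+ value quoted above
N0 = 1.0e10                              # assumed initial density, cm^-3
t = np.linspace(0.0, 2.0e-3, 40)         # seconds along the flow tube
n_e = 1.0 / (1.0 / N0 + ALPHA_TRUE * t)  # exact recombination decay
n_e *= 1.0 + 0.02 * rng.standard_normal(t.size)   # 2% measurement noise

slope, intercept = np.polyfit(t, 1.0 / n_e, 1)    # linear fit of 1/n_e vs t
print(f"fitted alpha = {slope:.2e} cm^3/s (true {ALPHA_TRUE:.1e})")
```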

  9. Virtual pools for interactive analysis and software development through an integrated Cloud environment

    NASA Astrophysics Data System (ADS)

    Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.

    2011-12-01

    WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on-demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need interactive and local access to a number of systems. WNoDeS can dynamically provision such computers by instantiating Virtual Machines according to users' requirements (computing, storage and network resources), through either the Open Cloud Computing Interface API or a web console. Interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In other instances the activity concerns the development and testing of services and thus implies modification of the system configuration (and, therefore, root access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.
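
    The session-lifecycle distinction between the two use cases can be sketched as follows. This is a hypothetical illustration of the described logic, not the WNoDeS implementation or its OCCI interface; all names are invented.

```python
# Hypothetical session manager: user-space sessions reuse a stock image,
# while root-access development sessions snapshot the modified image at
# session end so it can be presented again at the next request.
from dataclasses import dataclass, field

@dataclass
class VirtualPool:
    saved_images: dict = field(default_factory=dict)   # user -> image id

    def start_session(self, user: str, root_access: bool) -> str:
        if root_access and user in self.saved_images:
            return self.saved_images[user]     # resume the saved image
        return "stock-interactive-image"

    def end_session(self, user: str, image: str, root_access: bool) -> None:
        # Only root sessions may have changed the system configuration,
        # so only those images are snapshotted for later reuse.
        if root_access:
            self.saved_images[user] = f"snapshot-of-{image}-for-{user}"

pool = VirtualPool()
img = pool.start_session("alice", root_access=True)
pool.end_session("alice", img, root_access=True)
print(pool.start_session("alice", root_access=True))   # resumes her snapshot
```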

  10. Potential of One-to-One Technology Uses and Pedagogical Practices: Student Agency and Participation in an Economically Disadvantaged Eighth Grade

    ERIC Educational Resources Information Center

    Andrade Johnson, Maria Dulce Silva

    2017-01-01

    The accelerated growth of 1:1 educational computing initiatives has challenged digital equity with a three-tiered, socioeconomic digital divide: (a) access, (b) higher order uses, and (c) user empowerment and personalization. As the access gap has been closing, the exponential increase of 1:1 devices threatens to widen the second and third digital…

  11. Expansion of Enterprise Requirements and Acquisition Model

    DTIC Science & Technology

    2012-06-04

    upgrades in technology that made it more lethal with a smaller force. Computer technology, GPS, and stealth are just a few examples that allowed...The facility consists of banks of networked computers, large displays all built around a centralized workspace. It can be seen in Figure 3. The...first was to meet a gap in UHF satellite communications for the Navy. This was satisfied as a Tier-1 program by purchasing additional bandwidth

  12. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    NASA Astrophysics Data System (ADS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which has proven successful and still meets its goals. However, Grid technology has not spread much beyond these communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their existing computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fairshare-based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static splitting of the computing resources in the Tier-1 farm, while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition, according to suitable policies for the request and release of computing resources. Nodes brought into the partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as a Worker Node in the batch system farm to being a cloud compute node made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation, and its integration with our current batch system, LSF.
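
    The partitioning idea itself is simple enough to sketch. The following is an illustrative toy (not the CNAF implementation, which is integrated with LSF and OpenStack): hosts are drained from the batch farm and handed to the cloud partition when cloud demand rises, and returned when it falls, instead of splitting the farm statically.

```python
# Toy dynamic partition: nodes switch between batch Worker Node role and
# cloud compute node role according to current cloud demand.
class DynamicPartition:
    def __init__(self, hosts):
        self.batch = set(hosts)    # nodes serving the batch system
        self.cloud = set()         # nodes handed to cloud tenants

    def rebalance(self, cloud_nodes_wanted: int) -> None:
        # Grow the cloud partition by draining batch nodes...
        while len(self.cloud) < cloud_nodes_wanted and self.batch:
            self.cloud.add(self.batch.pop())
        # ...and shrink it when demand drops, returning nodes to batch.
        while len(self.cloud) > cloud_nodes_wanted:
            self.batch.add(self.cloud.pop())

farm = DynamicPartition([f"wn{i:03d}" for i in range(10)])
farm.rebalance(4)    # cloud demand rises: 4 hosts switch role
farm.rebalance(1)    # demand drops: 3 hosts return to the batch farm
print(sorted(farm.batch), sorted(farm.cloud))
```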

  13. 30 CFR 57.5067 - Engines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) light duty truck 0.1 g/mile. 40 CFR 86.094-11(a)(1)(iv)(B) heavy duty highway engine 0.1 g/bhp-hr. 40... g/bhp-hr). tier 1 (8≤kW<19). tier 1 (19≤kW<37). tier 2 (37≤kW<75). tier 2 (75≤kW<130)...

  14. Development, deployment and operations of ATLAS databases

    NASA Astrophysics Data System (ADS)

    Vaniachine, A. V.; Schmitt, J. G. v. d.

    2008-07-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.

  15. Critical thinking traits of top-tier experts and implications for computer science education

    NASA Astrophysics Data System (ADS)

    Bushey, Dean E.

    A documented shortage of technical leadership and top-tier performers in computer science jeopardizes the technological edge, security, and economic well-being of the nation. The 2005 President's Information and Technology Advisory Committee (PITAC) Report on competitiveness in computational sciences highlights the major impact of science, technology, and innovation in keeping America competitive in the global marketplace. It stresses the fact that the supply of science, technology, and engineering experts is at the core of America's technological edge, national competitiveness and security. However, recent data shows that both undergraduate and postgraduate production of computer scientists is falling. The decline is "a quiet crisis building in the United States," a crisis that, if allowed to continue unchecked, could endanger America's well-being and preeminence among the world's nations. Past research on expert performance has shown that the cognitive traits of critical thinking, creativity, and problem solving possessed by top-tier performers can be identified, observed and measured. The studies show that the identified attributes are applicable across many domains and disciplines. Companies have begun to realize that cognitive skills are important for high-level performance and are reevaluating the traditional academic standards they have used to predict success for their top-tier performers in computer science. Previous research in the computer science field has focused either on programming skills of its experts or has attempted to predict the academic success of students at the undergraduate level. This study, on the other hand, examines the critical-thinking skills found among experts in the computer science field in order to explore the questions, "What cognitive skills do outstanding performers possess that make them successful?" and "How do currently used measures of academic performance correlate to critical-thinking skills among students?" The results of this study suggest a need to examine how critical-thinking abilities are learned in the undergraduate computer science curriculum and the need to foster these abilities in order to produce the high-level, critical-thinking professionals necessary to fill the growing need for these experts. Due to the fact that current measures of academic performance do not adequately depict students' cognitive abilities, assessment of these skills must be incorporated into existing curricula.

  16. Accelerating chronically unresponsive children to tier 3 instruction: what level of data is necessary to ensure selection accuracy?

    PubMed

    Compton, Donald L; Gilbert, Jennifer K; Jenkins, Joseph R; Fuchs, Douglas; Fuchs, Lynn S; Cho, Eunsoo; Barquero, Laura A; Bouton, Bobette

    2012-01-01

    Response-to-intervention (RTI) approaches to disability identification are meant to put an end to the so-called wait-to-fail requirement associated with IQ discrepancy. However, in an unfortunate irony, there is a group of children who wait to fail in RTI frameworks. That is, they must fail both general classroom instruction (Tier 1) and small-group intervention (Tier 2) before becoming eligible for the most intensive intervention (Tier 3). The purpose of this article was to determine how to predict accurately which at-risk children will be unresponsive to Tiers 1 and 2, thereby allowing unresponsive children to move directly from Tier 1 to Tier 3. As part of an efficacy study of a multitier RTI approach to prevention and identification of reading disabilities (RD), 129 first-grade children who were unresponsive to classroom reading instruction were randomly assigned to 14 weeks of small-group, Tier 2 intervention. Nonresponders to this instruction (n = 33) were identified using local norms on first-grade word identification fluency growth linked to a distal outcome of RD at the end of second grade. Logistic regression models were used to predict membership in responder and nonresponder groups. Predictors were entered as blocks of data from least to most difficult to obtain: universal screening data, Tier 1 response data, norm-referenced tests, and Tier 2 response data. Tier 2 response data were not necessary to classify students as responders and nonresponders to Tier 2 instruction, suggesting that some children can be accurately identified as eligible for Tier 3 intervention using only Tier 1 data, thereby avoiding prolonged periods of failure to respond to instruction.
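
    The prediction strategy (adding predictor blocks from cheapest to most expensive to obtain and asking whether classification improves) can be sketched schematically as below; the data and block names are simulated placeholders, not the study's variables.

```python
# Block-wise logistic regression: add each predictor block in order of
# collection cost and report cross-validated classification accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 129                                   # sample size matching the study
blocks = {
    "universal_screening": rng.normal(size=(n, 2)),
    "tier1_response":      rng.normal(size=(n, 2)),
    "norm_referenced":     rng.normal(size=(n, 2)),
    "tier2_response":      rng.normal(size=(n, 2)),
}
y = rng.integers(0, 2, size=n)            # 1 = nonresponder (placeholder)

X = np.empty((n, 0))
for name, block in blocks.items():
    X = np.hstack([X, block])             # grow the predictor set block by block
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"+ {name:20s} cross-validated accuracy = {acc:.2f}")
```

    On real data, the study's finding corresponds to the accuracy curve plateauing before the last (Tier 2) block is added.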

  17. Acute tier-1 and tier-2 effect assessment approaches in the EFSA Aquatic Guidance Document: are they sufficiently protective for insecticides?

    PubMed

    van Wijngaarden, René P A; Maltby, Lorraine; Brock, Theo C M

    2015-08-01

    The objective of this paper is to evaluate whether the acute tier-1 and tier-2 methods as proposed by the Aquatic Guidance Document recently published by the European Food Safety Authority (EFSA) are appropriate for deriving regulatory acceptable concentrations (RACs) for insecticides. The tier-1 and tier-2 RACs were compared with RACs based on threshold concentrations from micro/mesocosm studies (ETO-RAC). A lower-tier RAC was considered as sufficiently protective, if less than the corresponding ETO-RAC. ETO-RACs were calculated for repeated (n = 13) and/or single pulsed applications (n = 17) of 26 insecticides to micro/mesocosms, giving a maximum of 30 insecticide × application combinations (i.e. cases) for comparison. Acute tier-1 RACs (for 24 insecticides) were lower than the corresponding ETO-RACs in 27 out of 29 cases, while tier-2 Geom-RACs (for 23 insecticides) were lower in 24 out of 26 cases. The tier-2 SSD-RAC (for 21 insecticides) using HC5/3 was lower than the ETO-RAC in 23 out of 27 cases, whereas the tier-2 SSD-RAC using HC5/6 was protective in 25 out of 27 cases. The tier-1 and tier-2 approaches proposed by EFSA for acute effect assessment are sufficiently protective for the majority of insecticides evaluated. Further evaluation may be needed for insecticides with more novel chemistries (neonicotinoids, biopesticides) and compounds that show delayed effects (insect growth regulators). © 2014 Society of Chemical Industry.
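
    For readers unfamiliar with the tier-2 SSD route, the calculation being evaluated is: fit a log-normal species sensitivity distribution (SSD) to acute toxicity values, take its 5th percentile (HC5), divide by an assessment factor (3 or 6), and compare the resulting RAC with the mesocosm-derived ETO-RAC. A minimal sketch with invented numbers:

```python
# Tier-2 SSD-RAC sketch: log-normal SSD fit, HC5, assessment factor, and a
# protectiveness check against a (hypothetical) mesocosm ETO-RAC.
import numpy as np
from scipy.stats import norm

ec50 = np.array([0.8, 1.5, 2.2, 3.9, 6.5, 12.0])   # ug/L, hypothetical taxa
mu, sigma = np.log(ec50).mean(), np.log(ec50).std(ddof=1)
hc5 = np.exp(mu + norm.ppf(0.05) * sigma)           # 5th percentile of the SSD

eto_rac = 0.4                                       # ug/L, hypothetical threshold
for af in (3.0, 6.0):                               # the two factors compared
    rac = hc5 / af
    verdict = "protective" if rac < eto_rac else "NOT protective"
    print(f"HC5/{af:.0f}: RAC = {rac:.3f} ug/L -> {verdict}")
```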

  18. Tier 2 Interventions in Positive Behavior Support: A Survey of School Implementation

    ERIC Educational Resources Information Center

    Rodriguez, Billie Jo; Loman, Sheldon L.; Borgmeier, Christopher

    2016-01-01

    As increasing numbers of schools implement Multi-Tiered Systems of Support (MTSS), schools are looking for and implementing evidence-based practices for students whose needs are not fully met by Tier 1 supports. Although there is relative consistency and clarity in what constitutes Tier 1 behavior support within MTSS, Tier 2 supports may be more…

  19. Eurogrid: a new glideinWMS based portal for CDF data analysis

    NASA Astrophysics Data System (ADS)

    Amerio, S.; Benjamin, D.; Dost, J.; Compostella, G.; Lucchesi, D.; Sfiligoi, I.

    2012-12-01

    The CDF experiment at Fermilab ended its Run-II phase in September 2011 after 11 years of operations and 10 fb-1 of collected data. The CDF computing model is based on a Central Analysis Farm (CAF) consisting of local computing and storage resources, supported by OSG and LCG resources accessed through dedicated portals. At the beginning of 2011 a new portal, Eurogrid, was developed to effectively exploit computing and disk resources in Europe: a dedicated farm and storage area at the TIER-1 CNAF computing center in Italy, and additional LCG computing resources at different TIER-2 sites in Italy, Spain, Germany and France, are accessed through a common interface. The goal of this project is to develop a portal that is easy to integrate into the existing CDF computing model, completely transparent to the user, and requiring a minimum amount of maintenance support by the CDF collaboration. In this paper we review the implementation of this new portal and its performance in the first months of usage. Eurogrid is based on the glideinWMS software, a glidein-based Workload Management System (WMS) that works on top of Condor. As the CDF CAF is based on Condor, the choice of the glideinWMS software was natural and the implementation seamless. Thanks to the pilot jobs, user-specific requirements and site resources are matched in a very efficient way, completely transparent to the users. Official since June 2011, Eurogrid effectively complements and supports CDF computing resources, offering an optimal solution for the future in terms of the manpower required for administration, support and development.

  20. Advanced technologies for scalable ATLAS conditions database access on the grid

    NASA Astrophysics Data System (ADS)

    Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.

    2010-04-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by the disk I/O throughput. An unacceptable side-effect of disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.
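
    The pilot-query idea amounts to client-side load shedding: probe the server cheaply, and only issue the expensive conditions query when the probe indicates a healthy server, backing off otherwise. The sketch below illustrates that logic only; the query API is a stand-in, not the actual ATLAS utility library.

```python
# Pilot-query load avoidance: cheap probe first, exponential backoff while
# the server looks overloaded, real query only when the probe is fast.
import random, time

def pilot_query_latency() -> float:
    """Stand-in for issuing the pilot query and timing the reply (seconds)."""
    return random.uniform(0.05, 2.0)

def fetch_conditions(max_latency=0.5, max_retries=5, backoff=2.0):
    delay = 1.0
    for _ in range(max_retries):
        if pilot_query_latency() < max_latency:
            return "conditions payload"    # server healthy: run the real query
        time.sleep(delay)                  # overloaded: wait before retrying
        delay *= backoff                   # exponential backoff smooths peaks
    raise RuntimeError("conditions DB overloaded, giving up")

print(fetch_conditions())
```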

  1. Tier Two Interventions Implemented within the Context of a Tiered Prevention Framework

    ERIC Educational Resources Information Center

    Mitchell, Barbara S.; Stormont, Melissa; Gage, Nicholas A.

    2011-01-01

    Despite a growing body of evidence demonstrating the value of Tier 1 and Tier 3 interventions, significantly less is known about Tier 2 level treatments when they are added within the context of a tiered continuum of support. The purpose of this article is to systematically review the existing research base for Tier 2 small group intervention…

  2. Health and performance monitoring of the online computer cluster of CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, G.; et al.

    2012-01-01

    The CMS experiment at the LHC features over 2,500 devices that need constant monitoring in order to ensure proper data taking. The monitoring solution has been migrated from Nagios to Icinga, with several useful plugins. The motivations behind the migration and the selection of the plugins are discussed.
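
    Part of what makes such a migration tractable is that Nagios and Icinga share the same plugin contract: the exit code conveys the state (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN) and the output line may carry performance data after a '|'. A minimal example plugin follows; the checked quantity and thresholds are arbitrary illustrations.

```python
# Minimal Nagios/Icinga-style check plugin: disk usage of '/'.
import shutil, sys

WARN, CRIT = 80.0, 90.0                    # percent-used thresholds (assumed)
usage = shutil.disk_usage("/")
pct = 100.0 * usage.used / usage.total

if pct >= CRIT:
    status, code = "CRITICAL", 2
elif pct >= WARN:
    status, code = "WARNING", 1
else:
    status, code = "OK", 0

# Single status line, optional perfdata after the pipe.
print(f"DISK {status} - {pct:.1f}% used | used_pct={pct:.1f}%;{WARN};{CRIT}")
sys.exit(code)
```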

  3. 26 CFR 1.1446-5 - Tiered partnership structures.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Tiered partnership structures. 1.1446-5 Section...-Free Covenant Bonds § 1.1446-5 Tiered partnership structures. (a) In general. The rules of this section... prescribes rules applicable to a publicly traded partnership in a tiered partnership structure. Paragraph (e...

  4. Mapping of the US Domestic Influenza Virologic Surveillance Landscape.

    PubMed

    Jester, Barbara; Schwerzmann, Joy; Mustaquim, Desiree; Aden, Tricia; Brammer, Lynnette; Humes, Rosemary; Shult, Pete; Shahangian, Shahram; Gubareva, Larisa; Xu, Xiyan; Miller, Joseph; Jernigan, Daniel

    2018-07-17

    Influenza virologic surveillance is critical each season for tracking influenza circulation, following trends in antiviral drug resistance, detecting novel influenza infections in humans, and selecting viruses for use in annual seasonal vaccine production. We developed a framework and process map for characterizing the landscape of US influenza virologic surveillance into 5 tiers of influenza testing: outpatient settings (tier 1), inpatient settings and commercial laboratories (tier 2), state public health laboratories (tier 3), National Influenza Reference Center laboratories (tier 4), and Centers for Disease Control and Prevention laboratories (tier 5). During the 2015-16 season, the numbers of influenza tests directly contributing to virologic surveillance were 804,000 in tiers 1 and 2; 78,000 in tier 3; 2,800 in tier 4; and 3,400 in tier 5. With the release of the 2017 US Pandemic Influenza Plan, the proposed framework will support public health officials in modeling, surveillance, and pandemic planning and response.

  5. Continuing Development of Alternative High-Throughput Screens to Determine Endocrine Disruption, Focusing on Androgen Receptor, Steroidogenesis, and Thyroid Pathways

    EPA Science Inventory

    The focus of this meeting is the SAP's review and comment on the Agency's proposed high-throughput computational model of androgen receptor pathway activity as an alternative to the current Tier 1 androgen receptor assay (OCSPP 890.1150: Androgen Receptor Binding Rat Prostate Cyt...

  6. 20 CFR 228.18 - Reduction for public pension.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for public pension. 228.18 Section... COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.18 Reduction for public pension. (a) The... receipt of a public pension. (b) When reduction is required. Unless the survivor annuitant meets one of...

  7. Results of computer assisted mini-incision subvastus approach for total knee arthroplasty.

    PubMed

    Turajane, Thana; Larbpaiboonpong, Viroj; Kongtharvonskul, Jatupon; Maungsiri, Samart

    2009-12-01

    Mini-incision subvastus approach is soft tissue preservation of the knee. Advantages of the mini-incision subvastus approach include reduced blood loss, reduced pain, self rehabilitation and faster recovery. However, whether the improved visualization, component alignment, and greater blood preservation achieve a better outcome and prevent early failure of the Total Knee Arthroplasty (TKA) has been debatable. Computer navigation has been introduced to improve alignment and reduce blood loss. The purpose of this study was to evaluate the short-term outcomes of the computer assisted mini-incision subvastus approach for Total Knee Arthroplasty (CMS-TKA). A prospective case series of the initial 80 patients who underwent CMS-TKA from January 2007 to October 2008 was carried out. The patients' conditions were classified into 2 groups: the simple OA knee (varus deformity less than 15 degrees, BMI less than 20%, no associated deformities) and the complex deformity (varus deformity more than 15 degrees, BMI more than 20%, associated with flexion contracture). There were 59 patients in group 1 and 21 patients in group 2. Of the 80 knees, 38 were on the left and 42 on the right. The results of CMS-TKA [the mean (range)] in group 1 : group 2 were respectively shown as the incision length [10.88 (8-13): 11.92 (10-14)], the operation time [118 (111.88-125.12): 131 (119.29-143.71) minutes], lateral releases (0 in both groups), postoperative range of motion in flexion [94.5 (90-100): 95.25 (90-105) degrees] and extension [1.75 (0-5): 1.5 (0-5) degrees], blood loss in 24 hours [489.09 (414.7-563.48): 520 (503.46-636.54) ml] and blood transfusion [1 (0-1) unit in both groups], tibiofemoral angle preoperative [varus = 4 (varus 0-10): varus = 17.14 (varus 15.7-18.5) degrees], tibiofemoral angle postoperative [valgus = 1.38 (valgus 0-4): valgus = 2.85 (valgus 2.1-3.5) degrees], tibiofemoral angle outliers (85% in both groups), and Knee Society score preoperative and postoperative [64.6 (59.8-69.4) and 93.7 (90.8-96.65): 69 (63.6-74.39) and 92.36 (88.22-96.5)]. The complications found in both groups were similar. No deep vein thrombosis, no fracture of either femur or tibia, no vascular injury, and no pin tract pain or infection was found in either group. Computer-assisted CMS-TKA is an appropriate procedure for all varus deformities, with no limitation related to associated bone loss, flexion contracture or BMI, except for fixed valgus deformity. To ensure the clinical outcomes, multiple key steps were considered as the appropriate techniques for this approach, which included accurate registration, precise bone cuts and ligament balance, and good cement technique.

  8. 77 FR 9277 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-16

    ... Change Amending the NYSE Arca Equities Fee Schedule Increasing the Indication of Interest Tier 1 Credit and the Tracking Order Tier 1 Credit for ETP Holders and Market Makers February 10, 2012. Pursuant to... Schedule'') to increase the indication of interest (``IOI'') Tier 1 credit and the Tracking Order Tier 1...

  9. 12 CFR 325.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Report.” (m) Leverage ratio means the ratio of Tier 1 capital to total assets, as calculated under this... assets may be included in calculating the bank's Tier 1 capital. (v) Tier 1 capital or core capital means... in excess of the limit set forth in § 325.5(g), minus identified losses (to the extent that Tier 1...

  10. Rankings matter: nurse graduates from higher-ranked institutions have higher productivity.

    PubMed

    Yakusheva, Olga; Weiss, Marianne

    2017-02-13

    Increasing demand for baccalaureate-prepared nurses has led to rapid growth in the number of baccalaureate-granting programs, and to concerns about educational quality and potential effects on productivity of the graduating nursing workforce. We examined the association of individual productivity of a baccalaureate-prepared nurse with the ranking of the degree-granting institution. For a sample of 691 nurses from general medical-surgical units at a large magnet urban hospital between 6/1/2011-12/31/2011, we conducted multivariate regression analysis of nurse productivity on the ranking of the degree-granting institution, adjusted for age, hospital tenure, gender, and unit-specific effects. Nurse productivity was coded as "top"/"average"/"bottom" based on a computation of individual nurse value-added to patient outcomes. Ranking of the baccalaureate-granting institution was derived from the US News and World Report Best Colleges Rankings' categorization of the nurse's institution as the "first tier" or the "second tier", with diploma or associate degree as the reference category. Relative to diploma or associate degree nurses, nurses who had attended first-tier universities had three times the odds of being in the top productivity category (OR = 3.18, p < 0.001), while second-tier education had a non-significant association with productivity (OR = 1.73, p = 0.11). Being in the bottom productivity category was not associated with having a baccalaureate degree or the quality tier. The productivity boost from a nursing baccalaureate degree depends on the quality of the educational institution. Recognizing differences in educational outcomes, initiatives to build a baccalaureate-educated nursing workforce should be accompanied by improved access to high-quality educational institutions.

  11. Neutralization tiers of HIV-1

    PubMed Central

    Montefiori, David C.; Roederer, Mario; Morris, Lynn; Seaman, Michael S.

    2018-01-01

    Purpose of review: HIV-1 isolates are often classified on the basis of neutralization 'tier' phenotype. Tier classification has important implications for the monitoring and interpretation of vaccine-elicited neutralizing antibody responses. The molecular basis that distinguishes the multiple neutralization phenotypes of HIV-1 has been unclear. We present a model based on the dynamic nature of the HIV-1 envelope glycoproteins and its impact on epitope exposure. We also describe a new approach for ranking HIV-1 vaccine-elicited neutralizing antibody responses. Recent findings: The unliganded trimeric HIV-1 envelope glycoprotein spike spontaneously transitions through at least three conformations. Neutralization tier phenotypes correspond to the frequency by which the trimer exists in a closed (tiers 2 and 3), open (tier 1A), or intermediate (tier 1B) conformation. An increasing number of epitopes become exposed as the trimer opens, making the virus more sensitive to neutralization by certain antibodies. The closed conformation is stabilized by many broadly neutralizing antibodies. Summary: The tier 2 neutralization phenotype is typical of most circulating strains and is associated with a predominantly closed Env trimer configuration that is a high priority to target with vaccines. Assays with tier 1A viruses should be interpreted with caution and with the understanding that they detect many antibody specificities that do not neutralize tier 2 viruses and do not protect against HIV-1 infection. PMID:29266013

  12. 40 CFR 1033.135 - Labeling.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Tier 1 and later locomotives. The label on the engine is replaced each time the locomotive is... 0 and Tier 1 locomotives, the label may be made up of more than one piece, as long as all pieces are... to Tier 1+ locomotives.” (4) “This locomotive conforms to U.S. EPA regulations applicable to Tier 2...

  13. How do quality information and cost affect patient choice of provider in a tiered network setting? Results from a survey.

    PubMed

    Sinaiko, Anna D

    2011-04-01

    To assess how quality information from multiple sources and financial incentives affect consumer choice of physicians in tiered physician networks. Survey of a stratified random sample of Massachusetts state employees. Respondents were assigned a hypothetical structure with differential copayments for "Tier 1" (preferred) and "Tier 2" (nonpreferred) physicians. Half of respondents were told they needed to select a cardiologist, and half were told they needed to select a dermatologist. Patients were asked whether they would choose a Tier 1 doctor, a Tier 2 doctor, or had no preference in a case where they had no further quality information, a case where a family member or friend recommended a Tier 2 doctor, and a case where their personal physician recommended a Tier 2 doctor. The effects of copayments, recommendations, physician specialty, and patient characteristics on the reported probability of selecting a Tier 1 doctor are analyzed using multinomial logit and logistic regression. Relative to a case where there is no copayment differential between tiers, copayment differences of U.S.$10-U.S.$35 increase the number of respondents indicating they would select a Tier 1 physician by 3.5-11.7 percent. Simulations suggest copayments must exceed U.S.$300 to counteract the recommendation for a lower tiered physician from friends, family, or a referring physician. Sensitivity to the copayments varied with physician specialty. Tiered provider networks with these copayment levels appear to have limited influence on physician choice when contradicted by other trusted sources. Consumers' response likely varies with physician specialty. © Health Research and Educational Trust.

  14. Cross-species extrapolation of toxicity information using the ...

    EPA Pesticide Factsheets

    In the United States, the Endocrine Disruptor Screening Program (EDSP) was established to identify chemicals that may lead to adverse effects via perturbation of the endocrine system (i.e., estrogen, androgen, and thyroid hormone systems). In the mid-1990s the EDSP adopted a two-tiered approach for screening chemicals that applied standardized in vitro and in vivo toxicity tests. The Tier 1 screening assays were designed to identify substances that have the potential of interacting with the endocrine system, and Tier 2 testing was developed to identify adverse effects caused by the chemical, with documentation of dose-response relationships. While this tiered approach was effective in identifying possible endocrine disrupting chemicals, the cost and time to screen a single chemical was significant. Therefore, in 2012 the EDSP proposed a transition to make greater use of computational approaches (in silico) and high-throughput screening (HTS; in vitro) assays to more rapidly and cost-efficiently screen chemicals for endocrine activity. This transition from resource intensive, primarily in vivo, screening methods to more pathway-based approaches aligns with the simultaneously occurring transformation in toxicity testing termed “Toxicity Testing in the 21st Century”, which shifts the focus to the disturbance of the biological pathway predictive of the observable toxic effects. Examples of such screening tools include the US Environmental Protection Agency’s

  15. 40 CFR Appendix I to Part 1042 - Summary of Previous Emission Standards

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...: (a) Engines below 37 kW. Tier 1 and Tier 2 standards for engines below 37 kW apply as specified in 40... Engines Below 37 kW (g/kW-hr), by rated power, tier, model year, NMHC + NOX, CO and PM: kW<8 — Tier 1 (2000): 10.5, 8.0, 1.0; Tier 2 (2005): 7.5, 8.0, 0.80. 8≤kW<19 — Tier 1 (2000): 9.5, 6.6, 0.80; Tier 2 (2005): 7.5, 6.6, 0.80. 19≤kW<37 — Tier...

  16. Integration of end-user Cloud storage for CMS analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storages in the Grid, which is implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  17. Integration of end-user Cloud storage for CMS analysis

    DOE PAGES

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...

    2017-05-19

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storages in the Grid, which is implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.
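
    For context, submitting a transfer through FTS3 programmatically looks roughly like the sketch below, based on the "easy" Python bindings shipped with the fts-rest client; the endpoint and the source/destination URLs are placeholders, and the API details may differ between client versions.

```python
# Sketch of a Grid-to-Cloud file transfer submitted via FTS3's Python
# bindings (fts-rest "easy" interface); all URLs are invented placeholders.
import fts3.rest.client.easy as fts3

endpoint = "https://fts3-example.cern.ch:8446"              # placeholder server
source = "srm://tier2-se.example.org/store/user/file.root"
destination = "https://cernbox-example.cern.ch/eos/user/a/alice/file.root"

context = fts3.Context(endpoint)          # picks up the user's X.509 proxy
transfer = fts3.new_transfer(source, destination)
job = fts3.new_job([transfer], verify_checksum=True, retry=3)
job_id = fts3.submit(context, job)
print("submitted FTS3 job", job_id)
```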

  18. Regulatory Compliance in Multi-Tier Supplier Networks

    NASA Technical Reports Server (NTRS)

    Goossen, Emray R.; Buster, Duke A.

    2014-01-01

    Over the years, avionics systems have increased in complexity to the point where 1st tier suppliers to an aircraft OEM find it financially beneficial to outsource designs of subsystems to 2nd tier and at times to 3rd tier suppliers. Combined with challenging schedule and budgetary pressures, the environment in which safety-critical systems are being developed introduces new hurdles for regulatory agencies and industry. This new environment of both complex systems and tiered development has raised concerns in the ability of the designers to ensure safety considerations are fully addressed throughout the tier levels. This has also raised questions about the sufficiency of current regulatory guidance to ensure: proper flow down of safety awareness, avionics application understanding at the lower tiers, OEM and 1st tier oversight practices, and capabilities of lower tier suppliers. Therefore, NASA established a research project to address Regulatory Compliance in a Multi-tier Supplier Network. This research was divided into three major study efforts: 1. Describe Modern Multi-tier Avionics Development 2. Identify Current Issues in Achieving Safety and Regulatory Compliance 3. Short-term/Long-term Recommendations Toward Higher Assurance Confidence This report presents our findings of the risks, weaknesses, and our recommendations. It also includes a collection of industry-identified risks, an assessment of guideline weaknesses related to multi-tier development of complex avionics systems, and a postulation of potential modifications to guidelines to close the identified risks and weaknesses.

  19. Estimating implementation and operational costs of an integrated tiered CD4 service including laboratory and point of care testing in a remote health district in South Africa.

    PubMed

    Cassim, Naseem; Coetzee, Lindi M; Schnippel, Kathryn; Glencross, Deborah K

    2014-01-01

    An integrated tiered service delivery model (ITSDM) has been proposed to provide 'full-coverage' of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care service (POC) in a single site; Tier-2/POC-hub processing ∼ 30-40 samples from 8-10 health-clinics; Tier-3/Community laboratories servicing ∼ 50 health-clinics, processing < 150 samples/day; high-volume centralized laboratories (Tier-4 and Tier-5) processing < 300 or > 600 samples/day and serving > 100 or > 200 health-clinics, respectively. The objective of this study was to establish costs of existing and ITSDM-tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate-Data-Warehouse for the period April-2012 to March-2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagents and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37 respectively), but with related increased LTR-TAT of > 24-48 hours. Full service coverage with TAT < 6-hours could be achieved with placement of twenty-seven Tier-1/POC or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88 respectively. A single district Tier-3 laboratory also ensured 'full service coverage' and < 24 hour LTR-TAT for the district at $7.42 per-test. Implementing a single Tier-3/community laboratory to extend and improve delivery of services in Pixley-ka-Seme, with an estimated local ∼ 12-24-hour LTR-TAT, is ∼ $2 more than existing referred services per-test, but 2-4 fold cheaper than implementing eight Tier-2/POC-hubs or providing twenty-seven Tier-1/POCT CD4 services.
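
    The economics driving these results is that fixed site costs are amortized over the district's annual test volume, which strongly favours fewer, larger sites. A toy recomputation illustrating this (all cost inputs are invented placeholders, not the study's data) follows:

```python
# Toy cost-per-result comparison: annual fixed costs per site spread over
# district test volume, plus a per-test consumable cost.
annual_tests = 60_000                    # district CD4 volume (assumed)

options = {
    # name: (sites, fixed equipment+staff cost per site/yr, cost per test)
    "27 x Tier-1 POC":    (27, 60_000, 6.0),
    "8 x Tier-2 POC-hub": (8,  70_000, 6.0),
    "1 x Tier-3 lab":     (1, 250_000, 3.0),
}

for name, (sites, fixed_per_site, per_test) in options.items():
    cost_per_result = (sites * fixed_per_site) / annual_tests + per_test
    print(f"{name:20s} ~ ${cost_per_result:,.2f} per result")
```

    Against this, the decentralized options buy shorter turn-around time, which is the trade-off the paper quantifies.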

  20. 40 CFR 86.1711-99 - Limitations on sale of Tier 1 vehicles and TLEVs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Limitations on sale of Tier 1 vehicles... Vehicles and Light-Duty Trucks § 86.1711-99 Limitations on sale of Tier 1 vehicles and TLEVs. (a) In the 2001 and subsequent model years, manufacturers may sell Tier 1 vehicles and TLEVs in the NTR only if...

  1. 40 CFR 86.1711-99 - Limitations on sale of Tier 1 vehicles and TLEVs.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Limitations on sale of Tier 1 vehicles... Vehicles and Light-Duty Trucks § 86.1711-99 Limitations on sale of Tier 1 vehicles and TLEVs. (a) In the 2001 and subsequent model years, manufacturers may sell Tier 1 vehicles and TLEVs in the NTR only if...

  2. Are computational models of any use to psychiatry?

    PubMed

    Huys, Quentin J M; Moutoussis, Michael; Williams, Jonathan

    2011-08-01

    Mathematically rigorous descriptions of key hypotheses and theories are becoming more common in neuroscience and are beginning to be applied to psychiatry. In this article two fictional characters, Dr. Strong and Mr. Micawber, debate the use of such computational models (CMs) in psychiatry. We present four fundamental challenges to the use of CMs in psychiatry: (a) the applicability of mathematical approaches to core concepts in psychiatry such as subjective experiences, conflict and suffering; (b) whether psychiatry is mature enough to allow informative modelling; (c) whether theoretical techniques are powerful enough to approach psychiatric problems; and (d) the issue of communicating clinical concepts to theoreticians and vice versa. We argue that CMs have yet to influence psychiatric practice, but that they help psychiatric research in two fundamental ways: (a) to build better theories integrating psychiatry with neuroscience; and (b) to enforce explicit, global and efficient testing of hypotheses through more powerful analytical methods. CMs allow the complexity of a hypothesis to be rigorously weighed against the complexity of the data. The paper concludes with a discussion of the path ahead. It points to stumbling blocks, like the poor communication between theoretical and medical communities. But it also identifies areas in which the contributions of CMs will likely be pivotal, like an understanding of social influences in psychiatry, and of the co-morbidity structure of psychiatric diseases. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. 29 CFR Appendix D to Part 510 - Municipalities Eligible for Minimum Wage Phase-In

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... other employees are subject to Tier 3. Municipalities which did not submit data are subject to Tier 1... June 1, 1990. If upon review it is determined that the municipality should have been subject to Tier 1... Minimum Wage Phase-In This appendix contains a listing of the municipalities in Puerto Rico and the tier...

  4. Development of a tier 1 R5 clade C simian-human immunodeficiency virus as a tool to test neutralizing antibody-based immunoprophylaxis.

    PubMed

    Siddappa, Nagadenahalli B; Hemashettar, Girish; Wong, Yin Ling; Lakhashe, Samir; Rasmussen, Robert A; Watkins, Jennifer D; Novembre, Francis J; Villinger, François; Else, James G; Montefiori, David C; Ruprecht, Ruth M

    2011-04-01

    While some recently transmitted HIV clade C (HIV-C) strains exhibited tier 1 neutralization phenotypes, most were tier 2 strains (J Virol 2010; 84:1439). Because induction of neutralizing antibodies (nAbs) through vaccination against tier 2 viruses has proven difficult, we have generated a tier 1, clade C simian-human immunodeficiency virus (SHIV-C) to permit efficacy testing of candidate AIDS vaccines against tier 1 viruses. SHIV-1157ipEL was created by swapping env of a late-stage virus with that of a tier 1, early form. After adaptation to rhesus macaques (RM), passaged SHIV-1157ipEL-p replicated vigorously in vitro and in vivo while maintaining R5 tropism. The virus was reproducibly transmissible intrarectally. Phylogenetically, SHIV-1157ipEL-p Env clustered with HIV-C sequences. All RM chronically infected with SHIV-1157ipEL-p developed high nAb titers against autologous as well as heterologous tier 1 strains. SHIV-1157ipEL-p was reproducibly transmitted in RM, induced cross-clade nAbs, and represents a tool to evaluate anti-HIV-C nAb responses in primates. © 2010 John Wiley & Sons A/S.

  5. 20 CFR 228.20 - Reduction for an employee annuity.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for an employee annuity. 228.20... COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.20 Reduction for an employee annuity. (a) General. If an individual is entitled to an annuity as a survivor, and is also entitled to an employee...

  6. To Wait in Tier 1 or Intervene Immediately: A Randomized Experiment Examining First Grade Response to Intervention (RTI) in Reading

    ERIC Educational Resources Information Center

    Al Otaiba, Stephanie; Connor, Carol M.; Folsom, Jessica S.; Wanzek, Jeanne; Greulich, Luana; Schatschneider, Christopher; Wagner, Richard K.

    2015-01-01

    This randomized control study compares the efficacy of two response-to-intervention (RTI) models: (1) Dynamic RTI, which immediately refers grade 1 students with the weakest skills to the most intensive intervention supports (Tier 2 or Tier 3); and (2) Typical RTI, which starts all students in Tier 1 and after 8 weeks, decides whether students who…

  7. Medicare program; revisions to the Medicare Advantage and Part D prescription drug contract determinations, appeals, and intermediate sanctions processes. Final rule with comment period.

    PubMed

    2007-12-05

    This rule with comment period finalizes the Medicare program provisions relating to contract determinations involving Medicare Advantage (MA) organizations and Medicare Part D prescription drug plan sponsors, including eliminating the reconsideration process for review of contract determinations, revising the provisions related to appeals of contract determinations, and clarifying the process for MA organizations and Part D plan sponsors to complete corrective action plans. In this final rule with comment period, we also clarify the intermediate sanction and civil money penalty (CMP) provisions that apply to MA organizations and Medicare Part D prescription drug plan sponsors, modify elements of their compliance plans, retain voluntary self-reporting for Part D sponsors and implement a voluntary self-reporting recommendation for MA organizations, and revise provisions to ensure HHS has access to the books and records of MA organizations and Part D plan sponsors' first tier, downstream, and related entities. Although we have decided not to finalize the mandatory self-reporting provisions that we proposed, CMS remains committed to adopting a mandatory self-reporting requirement. To that end, we are requesting comments that will assist CMS in crafting a future proposed regulation for a mandatory self-reporting requirement.

  8. CMS Data Processing Workflows during an Extended Cosmic Ray Run

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2009-11-01

    The CMS Collaboration conducted a month-long data taking exercise, the Cosmic Run At Four Tesla, during October-November 2008, with the goal of commissioning the experiment for extended operation. With all installed detector systems participating, CMS recorded 270 million cosmic ray events with the solenoid at a magnetic field strength of 3.8 T. This paper describes the data flow from the detector through the various online and offline computing systems, as well as the workflows used for recording the data, for aligning and calibrating the detector, and for analysis of the data.

  9. Multi-Tiered System of Support: Best Differentiation Practices for English Language Learners in Tier 1

    ERIC Educational Resources Information Center

    Izaguirre, Cecilia

    2017-01-01

    Purpose: This qualitative case study explored the best practices of differentiation of Tier 1 instruction within a multi-tiered system of support for English Language Learners who were predominately Spanish speaking. Theoretical Framework: The zone of proximal development theory, cognitive theory, and the affective filter hypothesis guided this…

  10. Tier 1 and Tier 2 Early Intervention for Handwriting and Composing

    ERIC Educational Resources Information Center

    Berninger, Virginia W.; Rutberg, Judith E.; Abbott, Robert D.; Garcia, Noelia; Anderson-Youngstrom, Marci; Brooks, Allison; Fulton, Cynthia

    2006-01-01

    Three studies evaluated Tier 1 early intervention for handwriting at a critical period for literacy development in first grade and one study evaluated Tier 2 early intervention in the critical period between third and fourth grades for composing on high stakes tests. The results contribute to knowledge of research-supported handwriting and…

  11. 45 CFR 150.203 - Circumstances requiring CMS enforcement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Circumstances requiring CMS enforcement. 150.203... CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS Enforcement Processes for... requiring CMS enforcement. CMS enforces HIPAA requirements to the extent warranted (as determined by CMS) in...

  12. A self-configuring control system for storage and computing departments at INFN-CNAF Tier1

    NASA Astrophysics Data System (ADS)

    Gregori, Daniele; Dal Pra, Stefano; Ricci, Pier Paolo; Pezzi, Michele; Prosperini, Andrea; Sapunenko, Vladimir

    2015-05-01

    The storage and farming departments at the INFN-CNAF Tier1 [1] manage thousands of computing nodes and several hundred servers that provide access to the disk and tape storage. In particular, the storage servers must provide the following services: efficient access to about 15 petabytes of disk space across different GPFS file-system clusters, data transfers between LHC Tier sites (Tier0, Tier1 and Tier2) via the GridFTP cluster and the Xrootd protocol, and finally read and write operations on the magnetic tape backend. One essential requirement for a reliable service is a control system that warns when problems arise and can perform automatic recovery operations in case of service interruptions or major failures. Moreover, configurations change during daily operations: for example, the roles of GPFS cluster nodes can be modified, so obsolete nodes must be removed from the production control system and new servers added to those already present. Managing all these changes manually is difficult when they are numerous; it can also take a long time and is prone to human error and misconfiguration. For these reasons we have developed a control system able to reconfigure itself whenever a change occurs. This system has been in production for about a year at the INFN-CNAF Tier1 with good results and hardly any major drawbacks. There are three key elements in this system. The first is a configuration-management service (e.g. Quattor or Puppet) for the server machines to be monitored; this service must ensure the presence of the appropriate sensors and custom check scripts on the nodes, and must be able to install and update software packages on them. The second is a database containing information, in a suitable format, on all the machines in production, providing for each of them the principal information such as the hardware type, the network switch to which the machine is connected, whether the machine is physical or virtual, the hypervisor to which it belongs, and so on. The third is the monitoring software itself (Nagios, in our implementation), capable of assessing the status of servers and services, attempting to restore a working state, restarting or inhibiting services, and sending suitable alarm messages to the site administrators. The integration of these three elements is achieved by scripts and custom code that allow the system to self-configure according to a decisional logic; the combination of all these components is discussed in depth in this paper.
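
    The self-configuration step boils down to regenerating the monitoring configuration from the machine database. An illustrative generator in that spirit (the inventory rows and file name are placeholders; the `define host` stanza is standard Nagios configuration syntax) is sketched below:

```python
# Emit Nagios host definitions from a machine-inventory database, so that
# adding or retiring a server in the database updates the monitoring on
# the next regeneration run.
inventory = [
    {"host": "gpfs-srv-01", "address": "10.0.0.11", "role": "gpfs"},
    {"host": "gridftp-02",  "address": "10.0.0.22", "role": "gridftp"},
    {"host": "tape-drv-03", "address": "10.0.0.33", "role": "tape"},
]

def nagios_host(entry: dict) -> str:
    return (
        "define host {\n"
        "    use        generic-host\n"
        f"    host_name  {entry['host']}\n"
        f"    address    {entry['address']}\n"
        f"    hostgroups {entry['role']}\n"
        "}\n"
    )

with open("hosts_autogen.cfg", "w") as cfg:
    cfg.writelines(nagios_host(e) for e in inventory)  # rerun on every change
```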

  13. Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    NASA Astrophysics Data System (ADS)

    Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Prelz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel

    2015-12-01

    We present the approach of the University of Milan Physics Department and the local INFN unit to allow and encourage the sharing of computing, storage and networking resources among different research areas (the largest resources being those composing the Milan WLCG Tier-2 centre, tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options is available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also fits well with the objectives listed in the European Horizon 2020 framework for research and development.

  14. The advantage of calculating emission reduction with local emission factor in South Sumatera region

    NASA Astrophysics Data System (ADS)

    Buchari, Erika

    2017-11-01

    Greenhouse gases (GHG) have different Global Warming Potentials and are usually expressed in CO2 equivalent. Germany succeeded in reducing CO2 emissions in the 1990s, while Japan has increased the load factor of its public transport since 2001. Indonesia's National Medium Term Development Plan 2015-2019 set a target of minimum 26% and maximum 41% national emission reduction by 2019. The Intergovernmental Panel on Climate Change (IPCC) defines three levels of accuracy in counting GHG emissions: tier 1, tier 2, and tier 3. In tier 1, the calculation is based on fuel used and average (default) emission factors obtained from statistical data. In tier 2, the calculation is based on fuel used and local emission factors. Tier 3 is more accurate than tiers 1 and 2, with the calculation based on fuel used obtained from modelling or direct measurement. This paper aims to evaluate the calculations with tier 2 and tier 3 in the South Sumatera region. The 2012 Regional Action Plan for Greenhouse Gases of South Sumatera projects about 6,569,000 tons per year for 2020 without mitigation, whereas the tier 3 calculation gives about 6,229,858 tons per year. The tier 3 calculation was found to be more accurate in terms of the fuel used by different vehicle types, so that mitigation actions can be planned more realistically.
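
    The tier 1 versus tier 2 distinction reduces to which emission factor multiplies the fuel consumption. A minimal sketch of that arithmetic follows; the IPCC 2006 default for motor gasoline is about 69.3 t CO2/TJ, while the "local" factor and the fuel volume below are invented placeholders:

    def co2_emissions_tonnes(fuel_tj: float, ef_t_per_tj: float) -> float:
        """CO2 emissions (t) from fuel use (TJ) and an emission factor (t CO2/TJ)."""
        return fuel_tj * ef_t_per_tj

    fuel_used_tj = 1000.0   # hypothetical regional gasoline consumption
    ef_default = 69.3       # IPCC 2006 default for motor gasoline (tier 1)
    ef_local = 67.0         # hypothetical locally measured factor (tier 2)

    print("tier 1:", co2_emissions_tonnes(fuel_used_tj, ef_default), "t CO2")
    print("tier 2:", co2_emissions_tonnes(fuel_used_tj, ef_local), "t CO2")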

  15. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  16. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  17. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    PubMed Central

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

    Background An integrated tiered service delivery model (ITSDM) has been proposed to provide 'full-coverage' of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and the number of referring health facilities. These include: (1) Tier-1/decentralized point-of-care (POC) services at a single site; (2) Tier-2/POC-hubs processing 30–40 samples from 8–10 health clinics; (3) Tier-3/community laboratories servicing ∼50 health clinics and processing up to 150 samples/day; and high-volume centralized laboratories (Tier-4 and Tier-5) processing up to 300 or more than 600 samples/day and serving >100 or >200 health clinics, respectively. The objective of this study was to establish the costs of existing services and of ITSDM Tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to the locations of all referring clinics and the related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate Data Warehouse for the period April 2012 to March 2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagent and test-consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37, respectively), but with a related increased LTR-TAT of >24–48 hours. Full service coverage with a TAT <6 hours could be achieved by placing twenty-seven Tier-1/POC sites or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88, respectively. A single district Tier-3 laboratory also ensured full service coverage and a <24-hour LTR-TAT for the district at $7.42 per test. Conclusion Implementing a single Tier-3/community laboratory to extend and improve delivery of services in Pixley-ka-Seme, with an estimated local ∼12–24-hour LTR-TAT, costs ∼$2 more per test than existing referred services, but is 2–4 fold cheaper than implementing eight Tier-2/POC-hubs or twenty-seven Tier-1/POC CD4 services. PMID:25517412
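
    The cost-per-result and one-way sensitivity calculations follow a simple pattern: annualized fixed costs spread over volume, plus per-test costs, with each input varied in turn. A hedged sketch with invented cost inputs, not the study's figures:

    def cost_per_result(equipment: float, staffing: float,
                        reagent_per_test: float, consumable_per_test: float,
                        annual_tests: int) -> float:
        """Annualized fixed costs spread over volume, plus per-test costs."""
        fixed = (equipment + staffing) / annual_tests
        return fixed + reagent_per_test + consumable_per_test

    base = dict(equipment=20000.0, staffing=30000.0,
                reagent_per_test=4.0, consumable_per_test=1.0,
                annual_tests=10000)
    print("base case: $%.2f" % cost_per_result(**base))

    # One-way sensitivity: vary one input +/-20% while holding the rest fixed.
    for key in ("reagent_per_test", "annual_tests"):
        for factor in (0.8, 1.2):
            scenario = dict(base)
            scenario[key] = type(base[key])(base[key] * factor)
            print(f"{key} x{factor}: $%.2f" % cost_per_result(**scenario))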

  18. 50 CFR 660.211 - Fixed gear fishery-definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... vessel registered to a limited entry fixed gear permit(s) with a Tier 1, Tier 2, and/or Tier 3... fishery or sablefish tier limit fishery means, for the limited entry fixed gear sablefish fishery north of... tier limit and when they are not eligible to fish in the DTL fishery. Sablefish primary season means...

  19. Tier-specific evolution of match performance characteristics in the English Premier League: it's getting tougher at the top.

    PubMed

    Bradley, Paul S; Archer, David T; Hogg, Bob; Schuth, Gabor; Bush, Michael; Carling, Chris; Barnes, Chris

    2016-01-01

    This study investigated the evolution of physical and technical performances in the English Premier League (EPL), with special reference to league ranking. Match performance observations (n = 14,700) were collected using a multiple-camera computerised tracking system across seven consecutive EPL seasons (2006-07 to 2012-13). Final league rankings were classified into Tiers: (A) 1st-4th ranking (n = 2519), (B) 5th-8th ranking (n = 2965), (C) 9th-14th ranking (n = 4448) and (D) 15th-20th ranking (n = 4768). Teams in Tier B demonstrated moderate increases in high-intensity running distance while in ball possession from the 2006-07 to 2012-13 season (P < 0.001; effect size [ES]: 0.68), with Tiers A, C and D producing less pronounced increases across the same period (P < 0.005; ES: 0.26, 0.41 and 0.33, respectively). Large increases in sprint distance were observed from the 2006-07 to 2012-13 season for Tier B (P < 0.001; ES: 1.21), while only moderate increases were evident for Tiers A, C and D (P < 0.001; ES: 0.75, 0.97 and 0.84, respectively). Tier B demonstrated large increases in the number of passes performed and received in 2012-13 compared to 2006-07 (P < 0.001; ES: 1.32-1.53) with small-to-moderate increases in Tier A (P < 0.001; ES: 0.30-0.38), Tier C (P < 0.001; ES: 0.46-0.54) and Tier D (P < 0.001; ES: 0.69-0.87). The demarcation line between 4th (bottom of Tier A) and 5th ranking (top of Tier B) in the 2006-07 season was 8 points, but this decreased to just a single point in the 2012-13 season. The data demonstrate that physical and technical performances have evolved more in Tier B than any other Tier in the EPL and could indicate a narrowing of the performance gap between the top two Tiers.

  20. 76 FR 3184 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-19

    ...); (ii) establish a VIX Tier Appointment; (iii) amend the monthly fee for Floor Broker Trading Permits... demutualization, CBOE amended its Fees Schedule to establish Trading Permit, tier appointment and bandwidth packet... permit ($) 1 permit 10 permits 6,000 Tier 1 11 permits 20 permits 4,800 Tier 2 21 or more permits...

  1. Muscle Velocity and Inertial Force from Phase Contrast Magnetic Resonance Imaging

    PubMed Central

    Wentland, Andrew L.; McWalter, Emily J.; Pal, Saikat; Delp, Scott L.; Gold, Garry E.

    2014-01-01

    Purpose To evaluate velocity waveforms in muscle and to create a tool and algorithm for computing and analyzing muscle inertial forces derived from 2D phase contrast (PC) MRI. Materials and Methods PC MRI was performed in the forearm of four healthy volunteers during 1 Hz cycles of wrist flexion-extension as well as in the lower leg of six healthy volunteers during 1 Hz cycles of plantarflexion-dorsiflexion. Inertial forces (F) were derived via the equation F = ma. The mass, m, was derived by multiplying voxel volume by voxel-by-voxel estimates of density via fat-water separation techniques. Acceleration, a, was obtained via the derivative of the PC MRI velocity waveform. Results Mean velocities in the flexors of the forearm and lower leg were 1.94 ± 0.97 cm/s and 5.57 ± 2.72 cm/s, respectively, as averaged across all subjects; the inertial forces in the flexors of the forearm and lower leg were 1.9 × 10⁻³ ± 1.3 × 10⁻³ N and 1.1 × 10⁻² ± 6.1 × 10⁻³ N, respectively, as averaged across all subjects. Conclusion PC MRI provided a promising means of computing muscle velocities and inertial forces, providing the first method for quantifying inertial forces. PMID:25425185
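
    The F = ma pipeline lends itself to a compact illustration. The sketch below uses a synthetic sinusoidal velocity waveform and assumed voxel size and muscle density in place of PC MRI data:

    import numpy as np

    dt = 0.02                                  # s, temporal resolution (assumed)
    t = np.arange(0.0, 1.0, dt)                # one 1 Hz flexion-extension cycle
    velocity = 0.02 * np.sin(2 * np.pi * t)    # m/s, synthetic muscle velocity

    acceleration = np.gradient(velocity, dt)   # m/s^2, derivative of the waveform

    voxel_volume = (1.0e-3) ** 3               # m^3, e.g. 1 mm isotropic voxel
    density = 1060.0                           # kg/m^3, typical muscle density
    mass = voxel_volume * density              # kg per voxel

    inertial_force = mass * acceleration       # N, per voxel over the cycle
    print("peak |F| per voxel: %.2e N" % np.abs(inertial_force).max())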

  2. Operationally Responsive Space: Creating Responsive Space for America

    DTIC Science & Technology

    2008-06-20

    programs. Items identified for improvement include: 1) Fractured management and accounting, 2) Satellite availability, 3) Facilities (Processing...enveloped by our three output tiers. The three tiers: • Tier 1 is an immediate response taking minutes to days. • Tier 2 is a mid-term response taking...needs to improve. 2. Develop a philosophy. 3. Set a direction with specific goals (Fuchs 1-7). The Department of Space should follow these

  3. 42 CFR 447.514 - Upper limits for multiple source drugs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... State agency plus an amount established by CMS that is equal to 250 percent of the AMP (as computed... will consider the following additional criteria: (1) The AMP of a terminated NDC will not be used to... section, the AMP of the lowest priced therapeutically and pharmaceutically equivalent drug that is not...

  4. Theory of compressive modeling and simulation

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith

    2013-05-01

    Modeling and Simulation (M&S) has been evolving along two general directions: (i) a data-rich approach that suffers the curse of dimensionality and (ii) an equation-rich approach limited by computing power and turnaround time. We suggest a third approach, which we call (iii) compressive M&S (CM&S), because the basic Minimum Free-Helmholtz Energy (MFE) underlying CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural Network (LCNN) algorithm. MFE-based CM&S can generalize LCNN to second order as a nonlinear augmented LCNN. For example, during sunset we can avoid the reddish bias of sunlight illumination caused by long-range Rayleigh scattering over the horizon: with CM&S we can use a night-vision camera instead of a day camera. We decomposed the long-wave infrared (LWIR) band with a filter into two vector components (8~10μm and 10~12μm) and used LCNN to find, pixel by pixel, the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then, according to the de-mixed source map, we consistently up-shifted to a sub-micron RGB color image. Moreover, night-vision imaging can also be down-shifted to Passive Millimeter Wave (PMMW) imaging, which suffers less blur from scattering by dusty smoke and enjoys the apparent smoothness of the surface reflectivity of man-made objects below the Rayleigh resolution. One loses three orders of magnitude in spatial Rayleigh resolution, but gains two orders of magnitude in reflectivity and another two orders in propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, it can reduce unnecessary measurements and their associated cost and computing, in the super-saving CS sense of measuring one and getting its neighborhood free.
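
    The CS recovery problem that CM&S generalizes can be illustrated with a standard sparse-recovery sketch. The following is generic iterative soft-thresholding (ISTA) for y = Ax with sparse x, not the authors' LCNN or MFE algorithm:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 64, 256, 5                 # measurements, signal length, sparsity
    x_true = np.zeros(m)
    x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((n, m)) / np.sqrt(n)
    y = A @ x_true

    def ista(A, y, lam=0.01, iters=500):
        """Iterative soft-thresholding for min ||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = x - (A.T @ (A @ x - y)) / L          # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
        return x

    x_hat = ista(A, y)
    print("recovery error: %.3e" % np.linalg.norm(x_hat - x_true))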

  5. HIV Neutralizing Antibodies Induced by Native-like Envelope Trimers

    PubMed Central

    Sanders, Rogier W.; van Gils, Marit J.; Derking, Ronald; Sok, Devin; Ketas, Thomas J.; Burger, Judith A.; Ozorowski, Gabriel; Cupo, Albert; Simonich, Cassandra; Goo, Leslie; Arendt, Heather; Kim, Helen J.; Lee, Jeong Hyun; Pugach, Pavel; Williams, Melissa; Debnath, Gargi; Moldt, Brian; van Breemen, Mariëlle J.; Isik, Gözde; Medina-Ramírez, Max; Back, Jaap Willem; Koff, Wayne; Julien, Jean-Philippe; Rakasz, Eva G.; Seaman, Michael S.; Guttman, Miklos; Lee, Kelly K.; Klasse, Per Johan; LaBranche, Celia; Schief, William R.; Wilson, Ian A.; Overbaugh, Julie; Burton, Dennis R.; Ward, Andrew B.; Montefiori, David C.; Dean, Hansi; Moore, John P.

    2015-01-01

    A challenge for HIV-1 immunogen design is inducing neutralizing antibodies (NAbs) against neutralization-resistant (Tier-2) viruses that dominate human transmissions. We show that a soluble recombinant HIV-1 envelope glycoprotein trimer that adopts a native conformation (BG505 SOSIP.664) induced NAbs potently against the sequence-matched Tier-2 virus in rabbits and similar but weaker responses in macaques. The trimer also consistently induced cross-reactive NAbs against more sensitive (Tier-1) viruses. Tier-2 NAbs recognized conformational epitopes that differed between animals and in some cases overlapped with those recognized by broadly neutralizing antibodies (bNAbs), whereas Tier-1 responses targeted linear V3 epitopes. A second trimer, B41 SOSIP.664, also induced a strong autologous Tier-2 NAb response in rabbits. Thus, native-like trimers represent a promising starting point for developing HIV-1 vaccines aimed at inducing bNAbs. PMID:26089353

  6. Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Jayatilaka, B.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mohapatra, A.; Marra Da Silva, J.; Mason, D.; Perez-Calero Yzquierdo, A.; Piperov, S.; Tiradani, A.; Verguilov, V.; CMS Collaboration

    2017-10-01

    The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users of resources beyond those pledged to the WLCG at CMS sites. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.
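
    A pool of this kind can be inspected with the HTCondor Python bindings. The sketch below tallies slots per site from startd ads; the collector address and the GLIDEIN_CMSSite attribute are assumptions about the deployment, not details given in the paper:

    import collections
    import htcondor

    # Assumed collector address for a glideinWMS pool; replace as appropriate.
    coll = htcondor.Collector("cmsgwms-collector-global.cern.ch")
    ads = coll.query(
        htcondor.AdTypes.Startd,
        projection=["Name", "State", "GLIDEIN_CMSSite"],
    )

    slots_per_site = collections.Counter(
        ad.get("GLIDEIN_CMSSite", "unknown") for ad in ads
    )
    for site, n in slots_per_site.most_common(10):
        print(f"{site:30s} {n:6d} slots")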

  7. 50 CFR 86.53 - What are funding tiers?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false What are funding tiers? 86.53 Section 86... project merits. (d) We describe the two tiers as follows: (1) Tier One Projects. (i) You may submit a... $100,000 of Federal funds for any given fiscal year. (ii) Tier One projects must meet the eligibility...

  8. Effects of a Tier 3 Self-Management Intervention Implemented with and without Treatment Integrity

    ERIC Educational Resources Information Center

    Lower, Ashley; Young, K. Richard; Christensen, Lynnette; Caldarella, Paul; Williams, Leslie; Wills, Howard

    2016-01-01

    This study investigated the effects of a Tier 3 peer-matching self-management intervention on two elementary school students who had previously been less responsive to Tier 1 and Tier 2 interventions. The Tier 3 self-management intervention, which was implemented in the general education classrooms, included daily electronic communication between…

  9. Role of ion-pair states in the predissociation dynamics of Rydberg states of molecular iodine.

    PubMed

    von Vangerow, J; Bogomolov, A S; Dozmorov, N V; Schomas, D; Stienkemeier, F; Baklanov, A V; Mudrich, M

    2016-07-28

    Using femtosecond pump-probe ion imaging spectroscopy, we establish the key role of I⁺ + I⁻ ion-pair (IP) states in the predissociation dynamics of molecular iodine I₂ excited to Rydberg states. Two-photon excitation of Rydberg states lying above the lowest IP state dissociation threshold (1st tier) is found to be followed by direct parallel transitions into IP states of the 1st tier, asymptotically correlating to a pair of iodine ions in their lowest states I⁺(³P₂) + I⁻(¹S₀); of the 2nd tier, correlating to I⁺(³P₀) + I⁻(¹S₀); and of the 3rd tier, correlating to I⁺(¹D₂) + I⁻(¹S₀). Predissociation via the 1st tier proceeds presumably with a delay of 1.6-1.7 ps, which is close to the vibrational period in the 3rd tier state (a 3rd-tier-mediated process). The 2nd tier IP state is concluded to be the main precursor for predissociation via lower-lying Rydberg states, proceeding with a characteristic time of 7-8 ps and giving rise to Rydberg atoms I(5s²5p⁴6s¹). The channel generating I(²P₃/₂) + I(²P₁/₂) atoms with total kinetic energy corresponding to one-photon excitation is found to proceed via a pump-dump mechanism, with a dramatic change in the angular anisotropy of this channel compared with earlier nanosecond experiments.

  10. 26 CFR 1.1248-7 - Taxpayer to establish earnings and profits and foreign taxes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... shall also show for the first tier corporation, and for each lower tier corporation as to which...) of § 1.1248-2, and (iv) If the amount of earnings and profits of a lower tier corporation... lower tier corporation which the taxpayer owns within the meaning of section 958(a)(2)(b) the total...

  11. Circulation patterns in the deep Subtropical Northeast Atlantic with ARGO data

    NASA Astrophysics Data System (ADS)

    Calheiros, Tomas; Bashmachnikov, Igor

    2014-05-01

    In this work we study the dominant circulation patterns in the Subtropical Northeast Atlantic [25-45° N, 5-35° W] using ARGO data. The data were obtained from the Coriolis operational data center (ftp://ftp.ifremer.fr) for the years 1999-2013. During this period, 376 floats were available in the study area, totalling 15,062 float-months. The floats were launched at depths between 300 and 2000 m, but most were concentrated at 1000 m (2000 float-months) and 1500 m (3400 float-months). There were also about 1000 float-months in the upper 400-m layer, but their number and distribution did not allow analysis of the mean currents over the study region. For each float, the Lagrangian current velocity was computed as the difference between the position where the buoy started sinking to the reference depth and the subsequent position where the float surfaced, divided by the respective time interval. This reduces the noise associated with sea-surface drift of the buoys during the data-transmission periods. The mean Eulerian velocity and its error were computed in each 2°×2° square. Whenever a 2°×2° square contained more than 150 Lagrangian velocity observations, it was split into four smaller 1°×1° squares, in each of which the mean Eulerian velocities and their errors were estimated. Eulerian currents at 1000 m, as well as at 1500 m depth, formed an overall anticyclonic circulation pattern in the study region. The modal velocity of all buoys at the 1000 m level was 4 cm/s with an error of the mean of 1.8 cm/s; at 1500 m it was 3 cm/s with an error of the mean of 1.4 cm/s. The southwestward flows near Madeira Island and the further westward flow along the 25-30° N zonal band at 1500 m depth correspond well to the extension of the deep fraction of the Mediterranean Water salt tongue.
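
    The processing chain, Lagrangian velocities from displacement over submerged time followed by box-averaged Eulerian means with errors, can be sketched as follows, with synthetic input arrays standing in for real float trajectories:

    import numpy as np

    def lagrangian_velocity(lon0, lat0, lon1, lat1, dt_s, lat_ref):
        """Velocity (m/s) from parking-start to surfacing positions."""
        m_per_deg = 111_000.0
        u = (lon1 - lon0) * m_per_deg * np.cos(np.radians(lat_ref)) / dt_s
        v = (lat1 - lat0) * m_per_deg / dt_s
        return u, v

    def bin_mean(lon, lat, u, v, box=2.0):
        """Mean Eulerian velocity and its standard error per box."""
        out = {}
        ix = np.floor(lon / box).astype(int)
        iy = np.floor(lat / box).astype(int)
        for key in set(zip(ix, iy)):
            sel = (ix == key[0]) & (iy == key[1])
            n = sel.sum()
            out[key] = (u[sel].mean(), v[sel].mean(),
                        u[sel].std(ddof=1) / np.sqrt(n) if n > 1 else np.nan)
        return out

    # Synthetic demonstration: 500 ten-day float cycles in the study box.
    rng = np.random.default_rng(4)
    lon0 = rng.uniform(-35, -5, 500); lat0 = rng.uniform(25, 45, 500)
    lon1 = lon0 + rng.normal(0, 0.05, 500); lat1 = lat0 + rng.normal(0, 0.05, 500)
    u, v = lagrangian_velocity(lon0, lat0, lon1, lat1, 10 * 86400, lat0)
    print(len(bin_mean(lon0, lat0, u, v)), "boxes gridded")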

  12. Task Selection, Task Switching and Multitasking during Computer-Based Independent Study

    ERIC Educational Resources Information Center

    Judd, Terry

    2015-01-01

    Detailed logs of students' computer use, during independent study sessions, were captured in an open-access computer laboratory. Each log consisted of a chronological sequence of tasks representing either the application or the Internet domain displayed in the workstation's active window. Each task was classified using a three-tier schema…

  13. 20 CFR 225.23 - Combined Earnings PIA used in survivor annuities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... section 215 of the Social Security Act as in effect on December 31, 1974. It is computed using the... RAILROAD RETIREMENT ACT PRIMARY INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Survivor Annuities... annuities. The Combined Earnings PIA used in survivor annuities may be used in computing the tier II...

  14. 20 CFR 225.24 - SS Earnings PIA used in survivor annuities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Security Earnings PIA (SS Earnings PIA) used in survivor annuities may be used in computing the tier II... the Social Security Act as in effect on December 31, 1974. It is computed using the deceased employee... RETIREMENT ACT PRIMARY INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Survivor Annuities and the...

  15. Wireless Testbed Bonsai

    DTIC Science & Technology

    2006-02-01

    wireless sensor device network, and about 200 Stargate nodes in a higher-tier multi-hop peer-to-peer 802.11b wireless network. Leading up to the full ExScal...deployment, we conducted spatial scaling tests of our higher-tier protocols on a 7 × 7 grid of Stargate nodes with 45 m and 90 m separations, respectively...on W and its scaled version W̃. III. EXPERIMENTAL SETUP Description of Kansei testbed. A Stargate is a single-board Linux-based computer [7]. It uses a

  16. 76 FR 45742 - Fisheries of the Northeastern United States; Atlantic Mackerel, Squid, and Butterfish Fisheries...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... vessel must have landed at least 400,000 lb (181.44 mt) in any one year 1997-2005 to qualify for a Tier 1... Tier 2 permit; or at least 1,000 lb (0.45 mt) in any one year March 1, 1994--December 31, 2005, to qualify for a Tier 3 permit, with Tier 3 allocated up to 7 percent of the commercial quota, through the...

  17. The effect of resolution on viscous dissipation measured with 4D flow MRI in patients with Fontan circulation: Evaluation using computational fluid dynamics

    PubMed Central

    Cibis, Merih; Jarvis, Kelly; Markl, Michael; Rose, Michael; Rigsby, Cynthia; Barker, Alex J.; Wentzel, Jolanda J.

    2016-01-01

    Viscous dissipation inside the Fontan circulation, a parameter associated with the exercise intolerance of Fontan patients, can be derived from computational fluid dynamics (CFD) or 4D flow MRI velocities. However, the impact of spatial resolution and measurement noise on the estimation of viscous dissipation is unclear. Our aim was to evaluate the influence of these parameters on the calculation of viscous dissipation. Six Fontan patients underwent whole-heart 4D flow MRI. Subject-specific CFD simulations were performed. The CFD velocities were down-sampled to isotropic spatial resolutions of 0.5 mm, 1 mm, 2 mm and to the MRI resolution. Viscous dissipation was compared between (1) high-resolution CFD velocities, (2) CFD velocities down-sampled to MRI resolution, (3) down-sampled CFD velocities with MRI-mimicked noise levels, and (4) in-vivo 4D flow MRI velocities. Relative viscous dissipation between subjects was also calculated. 4D flow MRI velocities (15.6±3.8 cm/s) were higher than, although not significantly different from, CFD velocities (13.8±4.7 cm/s, p=0.16), down-sampled CFD velocities (12.3±4.4 cm/s, p=0.06) and down-sampled CFD velocities with noise (13.2±4.2 cm/s, p=0.06). CFD-based viscous dissipation (0.81±0.55 mW) was higher than that based on down-sampled CFD (0.25±0.19 mW, p=0.03), down-sampled CFD with noise (0.49±0.26 mW, p=0.03) and 4D flow MRI (0.56±0.28 mW, p=0.06). Nevertheless, relative viscous dissipation between different subjects was maintained irrespective of resolution and noise, suggesting that comparison of viscous dissipation between patients is still possible. PMID:26298492
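
    For an incompressible flow, voxel-wise viscous dissipation is commonly computed from the strain-rate tensor as phi = 2*mu*sum_ij S_ij*S_ij. A sketch of that calculation (not the authors' code) on a synthetic shear flow:

    import numpy as np

    def viscous_dissipation(u, v, w, dx, mu=1.0e-3):
        """Total dissipation (W) on a uniform grid with spacing dx (m)."""
        grads = [np.gradient(c, dx) for c in (u, v, w)]  # grads[i][j] = d c_i / d x_j
        phi = np.zeros_like(u)
        for i in range(3):
            for j in range(3):
                s_ij = 0.5 * (grads[i][j] + grads[j][i])  # strain-rate tensor
                phi += 2.0 * mu * s_ij ** 2
        return phi.sum() * dx ** 3  # integrate over voxel volume

    # Synthetic shear flow as a stand-in for patient data:
    n, dx = 32, 1.0e-3
    z = np.linspace(0.0, 1.0, n)
    u = np.tile(z, (n, n, 1))                      # u varies along the third axis
    v = np.zeros((n, n, n)); w = np.zeros((n, n, n))
    print("dissipation: %.3e W" % viscous_dissipation(u, v, w, dx))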

  18. 40 CFR Table 1 to Subpart Qqqqq of... - Applicability of General Provisions to Subpart QQQQQ

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... § 63.8(e) CMS Performance Evaluation No Subpart QQQQQ does not require CMS performance evaluations... QQQQQ does not require performance tests or CMS performance evaluations. § 63.9(e) Notification of... CMS No Subpart QQQQQ does not require CMS performance evaluations. § 63.10(a), (b), (d)(1), (d)(4)-(5...

  19. 78 FR 68895 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-15

    ... Options Classes, Category A Category B excluding SPY Options (Monthly) Tier 1 0.00%-0.75% 0.00 0.00 Tier 2 Above 0.75%-1.60% 0.12 0.17 Tier 3 Above 1.60%-2.60% 0.14 0.17 Tier 4 Above 2.60% 0.15 0.17 The Exchange... Securities Exchange Act of 1934 (``Act'') \\1\\ and Rule 19b-4 thereunder,\\2\\ notice is hereby given that, on...

  20. An Examination of the Efficacy of a Multitiered Intervention on Early Reading Outcomes for First Grade Students at Risk for Reading Difficulties.

    PubMed

    Fien, Hank; Smith, Jean Louise M; Smolkowski, Keith; Baker, Scott K; Nelson, Nancy J; Chaparro, Erin

    2015-01-01

    This article presents findings of an efficacy trial examining the effect of a multitiered instruction and intervention model on first grade at-risk students' reading outcomes. Schools (N = 16) were randomly assigned to the treatment or control condition. In the fall of Grade 1, students were assigned to an instructional tier on the basis of Stanford Achievement Test-10th Edition scores (31st percentile and above = Tier 1; from the 10th to the 30th percentile = Tier 2). In both conditions, students identified as at risk (i.e., Tier 2; n = 267) received 90 min of whole group instruction (Tier 1) and an additional 30 min of daily small group intervention (Tier 2). In the treatment condition, teachers were trained to enhance core reading instruction by making instruction more explicit and increasing practice opportunities for students in Tier 1. In addition, at-risk readers were provided an additional 30-min daily small group intervention with content that was highly aligned with the Tier 1 core reading program. Results indicate significant, positive effects of the intervention on students' decoding and first semester fluent reading and potentially positive effects on reading comprehension and total reading achievement. © Hammill Institute on Disabilities 2014.

  1. Immunologic response among HIV-infected patients enrolled in a graduated cost-recovery programme of antiretroviral therapy delivery in Chennai, India.

    PubMed

    Solomon, Sunil Suhas; Ganesh, Aylur K; Mehta, Shruti H; Yepthomi, Tokugha; Balaji, Kavitha; Anand, Santhanam; Gallant, Joel E; Solomon, Suniti

    2013-06-01

    Sustainability of free antiretroviral therapy (ART) roll-out programmes in resource-limited settings is challenging given the need for lifelong therapy and the lack of an effective vaccine. This study was undertaken to compare treatment outcomes among HIV-infected patients enrolled in a graduated cost-recovery (GCR) programme of ART delivery in Chennai, India. The financial status of patients accessing care at a tertiary care centre, YRGCARE, Chennai, was assessed using an economic survey; patients were distributed into tiers 1-4, requiring them to pay 0, 50, 75 or 100 per cent of their medication costs, respectively. A total of 1754 participants (ART-naïve = 244) were enrolled from February 2005 to January 2008 with the following distribution: tier 1 = 371; tier 2 = 338; tier 3 = 693; tier 4 = 352. Linear regression models with generalized estimating equations were used to examine the immunological response among patients across the four tiers. Median age was 34 years; 73 per cent were male, and the majority were on nevirapine-based regimens. Median follow-up was 11.1 months. The mean increase in CD4 cell count within the first three months of HAART was 50.3 cells/μl per month in tier 1. Compared to those in tier 1, persons in tiers 2, 3 and 4 had comparable increases (49.7, 57.0, and 50.9 cells/μl per month, respectively). Increases in subsequent periods (3-18 and >18 months) were also comparable across tiers. No differential CD4 gains across tiers were observed when the analysis was restricted to patients initiating ART under the GCR programme. This ART delivery model was associated with significant CD4 gains with no observable difference by how much patients paid. Importantly, gains were comparable to those in other free roll-out programmes. Additional cost-effectiveness analyses and mathematical modelling would be needed to determine whether such a delivery programme is a sustainable alternative to free ART programmes.
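
    Linear regression with generalized estimating equations, as named above, is available in statsmodels. A minimal sketch on synthetic data, with invented column names and effect sizes:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_patients, n_visits = 200, 4
    df = pd.DataFrame({
        "patient": np.repeat(np.arange(n_patients), n_visits),
        "months": np.tile(np.arange(n_visits) * 3, n_patients),
        "tier": np.repeat(rng.integers(1, 5, n_patients), n_visits),
    })
    df["cd4"] = 150 + 15 * df["months"] + rng.normal(0, 40, len(df))

    # GEE handles the repeated measures within each patient.
    model = smf.gee("cd4 ~ months * C(tier)", groups="patient", data=df,
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary().tables[1])

    The months-by-tier interaction terms estimate whether the CD4 slope differs by payment tier, which is the comparison the study reports as null.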

  2. Building Tier 3 Intervention for Long-Term Slow Growers in Grades 3-4: A Pilot Study

    ERIC Educational Resources Information Center

    Sanchez, Victoria; O'Connor, Rollanda E.

    2015-01-01

    Tier 3 interventions are necessary for students who fail to respond adequately to Tier 1 general education instruction and Tier 2 supplemental reading intervention instruction. We identified 8 students in 3rd and 4th grade who had demonstrated a slow response to Tier 2 reading interventions for three years. Students participated in a…

  3. Assessment of the concordance among 2-tier, 3-tier, and 5-tier fetal heart rate classification systems.

    PubMed

    Gyamfi Bannerman, Cynthia; Grobman, William A; Antoniewicz, Leah; Hutchinson, Maria; Blackwell, Sean

    2011-09-01

    In 2008, a National Institute of Child Health and Human Development/Society for Maternal-Fetal Medicine-sponsored workshop on electronic fetal monitoring recommended a new fetal heart tracing interpretation system. A comparison of this 3-tier system with other systems has been lacking. Our purpose was to determine the relationships between fetal heart rate categories across the 3 existing systems. Three Maternal-Fetal Medicine specialists reviewed 120 fetal heart rate tracings. All tracings were from term, singleton pregnancies with known umbilical artery pH. The fetal heart rates were classified by a 2-tier, a 3-tier, and a 5-tier system; each examiner reviewed 120 fetal heart rate segments. When compared with the 2-tier system, 0%, 54%, and 100% of tracings in categories 1, 2, and 3 were "nonreassuring." There was strong concordance between category 1 and "green" tracings, as well as between category 3 and "red" tracings. The 3-tier and 5-tier systems were similar in their interpretations of tracings that were either very normal or very abnormal. Whether one system is superior to the others in predicting fetal acidemia remains unknown. Copyright © 2011 Mosby, Inc. All rights reserved.

  4. 26 CFR 1.444-4 - Tiered structure.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 6 2010-04-01 2010-04-01 false Tiered structure. 1.444-4 Section 1.444-4 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Accounting Periods § 1.444-4 Tiered structure. (a) Electing small business trusts. For...

  5. XENOENDOCRINE DISRUPTERS-TIERED SCREENING AND TESTING: FILLING KEY DATA GAPS

    EPA Science Inventory

    The US Environmental Protection Agency (EPA) is developing a screening and testing program for endocrine disrupting chemicals (EDCs). High priority chemicals would be evaluated in the Tier 1 Screening (T1S) battery. Chemicals positive in T1S would then be tested (Tier 2). T1S...

  6. Expanding clarity or confusion? Volatility of the 5-tier ratings assessing quality of transplant centers in the United States.

    PubMed

    Schold, Jesse D; Andreoni, Kenneth A; Chandraker, Anil K; Gaston, Robert S; Locke, Jayme E; Mathur, Amit K; Pruett, Timothy L; Rana, Abbas; Ratner, Lloyd E; Buccini, Laura D

    2018-06-01

    Outcomes of patients receiving solid organ transplants in the United States are systematically aggregated into bi-annual Program-Specific Reports (PSRs) detailing risk-adjusted survival by transplant center. Recently, the Scientific Registry of Transplant Recipients (SRTR) issued 5-tier ratings evaluating centers based on risk-adjusted 1-year graft survival. Our primary aim was to examine the reliability of 5-tier ratings over time. Using 10 consecutive PSRs for adult kidney transplant centers from June 2012 to December 2016 (n = 208), we applied 5-tier ratings to center outcomes and evaluated ratings over time. From the baseline period (June 2012), 47% of centers had at least a 1-unit tier change within 6 months, 66% by 1 year, and 94% by 3 years. Similarly, 46% of centers had at least a 2-unit tier change by 3 years. In comparison, 15% of centers had a change in the traditional 3-tier rating at 3 years. The 5-tier ratings at 4 years had minimal association with baseline rating (Kappa 0.07, 95% confidence interval [CI] -0.002 to 0.158). Centers had a median of 3 different 5-tier ratings over the period (q1 = 2, q3 = 4). Findings were consistent for center volume, transplant rate, and baseline 5-tier rating. Cumulatively, results suggest that 5-tier ratings are highly volatile, limiting their utility for informing potential stakeholders, particularly transplant candidates given expected waiting times between wait listing and transplantation. © 2018 The American Society of Transplantation and the American Society of Transplant Surgeons.
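
    Agreement between ratings at two time points is typically quantified with Cohen's kappa, as quoted above. A minimal sketch on synthetic ratings, not the SRTR data:

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(2)
    baseline = rng.integers(1, 6, 208)                           # 5-tier rating per center
    later = np.clip(baseline + rng.integers(-2, 3, 208), 1, 5)   # volatile re-rating

    print("kappa:          %.3f" % cohen_kappa_score(baseline, later))
    print("weighted kappa: %.3f" % cohen_kappa_score(baseline, later,
                                                     weights="quadratic"))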

  7. Senp1 drives hypoxia-induced polycythemia via GATA1 and Bcl-xL in subjects with Monge’s disease

    PubMed Central

    Azad, Priti; Zhao, Huiwen W.; Ronen, Roy; Zhou, Dan; Poulsen, Orit; Hsiao, Yu Hsin; Bafna, Vineet

    2016-01-01

    In this study, because excessive polycythemia is a predominant trait in some high-altitude dwellers (chronic mountain sickness [CMS] or Monge’s disease) but not others living at the same altitude in the Andes, we took advantage of this human experiment of nature and used a combination of induced pluripotent stem cell technology, genomics, and molecular biology in this unique population to understand the molecular basis for hypoxia-induced excessive polycythemia. As compared with sea-level controls and non-CMS subjects who responded to hypoxia by increasing their RBCs modestly or not at all, respectively, CMS cells increased theirs remarkably (up to 60-fold). Although there was a switch from fetal to adult HgbA0 in all populations and a concomitant shift in oxygen binding, we found that CMS cells matured faster and had a higher efficiency and proliferative potential than non-CMS cells. We also established that SENP1 plays a critical role in the differential erythropoietic response of CMS and non-CMS subjects: we can convert the CMS phenotype into that of non-CMS and vice versa by altering SENP1 levels. We also demonstrated that GATA1 is an essential downstream target of SENP1 and that the differential expression and response of GATA1 and Bcl-xL are a key mechanism underlying CMS pathology. PMID:27821551

  8. Senp1 drives hypoxia-induced polycythemia via GATA1 and Bcl-xL in subjects with Monge's disease.

    PubMed

    Azad, Priti; Zhao, Huiwen W; Cabrales, Pedro J; Ronen, Roy; Zhou, Dan; Poulsen, Orit; Appenzeller, Otto; Hsiao, Yu Hsin; Bafna, Vineet; Haddad, Gabriel G

    2016-11-14

    In this study, because excessive polycythemia is a predominant trait in some high-altitude dwellers (chronic mountain sickness [CMS] or Monge's disease) but not others living at the same altitude in the Andes, we took advantage of this human experiment of nature and used a combination of induced pluripotent stem cell technology, genomics, and molecular biology in this unique population to understand the molecular basis for hypoxia-induced excessive polycythemia. As compared with sea-level controls and non-CMS subjects who responded to hypoxia by increasing their RBCs modestly or not at all, respectively, CMS cells increased theirs remarkably (up to 60-fold). Although there was a switch from fetal to adult HgbA0 in all populations and a concomitant shift in oxygen binding, we found that CMS cells matured faster and had a higher efficiency and proliferative potential than non-CMS cells. We also established that SENP1 plays a critical role in the differential erythropoietic response of CMS and non-CMS subjects: we can convert the CMS phenotype into that of non-CMS and vice versa by altering SENP1 levels. We also demonstrated that GATA1 is an essential downstream target of SENP1 and that the differential expression and response of GATA1 and Bcl-xL are a key mechanism underlying CMS pathology. © 2016 Azad et al.

  9. Three-tier rough superhydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Cao, Yuanzhi; Yuan, Longyan; Hu, Bin; Zhou, Jun

    2015-08-01

    A three-tier rough superhydrophobic surface was fabricated by growing hydrophobically modified (fluorinated silane) zinc oxide (ZnO)/copper oxide (CuO) hetero-hierarchical structures on silicon (Si) micro-pillar arrays. Compared with the other three control samples with a less rough tier, the three-tier surface exhibits the best water repellency, with the largest contact angle of 161° and the lowest sliding angle of 0.5°. It also shows a robust Cassie state, which enables water to flow at a speed over 2 m s⁻¹. In addition, it prevents itself from being wetted by droplets with low surface tension (water and ethanol mixed 1:1 by volume) arriving at a flow speed of 0.6 m s⁻¹ (dropped from a height of 2 cm). All these features prove that adding another rough tier to a two-tier rough surface can further improve its water-repellent properties.

  10. Three-tier rough superhydrophobic surfaces.

    PubMed

    Cao, Yuanzhi; Yuan, Longyan; Hu, Bin; Zhou, Jun

    2015-08-07

    A three-tier rough superhydrophobic surface was fabricated by growing hydrophobically modified (fluorinated silane) zinc oxide (ZnO)/copper oxide (CuO) hetero-hierarchical structures on silicon (Si) micro-pillar arrays. Compared with the other three control samples with a less rough tier, the three-tier surface exhibits the best water repellency, with the largest contact angle of 161° and the lowest sliding angle of 0.5°. It also shows a robust Cassie state, which enables water to flow at a speed over 2 m s⁻¹. In addition, it prevents itself from being wetted by droplets with low surface tension (water and ethanol mixed 1:1 by volume) arriving at a flow speed of 0.6 m s⁻¹ (dropped from a height of 2 cm). All these features prove that adding another rough tier to a two-tier rough surface can further improve its water-repellent properties.

  11. 12 CFR 3.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., deferred tax assets, and credit-enhancing interest-only strips, that are deducted from Tier 1 capital, and minus nonfinancial equity investments for which a Tier 1 capital deduction is required pursuant to... carry out the purposes of this part. (b) Bank means a national banking association. (c) Tier 1 capital...

  12. A single, continuous metric to define tiered serum neutralization potency against HIV

    DOE PAGES

    Hraber, Peter Thomas; Korber, Bette Tina Marie; Wagh, Kshitij; ...

    2018-01-19

    HIV-1 Envelope (Env) variants are grouped into tiers by their neutralization-sensitivity phenotype. This helped to recognize that tier 1 neutralization responses can be elicited readily, but do not protect against new infections. Tier 3 viruses are the least sensitive to neutralization. Because most circulating viruses are tier 2, vaccines that elicit neutralization responses against them are needed. While tier classification is widely used for viruses, a way to rate serum or antibody neutralization responses in comparable terms is needed. Logistic regression of neutralization outcomes summarizes serum or antibody potency on a continuous, tier-like scale. It also tests the significance of the neutralization score, to indicate cases where the serum response does not depend on virus tiers. The method can standardize results from different virus panels, and could lead to high-throughput assays, which evaluate a single serum dilution, rather than a dilution series, for more efficient use of limited resources to screen samples from vaccinees.
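
    The idea of scoring a serum on a continuous, tier-like scale by logistic regression can be sketched generically; the data below are synthetic, and this is an illustration of the concept rather than the authors' panel or code:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    virus_tier = rng.integers(1, 4, size=120).reshape(-1, 1)   # panel of viruses
    # A serum with potency ~2 neutralizes viruses up to roughly tier 2:
    outcome = (virus_tier.ravel() <= 2).astype(int)
    outcome ^= rng.random(120) < 0.1                           # 10% assay noise

    fit = LogisticRegression().fit(virus_tier, outcome)
    potency = -fit.intercept_[0] / fit.coef_[0, 0]   # tier at 50% neutralization
    print("continuous potency score: %.2f" % potency)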

  13. A single, continuous metric to define tiered serum neutralization potency against HIV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hraber, Peter Thomas; Korber, Bette Tina Marie; Wagh, Kshitij

    HIV-1 Envelope (Env) variants are grouped into tiers by their neutralization-sensitivity phenotype. This helped to recognize that tier 1 neutralization responses can be elicited readily, but do not protect against new infections. Tier 3 viruses are the least sensitive to neutralization. Because most circulating viruses are tier 2, vaccines that elicit neutralization responses against them are needed. While tier classification is widely used for viruses, a way to rate serum or antibody neutralization responses in comparable terms is needed. Logistic regression of neutralization outcomes summarizes serum or antibody potency on a continuous, tier-like scale. It also tests the significance of the neutralization score, to indicate cases where the serum response does not depend on virus tiers. The method can standardize results from different virus panels, and could lead to high-throughput assays, which evaluate a single serum dilution, rather than a dilution series, for more efficient use of limited resources to screen samples from vaccinees.

  14. Pharmacokinetics of colistin methanesulfonate (CMS) in healthy Chinese subjects after single and multiple intravenous doses.

    PubMed

    Zhao, Miao; Wu, Xiao-Jie; Fan, Ya-Xin; Zhang, Ying-Yuan; Guo, Bei-Ning; Yu, Ji-Cheng; Cao, Guo-Ying; Chen, Yuan-Cheng; Wu, Ju-Fang; Shi, Yao-Guo; Li, Jian; Zhang, Jing

    2018-05-01

    The high prevalence of extensively drug-resistant Gram-negative pathogens has forced clinicians to use colistin as a last-line therapy. Knowledge on the pharmacokinetics of colistin methanesulfonate (CMS), an inactive prodrug, and colistin has increased substantially; however, the pharmacokinetics in the Chinese population was unknown due to the lack of a CMS product in China. This study aimed to evaluate the pharmacokinetics of a new CMS product developed in China in order to optimise dosing regimens. A total of 24 healthy subjects (12 female, 12 male) were enrolled in single- and multiple-dose pharmacokinetic (PK) studies. Concentrations of CMS and formed colistin in plasma and urine were measured, and PK analysis was conducted using a non-compartmental approach. Following a single CMS dose [2.36 mg colistin base activity (CBA) per kg, 1-h infusion], peak concentrations (Cmax) of CMS and formed colistin were 18.0 mg/L and 0.661 mg/L, respectively. The estimated half-lives (t1/2) of CMS and colistin were 1.38 h and 4.49 h, respectively. Approximately 62.5% of the CMS dose was excreted in urine within 24 h after dosing, whilst only 1.28% was present in the form of colistin. Following multiple CMS doses, colistin reached steady state within 24 h; there was no accumulation of CMS, but colistin accumulated slightly (RAUC = 1.33). This study provides the first PK data in the Chinese population and is essential for designing CMS dosing regimens for use in Chinese hospitals. The urinary PK data strongly support the use of intravenous CMS for serious urinary tract infections. Copyright © 2018 Elsevier B.V. and International Society of Chemotherapy. All rights reserved.
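
    The non-compartmental quantities quoted above (Cmax, t1/2, AUC) come from standard calculations: Cmax is read off the profile, and the terminal half-life comes from a log-linear fit of the last sampling points. A sketch with synthetic concentrations, not the study data:

    import numpy as np

    t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])     # h, sampling times
    c = np.array([9.0, 18.0, 11.0, 4.2, 1.6, 0.6])   # mg/L, synthetic plasma CMS

    cmax = c.max()
    # Terminal phase: fit ln(C) vs t over the last three samples.
    slope, intercept = np.polyfit(t[-3:], np.log(c[-3:]), 1)
    t_half = np.log(2) / -slope
    auc = np.trapz(c, t)                              # mg*h/L, linear trapezoid

    print(f"Cmax {cmax:.1f} mg/L, t1/2 {t_half:.2f} h, AUC {auc:.1f} mg*h/L")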

  15. Underreporting of nursing home utilization on the CMS-2728 in older incident dialysis patients and implications for assessing mortality risk.

    PubMed

    Bowling, C Barrett; Zhang, Rebecca; Franch, Harold; Huang, Yijian; Mirk, Anna; McClellan, William M; Johnson, Theodore M; Kutner, Nancy G

    2015-03-21

    The usage of nursing home (NH) services is a marker of frailty among older adults. Although the Centers for Medicare & Medicaid Services (CMS) revised the Medical Evidence Report Form CMS-2728 in 2005 to include data collection on NH institutionalization, the validity of this item has not been reported. There were 27,913 patients ≥ 75 years of age with incident end-stage renal disease (ESRD) in 2006, which constituted our analysis cohort. We determined the accuracy of the CMS-2728 using a matched cohort that included the CMS Minimum Data Set (MDS) 2.0, often employed as a "gold standard" metric for identifying patients receiving NH care. We calculated sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for the CMS-2728 NH item. Next, we compared characteristics and mortality risk by CMS-2728 and MDS NH status agreement. The sensitivity, specificity, PPV and NPV of the CMS-2728 for NH status were 33%, 97%, 80% and 79%, respectively. Compared to those without the MDS or CMS-2728 NH indicator (No MDS/No 2728), multivariable adjusted hazard ratios (95% confidence interval) for mortality associated with NH status were 1.55 (1.46 - 1.64) for MDS/2728, 1.48 (1.42 - 1.54) for MDS/No 2728, and 1.38 (1.25 - 1.52) for No MDS/2728. NH utilization was more strongly associated with mortality than other CMS-2728 items in the model. The CMS-2728 underestimated NH utilization among older adults with incident ESRD. The potential for misclassification may have important ramifications for assessing prognosis, developing advanced care plans and providing coordinated care.
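
    Sensitivity, specificity, PPV and NPV follow directly from the confusion matrix of CMS-2728 versus MDS nursing-home status. A sketch with hypothetical counts chosen only to be roughly consistent with the reported percentages, not the study's actual cross-tabulation:

    def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
        return {
            "sensitivity": tp / (tp + fn),   # MDS+ correctly flagged on the 2728
            "specificity": tn / (tn + fp),   # MDS- correctly left unflagged
            "ppv": tp / (tp + fp),           # 2728+ that are truly MDS+
            "npv": tn / (tn + fn),           # 2728- that are truly MDS-
        }

    # Hypothetical confusion-matrix counts for illustration only:
    print(diagnostic_metrics(tp=2000, fp=500, fn=4000, tn=21413))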

  16. Building Tier 3 Intervention for Long-Term Slow Growers in Grades 3-4: A Pilot Study

    ERIC Educational Resources Information Center

    Sanchez, Victoria M.; O'Connor, Rollanda E.

    2015-01-01

    Tier 3 interventions are necessary for improving the reading performance of students who fail to respond adequately to Tier 1 general education instruction and Tier 2 supplemental reading intervention. In this pilot study, we identified 8 students in 3rd and 4th grade who had demonstrated slow response to Tier 2 reading interventions for three…

  17. Computing Fiber/Matrix Interfacial Effects In SiC/RBSN

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Hopkins, Dale A.

    1996-01-01

    Computational study conducted to demonstrate use of boundary-element method in analyzing effects of fiber/matrix interface on elastic and thermal behaviors of representative laminated composite materials. In study, boundary-element method implemented by Boundary Element Solution Technology - Composite Modeling System (BEST-CMS) computer program.

  18. Low extracellular potassium prolongs repolarization and evokes early afterdepolarization in human induced pluripotent stem cell-derived cardiomyocytes.

    PubMed

    Kuusela, Jukka; Larsson, Kim; Shah, Disheet; Prajapati, Chandra; Aalto-Setälä, Katriina

    2017-06-15

    Long QT syndrome (LQTS) is characterized by a prolonged QT interval on the electrocardiogram and by an increased risk of sudden death. One of the most common and potentially life-threatening electrolyte disturbances is hypokalemia, characterized by low concentrations of K+. Using a multielectrode array platform and the current clamp technique, we investigated the effect of low extracellular K+ concentration ([K+]Ex) on the electrophysiological properties of hiPSC-derived cardiomyocytes (CMs) generated from a healthy control subject (WT) and from two symptomatic patients with type 1 LQTS carrying the G589D (LQT1A) or IVS7-2A>G (LQT1B) mutation in KCNQ1. The baseline field potential durations (FPDs) and action potential durations (APDs) were longer in LQT1-CMs than in WT-CMs. Exposure to low [K+]Ex prolonged FPDs and APDs in a concentration-dependent fashion. LQT1-CMs were found to be more sensitive to low [K+]Ex than WT-CMs. At baseline, LQT1A-CMs had more prolonged APDs than LQT1B-CMs, but low [K+]Ex caused more pronounced APD prolongation in LQT1B-CMs. Early afterdepolarizations in the action potentials were observed in a subset of LQT1A-CMs with further prolonged baseline APDs and triangular phase 2 profiles. This work demonstrates that hiPSC-derived CMs are sensitive to low [K+]Ex and provides a platform to study acquired LQTS. © 2017. Published by The Company of Biologists Ltd.

  19. Examining the Effects and Feasibility of a Teacher-Implemented Tier 1 and Tier 2 Intervention in Word Reading, Fluency, and Comprehension

    ERIC Educational Resources Information Center

    Solari, Emily J.; Denton, Carolyn A.; Petscher, Yaacov; Haring, Christa

    2018-01-01

    This study investigates the effects and feasibility of an intervention for first-grade students at risk for reading difficulties or disabilities (RD). The intervention was provided by general education classroom teachers and consisted of 15 min whole-class comprehension lessons (Tier 1) and 30 min Tier 2 intervention sessions in word reading,…

  20. 40 CFR 89.205 - Banking.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Provisions § 89.205 Banking. (a) Requirements for Tier 1 engines rated at or above 37 kW. (1) A manufacturer... from Tier 1 engines under the provisions specified in § 89.207(b) for use in averaging and trading in... Tier 1 and later engines rated under 37 kW. (1) A manufacturer of a nonroad engine family with an NMHC...

  1. Storing files in a parallel computing system based on user or application specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.

    2016-03-29

    Techniques are provided for storing files in a parallel computing system based on a user specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored, and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.
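
    The description above maps naturally onto a small placement function: match each file against the specification's patterns and route it to a node in the selected tier. Everything below (patterns, tier names, node lists) is invented for illustration, not the patented mechanism:

    import fnmatch

    SPEC = {
        "*.ckpt": "flash_tier",   # checkpoints to the fast burst buffer
        "*.h5": "disk_tier",      # analysis output to parallel disk
        "*": "tape_tier",         # everything else to the capacity tier
    }

    TIER_NODES = {
        "flash_tier": ["bb01", "bb02"],
        "disk_tier": ["oss01", "oss02", "oss03"],
        "tape_tier": ["tape01"],
    }

    def place(filename: str) -> str:
        """Return the storage node for a file, per the application's spec."""
        for pattern, tier in SPEC.items():
            if fnmatch.fnmatch(filename, pattern):
                nodes = TIER_NODES[tier]
                return nodes[hash(filename) % len(nodes)]  # spread within a tier
        raise ValueError(filename)

    print(place("model_epoch3.ckpt"), place("events.h5"), place("run.log"))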

  2. 6 CFR 27.220 - Tiering.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Tiering. 27.220 Section 27.220 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical Facility Security Program § 27.220 Tiering. (a) Preliminary Determination of Risk-Based Tiering. Based on...

  3. 12 CFR 3.3 - Transitional rules.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the OCC, need not be deducted from Tier 1 capital until December 31, 1992. However, when combined with other qualifying intangible assets, these intangibles may not exceed 25 percent of Tier 1 capital. After... appendix A will not be deducted from Tier 1 capital. [55 FR 38800, Sept. 21, 1990] ...

  4. A Two-Tier Test-Based Approach to Improving Students' Computer-Programming Skills in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Yang, Tzu-Chi; Hwang, Gwo-Jen; Yang, Stephen J. H.; Hwang, Gwo-Haur

    2015-01-01

    Computer programming is an important skill for engineering and computer science students. However, teaching and learning programming concepts and skills has been recognized as a great challenge to both teachers and students. Therefore, the development of effective learning strategies and environments for programming courses has become an important…

  5. 76 FR 11381 - Magnuson-Stevens Act Provisions; Fisheries Off West Coast States; Pacific Coast Groundfish...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-02

    ... calculate the fixed gear primary sablefish fishery tier limits for 2011 at a level that will reduce concerns..., 2011, NMFS is implementing the following decrease in the annual tier limits for sablefish for 2011 and beyond: From Tier 1 at 56,081-lb (25,437 kg), Tier 2 at 25,492-lb (11,562 kg), and Tier 3 at 14,567-lb (6...

  6. On the escape of oxygen and hydrogen from Mars

    NASA Technical Reports Server (NTRS)

    Fox, J. L.

    1993-01-01

    Escape rates of oxygen atoms from dissociative recombination of O2(+) above the Martian exobase are computed in light of new information from ab initio calculations of the dissociative recombination process and our recently revised understanding of the Martian dayside ionosphere. Only about 60 percent of the dissociative recombinations occur in channels in which the O atoms are released with energies in excess of the escape velocity. Furthermore, we find that the computed escape fluxes for O depend greatly on the nature of the ion loss process that has been found necessary to reproduce the topside ion density profiles measured by Viking. If it is assumed that the ions are not lost from the gravitational field of the planet, as required by an analysis of nitrogen escape, the computed average O escape rate is 3 × 10^6 cm^-2 s^-1, much less than half the H escape rates inferred from measurements of the Lyman-alpha dayglow, which are in the range (1-2) × 10^8 cm^-2 s^-1. Suggestions for restoring the relative escape rates of H and O to the stoichiometric ratio of water are explored.
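
    The stoichiometric argument in this abstract can be checked with one line of arithmetic: water loss requires two H atoms to escape for every O atom, yet the quoted fluxes give a far larger ratio. A minimal check using the abstract's own numbers:

        # Quick check of the H:O escape-flux ratio quoted in the abstract.
        # Water-stoichiometric escape would give H/O = 2.
        o_flux = 3e6            # O escape, cm^-2 s^-1 (computed average)
        h_flux = (1e8, 2e8)     # H escape range from Lyman-alpha dayglow
        for h in h_flux:
            print(f"H/O = {h / o_flux:.0f}  (stoichiometric ratio would be 2)")
        # -> H/O = 33 and 67, far above 2, hence the imbalance discussed.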

  7. 26 CFR 1.1503-2 - Dual consolidated loss.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... tiers of separate units. If a separate unit of a domestic corporation is owned indirectly through... upper-tier separate unit were a subsidiary of the domestic corporation and the lower-tier separate unit were a lower-tier subsidiary. (4) Examples. The following examples illustrate the application of this...

  8. 78 FR 48169 - Privacy Act of 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-07

    ...), Defense Manpower Data Center (DMDC) and the Office of the Assistant Secretary of Defense (Health Affairs.../TRICARE. DMDC will receive the results of the computer match and provide the information to TMA for use in...

  9. 75 FR 54162 - Privacy Act of 1974

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ... Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), amended the... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare and Medicaid Services [CMS Computer Match No. 2010-01; HHS Computer Match No. 1006] Privacy Act of 1974 AGENCY: Department of Health and...

  10. 78 FR 69926 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare & Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0059] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare & Medicaid Services (CMS))--Match Number 1076 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...

  11. 76 FR 21091 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare & Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-14

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2011-0022] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare & Medicaid Services (CMS))--Match Number 1076 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...

  12. Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support

    PubMed Central

    Camargo, João; Rochol, Juergen; Gerla, Mario

    2018-01-01

    A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of cloud computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be triggered by request patterns and timing issues. To the best of our knowledge, existing work on fog computing focuses on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. We also evaluate such service migration for video services. Finally, we present potential research challenges and trends. PMID:29364172

  13. Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support.

    PubMed

    Rosário, Denis; Schimuneck, Matias; Camargo, João; Nobre, Jéferson; Both, Cristiano; Rochol, Juergen; Gerla, Mario

    2018-01-24

    A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of cloud computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be triggered by request patterns and timing issues. To the best of our knowledge, existing work on fog computing focuses on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. We also evaluate such service migration for video services. Finally, we present potential research challenges and trends.

  14. IK1-enhanced human-induced pluripotent stem cell-derived cardiomyocytes: an improved cardiomyocyte model to investigate inherited arrhythmia syndromes

    PubMed Central

    Vaidyanathan, Ravi; Markandeya, Yogananda S.; Kamp, Timothy J.; Makielski, Jonathan C.; January, Craig T.

    2016-01-01

    Currently available induced pluripotent stem cell-derived cardiomyocytes (iPS-CMs) do not ideally model cellular mechanisms of human arrhythmic disease due to lack of a mature action potential (AP) phenotype. In this study, we create and characterize iPS-CMs with an electrically mature AP induced by potassium inward rectifier (IK1) enhancement. The advantages of IK1-enhanced iPS-CMs include the absence of spontaneous beating, stable resting membrane potentials at approximately −80 mV and capability for electrical pacing. Compared with unenhanced, IK1-enhanced iPS-CMs calcium transient amplitudes were larger (P < 0.05) with a typical staircase pattern. IK1-enhanced iPS-CMs demonstrated a twofold increase in cell size and membrane capacitance and increased DNA synthesis compared with control iPS-CMs (P < 0.05). Furthermore, IK1-enhanced iPS-CMs expressing the F97C-CAV3 long QT9 mutation compared with wild-type CAV3 demonstrated an increase in AP duration and late sodium current. IK1-enhanced iPS-CMs represent a more mature cardiomyocyte model to study arrhythmia mechanisms. PMID:27059077

  15. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  16. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  17. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  18. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  19. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  20. A novel photo-grafting of acrylamide onto carboxymethyl starch. 1. Utilization of CMS-g-PAAm in easy care finishing of cotton fabrics.

    PubMed

    El-Sheikh, Manal A

    2016-11-05

    The photosensitized grafting of vinyl monomers onto a range of polymeric substrates has been the subject of particular interest in the recent past. Carboxymethyl starch (CMS)-polyacrylamide (PAAm) graft copolymer (CMS-g-PAAm) with high graft yield was successfully prepared by grafting of acrylamide onto CMS using UV irradiation in the presence of the water-soluble 4-(trimethyl ammoniummethyl) benzophenone chloride photoinitiator. CMS-g-PAAm with a nitrogen content of 8.3% and grafting efficiency up to 98.9% was obtained using 100% AAm, a material-to-liquor ratio of 1:14, and 1% photoinitiator at 30°C for 1 h of UV irradiation. The synthesis of CMS-g-PAAm was confirmed by FTIR and nitrogen content (%). Surface morphology of CMS and surface morphological changes of CMS after grafting with AAm were studied using SEM. Thermal properties of both CMS and CMS-g-PAAm were studied using TGA and DSC. To impart easy care finishing to cotton fabrics, aqueous formulations of CMS-g-PAAm, dimethylol dihydroxy ethylene urea (DMDHEU), a CMS-g-PAAm-DMDHEU mixture, or methylolated CMS-g-PAAm were used. Cotton fabrics were padded in these formulations, squeezed to a wet pick-up of 100%, dried at 100°C for 5 min, cured at 150°C for 5 min, washed at 50°C for 10 min, and air-dried. The CRA (crease recovery angle) of untreated fabrics and fabrics finished with a mixture of 2% CMS-g-PAAm and 10% DMDHEU or methylolated CMS-g-PAAm (10% formaldehyde) was 136°, 190°, and 288°, respectively. Increasing the number of washing cycles up to five cycles results in an insignificant decrease in the CRA and a significant decrease in RF (releasable formaldehyde) of finished fabric samples. The morphologies of the finished and unfinished cotton fabrics were examined by SEM. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. 78 FR 68108 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... for Low-Volume Issues November 6, 2013. Pursuant to Section 19(b)(1) \\1\\ of the Securities Exchange... 100,000 1,500 The Exchange introduced the current lowest-volume LMM rights fee tier on October 1, 2013...-volume LMM rights fee tier, beginning November 1, 2013 the fee for the next highest tier would apply...

  2. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    NASA Astrophysics Data System (ADS)

    Brun, R.; Duellmann, D.; Ganis, G.; Hanushevsky, A.; Janyst, L.; Peters, A. J.; Rademakers, F.; Sindrilaru, E.

    2011-12-01

    The use of proxy caches has been extensively studied in the HEP environment for efficient access of database data and showed significant performance gains with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, operational impact on site services and applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.
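
    To make the notion of cache efficiency concrete, here is a minimal LRU-cache simulation that estimates hit ratios for a synthetic, popularity-skewed access trace. It is an illustrative assumption only; the study itself used a modified XROOT proxy server and real ROOT analysis access patterns.

        # Minimal LRU-cache hit-ratio estimate for a synthetic file-access trace.
        from collections import OrderedDict
        import random

        def lru_hit_ratio(trace, capacity):
            cache, hits = OrderedDict(), 0
            for f in trace:
                if f in cache:
                    hits += 1
                    cache.move_to_end(f)
                else:
                    if len(cache) >= capacity:
                        cache.popitem(last=False)  # evict least recently used
                    cache[f] = True
            return hits / len(trace)

        random.seed(1)
        # Zipf-like popularity: a few "hot" files dominate, as in analysis reuse.
        trace = [f"file{min(int(random.paretovariate(1.2)), 500)}"
                 for _ in range(20000)]
        for cap in (10, 50, 200):
            print(cap, round(lru_hit_ratio(trace, cap), 3))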

  3. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, R.; Dullmann, D.; Ganis, G.

    2012-04-19

    The use of proxy caches has been extensively studied in the HEP environment for efficient access of database data and showed significant performance gains with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyze the possible performance gains, operational impact on site services and applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  4. The effect of cigarette prices on brand-switching in China: a longitudinal analysis of data from the ITC China Survey

    PubMed Central

    White, Justin S; Li, Jing; Hu, Teh-wei; Fong, Geoffrey T; Jiang, Yuan

    2014-01-01

    Background Recent studies have found that Chinese smokers are relatively unresponsive to cigarette prices. As the Chinese government contemplates higher tobacco taxes, it is important to understand the reasons for this low response. One possible explanation is that smokers buffer themselves from rising cigarette prices by switching to cheaper cigarette brands. Objective This study examines how cigarette prices influence consumers’ choices of cigarette brands in China. Methods This study uses panel data from the first three waves of the International Tobacco Control China Survey, drawn from six large cities in China and collected between 2006 and 2009. The study sample includes 3477 smokers who are present in at least two waves (8552 person-years). Cigarette brands are sorted by price into four tiers, using excise tax categories to determine the cut-off for each tier. The analysis relies on a conditional logit model to identify the relationship between price and brand choice. Findings Overall, 38% of smokers switched price tiers from one wave to the next. A ¥1 change in the price of cigarettes alters the tier choice of 4–7% of smokers. Restricting the sample to those who chose each given tier at baseline, a ¥1 increase in price in a given tier would decrease the share choosing that tier by 4% for Tier 1 and 1–2% for Tiers 2 and 3. Conclusions China's large price spread across cigarette brands appears to alter the brand selection of some consumers, especially smokers of cheaper brands. Tobacco pricing and tax policy can influence consumers’ incentives to switch brands. In particular, whereas ad valorem taxes in a tiered pricing system like China's encourage trading down, specific excise taxes discourage the practice. PMID:23697645
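
    The conditional logit model named in the Methods can be illustrated with a short sketch of tier-choice probabilities as a function of price. The prices and price coefficient below are invented for demonstration, not estimates from the ITC China Survey.

        # Conditional-logit choice probabilities across four price tiers.
        # Prices and the price coefficient beta are made-up illustration values.
        import math

        def choice_probs(prices, beta=-0.4):
            """P(tier j) = exp(beta * p_j) / sum_k exp(beta * p_k)."""
            utilities = [math.exp(beta * p) for p in prices]
            total = sum(utilities)
            return [u / total for u in utilities]

        prices = [10.0, 6.0, 4.0, 2.5]   # yuan/pack, Tier 1 (premium) .. Tier 4
        base = choice_probs(prices)
        bumped = choice_probs([prices[0] + 1.0] + prices[1:])  # +1 yuan on Tier 1
        print([round(p, 3) for p in base])
        print([round(p, 3) for p in bumped])  # Tier 1 share falls, others rise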

  5. 40 CFR 89.207 - Credit calculation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Trading Provisions § 89.207 Credit calculation. (a) Requirements for calculating NO X credits from Tier 1...) × (Volume) × (AvgPR) × (UL) × (10−6) Where: Std = the applicable Tier 1 NOX nonroad engine emission standard...) of this section, to be applied to Tier 1 NOX credits to be banked or traded for determining...
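
    The truncated excerpt above outlines the credit formula. The sketch below is a hedged reconstruction, assuming the standard form of § 89.207 with FEL as the family emission limit; the variable roles are labeled as assumptions in the comments.

        # Sketch of the NOx credit formula as excerpted above (reconstruction):
        # credits (megagrams) = (Std - FEL) * Volume * AvgPR * UL * 1e-6,
        # where Std is the applicable Tier 1 NOx standard (g/kW-h), FEL the
        # family emission limit (g/kW-h), Volume the number of engines,
        # AvgPR the average power rating (kW), and UL the useful life (h).
        def nox_credits_mg(std_g_per_kwh, fel_g_per_kwh, volume,
                           avg_power_kw, useful_life_h):
            return ((std_g_per_kwh - fel_g_per_kwh)
                    * volume * avg_power_kw * useful_life_h * 1e-6)

        # Example: 1,000 engines certified 0.5 g/kW-h below the standard,
        # 75 kW average rating, 8,000-hour useful life.
        print(nox_credits_mg(9.2, 8.7, 1000, 75, 8000), "Mg")  # -> 300.0 Mg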

  6. 75 FR 72719 - Approval and Promulgation of Implementation Plans; Idaho

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-26

    ..., provisions relating to Tier 1 operating permits, facility emissions cap, standards of performance of certain... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 52 [EPA-R10-OAR-2008-0482; FRL-9231-1] Approval and... Requirements 7/1/2002 for Tier II Operating Permits. 401 Tier II Operating Permit...... 4/6/2005 Except 401.01...

  7. Matrigel Mattress: A Method for the Generation of Single Contracting Human-Induced Pluripotent Stem Cell-Derived Cardiomyocytes.

    PubMed

    Feaster, Tromondae K; Cadar, Adrian G; Wang, Lili; Williams, Charles H; Chun, Young Wook; Hempel, Jonathan E; Bloodworth, Nathaniel; Merryman, W David; Lim, Chee Chew; Wu, Joseph C; Knollmann, Björn C; Hong, Charles C

    2015-12-04

    The lack of measurable single-cell contractility of human-induced pluripotent stem cell-derived cardiac myocytes (hiPSC-CMs) currently limits the utility of hiPSC-CMs for evaluating contractile performance for both basic research and drug discovery. The objective was to develop a culture method that rapidly generates contracting single hiPSC-CMs and allows quantification of cell shortening with standard equipment used for studying adult CMs. Single hiPSC-CMs were cultured for 5 to 7 days on a 0.4- to 0.8-mm thick mattress of undiluted Matrigel (mattress hiPSC-CMs) and compared with hiPSC-CMs maintained on a control substrate (<0.1-mm thick 1:60 diluted Matrigel, control hiPSC-CMs). Compared with control hiPSC-CMs, mattress hiPSC-CMs had more rod-shaped morphology and significantly increased sarcomere length. Contractile parameters of mattress hiPSC-CMs measured with video-based edge detection were comparable with those of freshly isolated adult rabbit ventricular CMs. Morphological and contractile properties of mattress hiPSC-CMs were consistent across cryopreserved hiPSC-CMs generated independently at another institution. Unlike control hiPSC-CMs, mattress hiPSC-CMs display robust contractile responses to positive inotropic agents, such as myofilament calcium sensitizers. Mattress hiPSC-CMs exhibit molecular changes that include increased expression of the maturation marker cardiac troponin I and significantly increased action potential upstroke velocity because of a 2-fold increase in sodium current (INa). The Matrigel mattress method enables the rapid generation of robustly contracting hiPSC-CMs and enhances maturation. This new method allows quantification of contractile performance at the single-cell level, which should be valuable for disease modeling, drug discovery, and preclinical cardiotoxicity testing. © 2015 American Heart Association, Inc.

  8. 40 CFR 79.54 - Tier 3.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... emission control equipment. (3) A manufacturer or group may be required to conduct biological and/or... Requiring Tier 3 Testing. (1) Tier 3 testing shall be required of a manufacturer or group of manufacturers... products. Tier 3 testing may be conducted either on an individual basis or a group basis. If performed on a...

  9. 40 CFR 79.54 - Tier 3.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... emission control equipment. (3) A manufacturer or group may be required to conduct biological and/or... Requiring Tier 3 Testing. (1) Tier 3 testing shall be required of a manufacturer or group of manufacturers... products. Tier 3 testing may be conducted either on an individual basis or a group basis. If performed on a...

  10. 40 CFR 79.54 - Tier 3.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... emission control equipment. (3) A manufacturer or group may be required to conduct biological and/or... Requiring Tier 3 Testing. (1) Tier 3 testing shall be required of a manufacturer or group of manufacturers... products. Tier 3 testing may be conducted either on an individual basis or a group basis. If performed on a...

  11. 40 CFR 79.54 - Tier 3.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... emission control equipment. (3) A manufacturer or group may be required to conduct biological and/or... Requiring Tier 3 Testing. (1) Tier 3 testing shall be required of a manufacturer or group of manufacturers... products. Tier 3 testing may be conducted either on an individual basis or a group basis. If performed on a...

  12. 40 CFR 79.54 - Tier 3.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... emission control equipment. (3) A manufacturer or group may be required to conduct biological and/or... Requiring Tier 3 Testing. (1) Tier 3 testing shall be required of a manufacturer or group of manufacturers... products. Tier 3 testing may be conducted either on an individual basis or a group basis. If performed on a...

  13. A Systematic Review of the Empirical Support for Check-in Check-Out

    ERIC Educational Resources Information Center

    Wolfe, Katie; Pyle, Daniel; Charlton, Cade T.; Sabey, Christian V.; Lund, Emily M.; Ross, Scott W.

    2016-01-01

    Tier 2 interventions play an important role within the Positive Behavioral Interventions and Supports framework, bridging the gap between schoolwide Tier 1 interventions and individualized Tier 3 supports. Check-in Check-out (CICO) is a promising Tier 2 intervention for addressing mild problem behavior and potentially preventing the need for more…

  14. 75 FR 38566 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-02

    ...- Demutualization Trading Permits, Tier Appointment and Bandwidth Packets June 25, 2010. Pursuant to Section 19(b)(1...-demutualization Trading Permits, tier appointment and bandwidth packets. The text of the proposed rule change is..., tier appointment and bandwidth packets. These post-demutualization Trading Permits, tier appointment...

  15. SWPBIS Tiered Fidelity Inventory. Version 2.1

    ERIC Educational Resources Information Center

    Algozzine, B.; Barrett, S.; Eber, L.; George, H.; Horner, R.; Lewis, T.; Putnam, B.; Swain-Bradway, J.; McIntosh, K.; Sugai, G.

    2014-01-01

    The purpose of the SWPBIS Tiered Fidelity Inventory (TFI) is to provide a valid, reliable, and efficient measure of the extent to which school personnel are applying the core features of school-wide positive behavioral interventions and supports (SWPBIS). The TFI is divided into three sections (Tier I: Universal SWPBIS Features; Tier II: Targeted…

  16. Identification of mitochondrial DNA sequence variation and development of single nucleotide polymorphic markers for CMS-D8 in cotton.

    PubMed

    Suzuki, Hideaki; Yu, Jiwen; Wang, Fei; Zhang, Jinfa

    2013-06-01

    Cytoplasmic male sterility (CMS), which is a maternally inherited trait and controlled by novel chimeric genes in the mitochondrial genome, plays a pivotal role in the production of hybrid seed. In cotton, no PCR-based marker has been developed to discriminate CMS-D8 (from Gossypium trilobum) from its normal Upland cotton (AD1, Gossypium hirsutum) cytoplasm. The objective of the current study was to develop PCR-based single nucleotide polymorphic (SNP) markers from mitochondrial genes for the CMS-D8 cytoplasm. DNA sequence variation in mitochondrial genes involved in the oxidative phosphorylation chain, including ATP synthase subunits 1, 4, 6, 8 and 9 and cytochrome c oxidase subunits 1, 2 and 3, was identified by comparing CMS-D8 and its isogenic maintainer and restorer lines on the same nuclear genetic background. Allele-specific PCR (AS-PCR) was utilized for SNP typing by incorporating artificial mismatched nucleotides into the third or fourth base from the 3' terminus in both the specific and nonspecific primers. The results indicated that this method of modifying allele-specific primers was successful in obtaining eight SNP markers out of eight SNPs, using eight primer pairs, to discriminate two alleles between the AD1 and CMS-D8 cytoplasms. Two of the SNPs, for atp1 and cox1, could also be used in combination to discriminate between the CMS-D8 and CMS-D2 cytoplasms. Additionally, a PCR-based marker from a nine-nucleotide insertion-deletion (InDel) sequence (AATTGTTTT) at the 59-67 bp positions from the start codon of atp6, which is present in the CMS and restorer lines with the D8 cytoplasm but absent in the maintainer line with the AD1 cytoplasm, was also developed. A SNP marker for two nucleotide substitutions (AA in the AD1 cytoplasm to CT in the CMS-D8 cytoplasm) in the intron (1,506 bp) of the cox2 gene was also developed. These PCR-based SNP markers should be useful in discriminating CMS-D8 and AD1 cytoplasms, or those with the CMS-D2 cytoplasm, as a rapid, simple, inexpensive, and reliable genotyping tool to assist hybrid cotton breeding.
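
    The primer-design trick described here, placing the SNP allele at the 3' end and adding an artificial mismatch at the third base from the 3' terminus, can be sketched as a small string manipulation. The sequence and the mismatch-substitution table are invented for illustration; no real CMS-D8 sequence is used.

        # Sketch of the allele-specific primer trick: the 3' base carries the
        # SNP allele, and a deliberate mismatch at the third base from the 3'
        # end destabilizes extension on the wrong allele.
        SWAP = {"A": "C", "C": "A", "G": "T", "T": "G"}  # arbitrary mismatch choice

        def allele_specific_primer(template_region, allele, mismatch_pos_from_3p=3):
            """Build a primer ending on `allele`, with one deliberate mismatch."""
            primer = template_region[:-1] + allele     # last base is the SNP site
            i = len(primer) - mismatch_pos_from_3p     # third base from 3' end
            return primer[:i] + SWAP[primer[i]] + primer[i + 1:]

        region = "ACGGATTCAGCATTGAG" + "N"              # N marks the SNP site
        print(allele_specific_primer(region, "A"))      # primer for the A allele
        print(allele_specific_primer(region, "C"))      # primer for the C allele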

  17. Single-Tier Testing with the C6 Peptide ELISA Kit Compared with Two-Tier Testing for Lyme Disease

    PubMed Central

    Wormser, Gary P.; Schriefer, Martin; Aguero-Rosenfeld, Maria E.; Levin, Andrew; Steere, Allen C.; Nadelman, Robert B.; Nowakowski, John; Marques, Adriana; Johnson, Barbara J. B.; Dumler, J. Stephen

    2014-01-01

    Background The two-tier serologic testing protocol for Lyme disease has a number of shortcomings including low sensitivity in early disease; increased cost, time and labor; and subjectivity in the interpretation of immunoblots. Methods The diagnostic accuracy of a single-tier commercial C6 ELISA kit was compared with two-tier testing. Results The C6 ELISA was significantly more sensitive than two-tier testing with sensitivities of 66.5% (95% C.I.: 61.7-71.1) and 35.2% (95% C.I.: 30.6-40.1), respectively (p<0.001) in 403 sera from patients with erythema migrans. The C6 ELISA had sensitivity statistically comparable to two-tier testing in sera from Lyme disease patients with early neurological manifestations (88.6% vs. 77.3%, p=0.13) or arthritis (98.3% vs. 95.6%, p=0.38). The specificities of C6 ELISA and two-tier testing in over 2200 blood donors, patients with other conditions, and Lyme disease vaccine recipients were found to be 98.9% and 99.5%, respectively (p<0.05, 95% C.I. surrounding the 0.6 percentage point difference of 0.04 to 1.15). Conclusions Using a reference standard of two-tier testing, the C6 ELISA as a single-step serodiagnostic test provided increased sensitivity in early Lyme disease with comparable sensitivity in later manifestations of Lyme disease. The C6 ELISA had slightly decreased specificity. Future studies should evaluate the performance of the C6 ELISA compared with two-tier testing in routine clinical practice. PMID:23062467

  18. 12 CFR 208.73 - What additional provisions are applicable to state member banks with financial subsidiaries?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... subsidiaries from both the bank's Tier 1 capital and Tier 2 capital; and (ii) Deduct the entire amount of the... deducted from the bank's Tier 1 capital. (b) Financial statement disclosure of capital deduction. Any... (including the well capitalized standard of § 208.71(a)(1)): (1) The bank must not consolidate the assets and...

  19. Influence of off-stoichiometry on magnetoresistance characteristics of Co2MnSi/Ag-based current-perpendicular-to-plane spin valves

    NASA Astrophysics Data System (ADS)

    Inoue, Masaki; Hu, Bing; Moges, Kidist; Inubushi, Kazuumi; Nakada, Katsuyuki; Yamamoto, Masafumi; Uemura, Tetsuya

    2017-08-01

    The influence of off-stoichiometry of Co2MnSi (CMS) spin sources on giant magnetoresistance characteristics was investigated for CMS/Ag-based current-perpendicular-to-plane spin valves prepared with various Mn compositions α in Co2MnαSi0.82 electrodes. The magnetoresistance ratio of the prepared CMS/Co50Fe50 (CoFe) (1.1 nm)/Ag/CoFe (1.1)/CMS spin valves systematically increased with α from 11.4% for Mn-deficient α = 0.62 to 20.7% for Mn-rich α = 1.45 at 290 K. This result suggests that increasing α from a Mn-deficient to Mn-rich value increases the spin polarization by suppressing CoMn antisites harmful to the half-metallicity. Thus, our results demonstrate that appropriately controlling the film composition toward a Mn-rich one is highly effective for enhancing the half-metallicity of CMS in CMS-based spin valves, as it is in CMS-based magnetic tunnel junctions.

  20. Application of Computational Toxicological Approaches in Supporting Human Health Risk Assessment, Project Summary

    EPA Science Inventory

    Summary

    This project has three parts. The first part focuses on developing a tiered strategy and applying computational toxicological approaches to support human health risk assessment by deriving a surrogate point-of-departure (e.g., NOAEL, LOAEL, etc.) using a test c...

  1. Prioritizing pesticide compounds for analytical methods development

    USGS Publications Warehouse

    Norman, Julia E.; Kuivila, Kathryn; Nowell, Lisa H.

    2012-01-01

    The U.S. Geological Survey (USGS) has a periodic need to re-evaluate pesticide compounds in terms of priorities for inclusion in monitoring and studies and, thus, must also assess the current analytical capabilities for pesticide detection. To meet this need, a strategy has been developed to prioritize pesticides and degradates for analytical methods development. Screening procedures were developed to separately prioritize pesticide compounds in water and sediment. The procedures evaluate pesticide compounds in existing USGS analytical methods for water and sediment and compounds for which recent agricultural-use information was available. Measured occurrence (detection frequency and concentrations) in water and sediment, predicted concentrations in water and predicted likelihood of occurrence in sediment, potential toxicity to aquatic life or humans, and priorities of other agencies or organizations, regulatory or otherwise, were considered. Several existing strategies for prioritizing chemicals for various purposes were reviewed, including those that identify and prioritize persistent, bioaccumulative, and toxic compounds, and those that determine candidates for future regulation of drinking-water contaminants. The systematic procedures developed and used in this study rely on concepts common to many previously established strategies. The evaluation of pesticide compounds resulted in the classification of compounds into three groups: Tier 1 for high priority compounds, Tier 2 for moderate priority compounds, and Tier 3 for low priority compounds. For water, a total of 247 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods for monitoring and studies. Of these, about three-quarters are included in some USGS analytical method; however, many of these compounds are included on research methods that are expensive and for which there are few data on environmental samples. The remaining quarter of Tier 1 compounds are high priority as new analytes. The objective for analytical methods development is to design an integrated analytical strategy that includes as many of the Tier 1 pesticide compounds as possible in a relatively few, cost-effective methods. More than 60 percent of the Tier 1 compounds are high priority because they are anticipated to be present at concentrations approaching levels that could be of concern to human health or aquatic life in surface water or groundwater. An additional 17 percent of Tier 1 compounds were frequently detected in monitoring studies, but either were not measured at levels potentially relevant to humans or aquatic organisms, or do not have benchmarks available with which to compare concentrations. The remaining 21 percent are pesticide degradates that were included because their parent pesticides were in Tier 1. Tier 1 pesticide compounds for water span all major pesticide use groups and a diverse range of chemical classes, with herbicides and their degradates composing half of compounds. Many of the high priority pesticide compounds also are in several national regulatory programs for water, including those that are regulated in drinking water by the U.S. Environmental Protection Agency under the Safe Drinking Water Act and those that are on the latest Contaminant Candidate List. For sediment, a total of 175 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods available for monitoring and studies. 
More than 60 percent of these compounds are included in some USGS analytical method; however, some are spread across several research methods that are expensive to perform, and monitoring data are not extensive for many compounds. The remaining Tier 1 compounds for sediment are high priority as new analytes. The objective for analytical methods development for sediment is to enhance an existing analytical method that currently includes nearly half of the pesticide compounds in Tier 1 by adding as many additional Tier 1 compounds as are analytically compatible. About 35 percent of the Tier 1 compounds for sediment are high priority on the basis of measured occurrence. A total of 74 compounds, or 42 percent, are high priority on the basis of predicted likelihood of occurrence according to physical-chemical properties, and either have potential toxicity to aquatic life, high pesticide usage, or both. The remaining 22 percent of Tier 1 pesticide compounds were either degradates of Tier 1 parent compounds or included for other reasons. As with water, the Tier 1 pesticide compounds for sediment are distributed across the major pesticide-use groups; insecticides and their degradates are the largest fraction, making up 45 percent of Tier 1. In contrast to water, organochlorines, at 17 percent, are the largest chemical class for Tier 1 in sediment, which is to be expected because there is continued widespread detection in sediments of persistent organochlorine pesticides and their degradates at concentrations high enough for potential effects on aquatic life. Compared to water, there are fewer available benchmarks with which to compare contaminant concentrations in sediment, but a total of 19 Tier 1 compounds have at least one sediment benchmark or screening value for aquatic organisms. Of the 175 compounds in Tier 1, 77 percent have high aquatic-life toxicity, as defined for this process. This evaluation of pesticides and degradates resulted in two lists of compounds that are priorities for USGS analytical methods development, one for water and one for sediment. These lists will be used as the basis for redesigning and enhancing USGS analytical capabilities for pesticides in order to capture as many high-priority pesticide compounds as possible using an economically feasible approach.
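
    The tiering logic described in this record can be caricatured as a small rule-based classifier. The criteria below paraphrase the prose; the field names and combination rules are illustrative assumptions, not the USGS procedure's actual scoring rules.

        # Rule-of-thumb sketch of the three-tier screening logic described above.
        def classify(compound):
            high_occurrence = compound.get("frequently_detected", False)
            near_benchmark = compound.get("concentration_near_benchmark", False)
            predicted_likely = compound.get("predicted_occurrence", False)
            toxic = compound.get("high_aquatic_or_human_toxicity", False)
            parent_in_tier1 = compound.get("parent_is_tier1", False)
            if (near_benchmark or high_occurrence
                    or (predicted_likely and toxic) or parent_in_tier1):
                return "Tier 1"   # high priority for methods development
            if predicted_likely or toxic:
                return "Tier 2"   # moderate priority
            return "Tier 3"       # low priority

        print(classify({"frequently_detected": True}))   # Tier 1
        print(classify({"predicted_occurrence": True}))  # Tier 2
        print(classify({}))                              # Tier 3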

  2. Exploiting multicore compute resources in the CMS experiment

    NASA Astrophysics Data System (ADS)

    Ramírez, J. E.; Pérez-Calero Yzquierdo, A.; Hernández, J. M.; CMS Collaboration

    2016-10-01

    CMS has developed a strategy to efficiently exploit the multicore architecture of the compute resources accessible to the experiment. A coherent use of the multiple cores available in a compute node yields substantial gains in terms of resource utilization. The implemented approach makes use of the multithreading support of the event processing framework and the multicore scheduling capabilities of the resource provisioning system. Multicore slots are acquired and provisioned by means of multicore pilot agents which internally schedule and execute single and multicore payloads. Multicore scheduling and multithreaded processing are currently used in production for online event selection and prompt data reconstruction. More workflows are being adapted to run in multicore mode. This paper presents a review of the experience gained in the deployment and operation of the multicore scheduling and processing system, the current status and future plans.
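
    A toy model of the pilot-based scheduling described above: a multicore slot is acquired once, and single-core and multicore payloads are packed into it. The greedy policy and payload mix are illustrative assumptions, not the CMS implementation.

        # Toy model of a multicore pilot: an 8-core slot is acquired once, then
        # single-core and multicore payloads are packed into it greedily.
        def schedule(payloads, slot_cores=8):
            free = slot_cores
            running, queued = [], []
            for name, cores in sorted(payloads, key=lambda p: -p[1]):  # big first
                if cores <= free:
                    running.append((name, cores))
                    free -= cores
                else:
                    queued.append((name, cores))
            return running, queued, free

        payloads = [("reco-multithread", 4), ("sim-1", 1), ("sim-2", 1),
                    ("skim", 2), ("analysis", 1)]
        running, queued, idle = schedule(payloads)
        print("running:", running)  # packs 4+2+1+1 onto the 8 cores
        print("queued: ", queued)   # ("analysis", 1) waits for a free core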

  3. Model-free quantification of dynamic PET data using nonparametric deconvolution

    PubMed Central

    Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R

    2015-01-01

    Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with that of traditional CMs outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CMs outcomes. PMID:25873427
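
    The deconvolution step can be sketched in a few lines of NumPy: build the convolution matrix from the input function, then invert it with a truncated SVD to recover the IRF. The sampling, noise level, and truncation threshold are illustrative choices on synthetic data, not the paper's regularization scheme.

        # Truncated-SVD deconvolution of a tissue curve into an impulse
        # response function (IRF) on synthetic, uniformly sampled data.
        import numpy as np

        n, dt = 60, 0.5                       # time points, minutes per sample
        t = np.arange(n) * dt
        inp = t * np.exp(-t)                  # synthetic input function
        irf_true = 0.8 * np.exp(-0.15 * t)    # synthetic tissue IRF
        A = dt * np.array([[inp[i - j] if i >= j else 0.0
                            for j in range(n)] for i in range(n)])
        tac = A @ irf_true                    # tissue activity curve
        tac += np.random.default_rng(0).normal(0, 1e-3, n)  # measurement noise

        U, s, Vt = np.linalg.svd(A)
        k = np.sum(s > 0.05 * s[0])           # keep components above 5% of s_max
        irf_est = Vt[:k].T @ ((U[:, :k].T @ tac) / s[:k])
        vt_est = dt * irf_est.sum()           # functional ~ integral of the IRF
        print(k, round(vt_est, 3), round(dt * irf_true.sum(), 3))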

  4. 77 FR 18109 - Assessments, Large Bank Pricing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-27

    ... higher-risk assets to Tier 1 capital and reserves.\\7\\ Higher-risk assets are defined as the sum of... defined as the higher of: (a) The higher-risk assets to Tier 1 capital and reserves score or (b) the... higher of: (a) The higher-risk assets to Tier 1 capital and reserves score or (b) the largest or top 20...

  5. 12 CFR 34.81 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Real Estate Owned § 34.81 Definitions. (a) Capital and surplus means: (1) A bank's Tier 1 and Tier 2... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Definitions. 34.81 Section 34.81 Banks and... (2) The balance of a bank's allowance for loan and lease losses not included in the bank's Tier 2...

  6. 78 FR 35903 - Information Collection Request Submitted to OMB for Review and Approval; Comment Request; ICR...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-14

    ...; Tier 1 Screening of Certain Chemicals Under the Endocrine Disruptor Screening Program AGENCY... Chemicals; Tier 1 Screening of Certain Chemicals Under the Endocrine Disruptor Screening Program (EDSP... effects. The EDSP consists of a two-tiered approach to screen chemicals for potential endocrine disrupting...

  7. 77 FR 33547 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare and Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-06

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0015] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare and Medicaid Services (CMS))--Match Number 1094 AGENCY: Social Security Administration (SSA). ACTION: Notice of a new computer matching program that will expire...

  8. Low Resting Membrane Potential and Low Inward Rectifier Potassium Currents Are Not Inherent Features of hiPSC-Derived Cardiomyocytes.

    PubMed

    Horváth, András; Lemoine, Marc D; Löser, Alexandra; Mannhardt, Ingra; Flenner, Frederik; Uzun, Ahmet Umur; Neuber, Christiane; Breckwoldt, Kaja; Hansen, Arne; Girdauskas, Evaldas; Reichenspurner, Hermann; Willems, Stephan; Jost, Norbert; Wettwer, Erich; Eschenhagen, Thomas; Christ, Torsten

    2018-03-13

    Human induced pluripotent stem cell (hiPSC) cardiomyocytes (CMs) show less negative resting membrane potential (RMP), which is attributed to small inward rectifier currents (IK1). Here, IK1 was measured in hiPSC-CMs (proprietary and commercial cell line) cultured as monolayer (ML) or 3D engineered heart tissue (EHT) and, for direct comparison, in CMs from human right atrial (RA) and left ventricular (LV) tissue. RMP was measured in isolated cells and intact tissues. IK1 density in ML- and EHT-CMs from the proprietary line was similar to LV and RA, respectively. IK1 density in EHT-CMs from the commercial line was 2-fold smaller than in the proprietary line. RMP in EHT of both lines was similar to RA and LV. Repolarization fraction and IK,ACh response discriminated best between RA and LV and indicated predominantly ventricular phenotype in hiPSC-CMs/EHT. The data indicate that IK1 is not necessarily low in hiPSC-CMs, and technical issues may underlie low RMP in hiPSC-CMs. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  9. Human induced pluripotent stem cell‐derived versus adult cardiomyocytes: an in silico electrophysiological study on effects of ionic current block

    PubMed Central

    Paci, M; Hyttinen, J; Rodriguez, B

    2015-01-01

    Background and Purpose Two new technologies are likely to revolutionize cardiac safety and drug development: in vitro experiments on human‐induced pluripotent stem cell‐derived cardiomyocytes (hiPSC‐CMs) and in silico human adult ventricular cardiomyocyte (hAdultV‐CM) models. Their combination was recently proposed as a potential replacement for the present hERG‐based QT study for pharmacological safety assessments. Here, we systematically compared in silico the effects of selective ionic current block on hiPSC‐CM and hAdultV‐CM action potentials (APs), to identify similarities/differences and to illustrate the potential of computational models as supportive tools for evaluating new in vitro technologies. Experimental Approach In silico AP models of ventricular‐like and atrial‐like hiPSC‐CMs and hAdultV‐CM were used to simulate the main effects of four degrees of block of the main cardiac transmembrane currents. Key Results Qualitatively, hiPSC‐CM and hAdultV‐CM APs showed similar responses to current block, consistent with results from experiments. However, quantitatively, hiPSC‐CMs were more sensitive to block of (i) L‐type Ca2+ currents due to the overexpression of the Na+/Ca2+ exchanger (leading to shorter APs) and (ii) the inward rectifier K+ current due to reduced repolarization reserve (inducing diastolic potential depolarization and repolarization failure). Conclusions and Implications In silico hiPSC‐CMs and hAdultV‐CMs exhibit a similar response to selective current blocks. However, overall hiPSC‐CMs show greater sensitivity to block, which may facilitate in vitro identification of drug‐induced effects. Extrapolation of drug effects from hiPSC‐CM to hAdultV‐CM and pro‐arrhythmic risk assessment can be facilitated by in silico predictions using biophysically‐based computational models. PMID:26276951

  10. DSCOVR_EPIC_L2_AER_01

    Atmospheric Science Data Center

    2018-04-23

    DSCOVR_EPIC_L2_AER_01 The Aerosol UV product provides aerosol and UV products in three tiers. Tier 1 products include Absorbing Aerosol Index (AAI) and above-cloud-aerosol optical depth (ACAOD). Tier 2 ...

  11. Progress in landslide susceptibility mapping over Europe using Tier-based approaches

    NASA Astrophysics Data System (ADS)

    Günther, Andreas; Hervás, Javier; Reichenbach, Paola; Malet, Jean-Philippe

    2010-05-01

    The European Thematic Strategy for Soil Protection aims, among other objectives, to ensure a sustainable use of soil. The legal instrument of the strategy, the proposed Framework Directive, suggests identifying priority areas of several soil threats including landslides using a coherent and compatible approach based on the use of common thematic data. In a first stage, this can be achieved through landslide susceptibility mapping using geographically nested, multi-step tiered approaches, where areas identified as of high susceptibility by a first, synoptic-scale Tier ("Tier 1") can then be further assessed and mapped at larger scale by successive Tiers. In order to identify areas prone to landslides at European scale ("Tier 1"), a number of thematic terrain and environmental data sets already available for the whole of Europe can be used as input for a continental scale susceptibility model. However, since no coherent landslide inventory data is available at the moment over the whole continent, qualitative heuristic zonation approaches are proposed. For "Tier 1" a preliminary, simplified model has been developed. It consists of an equally weighting combination of a reduced, continent-wide common dataset of landslide conditioning factors including soil parent material, slope angle and land cover, to derive a landslide susceptibility index using raster mapping units consisting of 1 x 1 km pixels. A preliminary European-wide susceptibility map has thus been produced at 1:1 Million scale, since this is compatible with that of the datasets used. The map has been validated by means of a ratio of effectiveness using samples from landslide inventories in Italy, Austria, Hungary and United Kingdom. Although not differentiated for specific geomorphological environments or specific landslide types, the experimental model reveals a relatively good performance in many European regions at a 1:1 Million scale. An additional "Tier 1" susceptibility map at the same scale and using the same or equivalent thematic data as for the one above has been generated for six French departments using a heuristic, weighting-based multi-criteria evaluation model applied also to raster-cell mapping units. In this experiment, thematic data class weights have been differentiated for two stratification areas, namely mountains and plains, and four main landslide types. Separate susceptibility maps for each landslide type and a combined map for all types have been produced. Results have been validated using BRGM's BDMvT landslide inventory. Unlike "Tier 1", "Tier 2" assessment requires landslide inventory data and additional thematic data on conditioning factors which may not be available for all European countries. For the "Tier 2", a nation-wide quantitative landslide susceptibility assessment has been performed for Italy by applying a statistical model. In this assessment, multivariate analysis was applied using bedrock, soil and climate data together with a number of derivatives from SRTM90 DEM. In addition, separate datasets from a historical landslide inventory were used for model training and validation respectively. The mapping units selected were based on administrative boundaries (municipalities). The performance of this nation-wide, quantitative susceptibility assessment has been evaluated using multi-temporal landslide inventory data. 
Finally, model limitations for "Tier 1" are discussed, and recommendations for enhanced Tier 1 and Tier 2 models including additional thematic data for conditioning factors are drawn. This project is part of the collaborative research carried out within the European Landslide Expert Group coordinated by JRC in support to the EU Soil Thematic Strategy. It is also supported by the International Programme on Landslides of the International Consortium on Landslides.
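
    The equally weighted "Tier 1" combination lends itself to a one-line raster operation. In the sketch below the per-factor class scores are invented; only the equal weighting of parent material, slope, and land cover mirrors the text.

        # Equal-weight heuristic susceptibility index over raster cells, in the
        # spirit of the "Tier 1" model above (1 x 1 km pixels in the original).
        import numpy as np

        rng = np.random.default_rng(42)
        shape = (4, 5)                               # tiny raster for demo
        parent_material = rng.uniform(0, 1, shape)   # class score per cell
        slope = rng.uniform(0, 1, shape)
        land_cover = rng.uniform(0, 1, shape)

        index = (parent_material + slope + land_cover) / 3.0   # equal weights
        high = index > 0.66                          # flag most susceptible cells
        print(np.round(index, 2))
        print("high-susceptibility cells:", int(high.sum()))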

  12. 12 CFR 347.111 - Underwriting and dealing limits applicable to foreign organizations held by insured state...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the lesser of $60 million or 25 percent of the bank's Tier 1 capital, except as otherwise provided in..., with at least 50 percent of the deduction being taken from Tier 1 capital, with the bank remaining well...: (1) May not exceed the lesser of $30 million or 5 percent of the bank's Tier 1 capital, subject to...

  13. How to Spot Congenital Myasthenic Syndromes Resembling the Lambert-Eaton Myasthenic Syndrome? A Brief Review of Clinical, Electrophysiological, and Genetics Features.

    PubMed

    Lorenzoni, Paulo José; Scola, Rosana Herminia; Kay, Claudia Suemi Kamoi; Werneck, Lineu Cesar; Horvath, Rita; Lochmüller, Hanns

    2018-06-01

    Congenital myasthenic syndromes (CMS) are heterogeneous genetic diseases in which neuromuscular transmission is compromised. CMS resembling the Lambert-Eaton myasthenic syndrome (CMS-LEMS) are emerging as a rare group of distinct presynaptic CMS that share the same electrophysiological features. They show low compound muscle action potential amplitudes that increment after brief exercise (facilitation) or high-frequency repetitive nerve stimulation. Although clinical signs similar to LEMS can be present, the main hallmark is the electrophysiological findings, which are identical to those of autoimmune LEMS. CMS-LEMS occur due to deficits in acetylcholine vesicle release caused by dysfunction of different components in this pathway. To date, the genes that have been associated with CMS-LEMS are AGRN, SYT2, MUNC13-1, VAMP1, and LAMA5. Clinicians should keep in mind these newest subtypes of CMS-LEMS to achieve the correct diagnosis and therapy. We believe that CMS-LEMS must be included as an important diagnostic clue in the diagnostic algorithms for genetic investigation of CMS. We briefly review the main features of CMS-LEMS.

  14. Evaluation of Flagging Criteria of United States Kidney Transplant Center Performance: How to Best Define Outliers?

    PubMed

    Schold, Jesse D; Miller, Charles M; Henry, Mitchell L; Buccini, Laura D; Flechner, Stuart M; Goldfarb, David A; Poggio, Emilio D; Andreoni, Kenneth A

    2017-06-01

    Scientific Registry of Transplant Recipients report cards of US organ transplant center performance are publicly available and used for quality oversight. Low center performance (LP) evaluations are associated with changes in practice including reduced transplant rates and increased waitlist removals. In 2014, the Scientific Registry of Transplant Recipients implemented new Bayesian methodology to evaluate performance, which was not adopted by the Centers for Medicare and Medicaid Services (CMS). In May 2016, CMS altered their performance criteria, reducing the likelihood of LP evaluations. Our aims were to evaluate incidence, survival rates, and volume of LP centers with Bayesian, historical (old-CMS), and new-CMS criteria using 6 consecutive program-specific reports (PSR), January 2013 to July 2015, among adult kidney transplant centers. Bayesian, old-CMS, and new-CMS criteria identified 13.4%, 8.3%, and 6.1% LP PSRs, respectively. Over the 3-year period, 31.9% (Bayesian), 23.4% (old-CMS), and 19.8% (new-CMS) of centers had 1 or more LP evaluation. For small centers (<83 transplants/PSR), there were 4-fold additional LP evaluations (52 vs 13 PSRs) for 1-year mortality with Bayesian versus new-CMS criteria. For large centers (>183 transplants/PSR), there were 3-fold additional LP evaluations for 1-year mortality with Bayesian versus new-CMS criteria, with median differences in observed and expected patient survival of -1.6% and -2.2%, respectively. A significant proportion of kidney transplant centers are identified as low performing, with relatively small survival differences compared with expected values. Bayesian criteria have significantly higher flagging rates, and new-CMS criteria modestly reduce flagging. Critical appraisal of performance criteria is needed to assess whether quality oversight is meeting intended goals and whether further modifications could reduce risk aversion, more efficiently allocate resources, and increase transplant opportunities.

  15. Carboxymethyl starch mucoadhesive microspheres as gastroretentive dosage form.

    PubMed

    Lemieux, Marc; Gosselin, Patrick; Mateescu, Mircea Alexandru

    2015-12-30

    Carboxymethyl starch microspheres (CMS-MS) were produced from carboxymethyl starch powder (CMS-P) with a degree of substitution (DS) from 0.1 to 1.5 in order to investigate the influence of DS on physicochemical, drug release, and mucoadhesion properties as well as interactions with gastrointestinal tract (GIT) epithelial barrier models. Placebo and furosemide-loaded CMS-MS were obtained by emulsion-crosslinking with sodium trimetaphosphate (STMP). DS had an impact on increasing equilibrium water uptake and modulating drug release properties of the CMS-MS according to the surrounding pH. The transepithelial electrical resistance (TEER) of NCI-N87 gastric cell monolayers was not influenced in the presence of CMS-MS, whereas that of Caco-2 intestinal cell monolayers decreased with increasing DS but recovered initial values at about 15 h post-treatment. CMS-MS with increasing DS also enhanced furosemide permeability across both NCI-N87 and Caco-2 monolayers at pH gradients from 3.0 to 7.4. Mucoadhesion of CMS-MS on gastric mucosa (acidic condition) increased with the DS up to 55% for a DS of 1.0 but decreased on neutral intestinal mucosa to less than 10% with a DS of 0.1. The drug release, permeability enhancement, and mucoadhesive properties suggest CMS-MS with DS between 0.6 and 1.0 as a suitable excipient for gastroretentive oral delivery dosage forms. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Professional Development to Differentiate Kindergarten Tier 1 Instruction: Can Already Effective Teachers Improve Student Outcomes by Differentiating Tier 1 Instruction?

    ERIC Educational Resources Information Center

    Al Otaiba, Stephanie; Folsom, Jessica S.; Wanzek, Jeanne; Greulich, Luana; Waesche, Jessica; Schatschneider, Christopher; Connor, Carol M.

    2016-01-01

    Two primary purposes guided this quasi-experimental within-teacher study: (a) to examine changes from baseline through 2 years of professional development (Individualizing Student Instruction) in kindergarten teachers' differentiation of Tier 1 literacy instruction; and (b) to examine changes in reading and vocabulary of 3 cohorts of the teachers'…

  17. 26 CFR 1.960-1 - Foreign tax credit with respect to taxes paid on earnings and profits of controlled foreign...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...-, second-, or third-tier corporation's earnings and profits. Section 1.960-2 prescribes rules for applying section 902 to dividends paid by a third-, second-, or first-tier corporation from earnings and profits...) Second-tier corporation. In the case of amounts included in the gross income of the taxpayer under...

  18. 12 CFR Appendix D to Part 225 - Capital Adequacy Guidelines for Bank Holding Companies: Tier 1 Leverage Measure

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Consolidated Financial Statements (FR Y-9C Report), less goodwill; amounts of mortgage servicing assets... part. b. The tier 1 leverage guidelines apply on a consolidated basis to any bank holding company with consolidated assets of $500 million or more. The tier 1 leverage guidelines also apply on a consolidated basis...

  19. 12 CFR 565.4 - Capital measures and capital category definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...-based capital ratio; (2) The Tier 1 risk-based capital ratio; and (3) The leverage ratio. (b) Capital...; and (ii) Has a Tier 1 risk-based capital ratio of 6.0 percent or greater; and (iii) Has a leverage... total risk-based capital ratio of 8.0 percent or greater; and (ii) Has a Tier 1 risk-based capital ratio...

  20. 12 CFR 325.4 - Inadequate capital as an unsafe or unsound practice or condition.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... is in compliance with a plan approved by the FDIC to increase its Tier 1 leverage capital ratio to... structure. (c) Unsafe or unsound condition. Any insured depository institution with a ratio of Tier 1... a ratio of Tier 1 capital to total assets of less than two percent which has entered into and is in...

  1. Single-tier testing with the C6 peptide ELISA kit compared with two-tier testing for Lyme disease.

    PubMed

    Wormser, Gary P; Schriefer, Martin; Aguero-Rosenfeld, Maria E; Levin, Andrew; Steere, Allen C; Nadelman, Robert B; Nowakowski, John; Marques, Adriana; Johnson, Barbara J B; Dumler, J Stephen

    2013-01-01

    The 2-tier serologic testing protocol for Lyme disease has a number of shortcomings, including low sensitivity in early disease; increased cost, time, and labor; and subjectivity in the interpretation of immunoblots. In this study, the diagnostic accuracy of a single-tier commercial C6 ELISA kit was compared with 2-tier testing. The C6 ELISA was significantly more sensitive than 2-tier testing, with sensitivities of 66.5% (95% confidence interval [CI] 61.7-71.1) and 35.2% (95% CI 30.6-40.1), respectively (P < 0.001), in 403 sera from patients with erythema migrans. The C6 ELISA had sensitivity statistically comparable to 2-tier testing in sera from Lyme disease patients with early neurologic manifestations (88.6% versus 77.3%, P = 0.13) or arthritis (98.3% versus 95.6%, P = 0.38). The specificities of the C6 ELISA and 2-tier testing in over 2200 blood donors, patients with other conditions, and Lyme disease vaccine recipients were 98.9% and 99.5%, respectively (P < 0.05; 95% CI surrounding the 0.6 percentage point difference, 0.04 to 1.15). In conclusion, using a reference standard of 2-tier testing, the C6 ELISA as a single-step serodiagnostic test provided increased sensitivity in early Lyme disease and comparable sensitivity in later manifestations, with slightly decreased specificity. Future studies should evaluate the performance of the C6 ELISA compared with 2-tier testing in routine clinical practice. Copyright © 2013 Elsevier Inc. All rights reserved.
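
    As a concrete illustration of the two protocols being compared, the following minimal Python sketch encodes the decision logic: the standard 2-tier algorithm calls a sample positive only when a first-tier EIA reflexes to a confirmatory immunoblot, whereas the single-tier C6 ELISA is read directly against its kit cutoff. The cutoff value used here is an assumption for illustration, not taken from the study.

    def two_tier(eia_positive_or_equivocal, immunoblot_positive):
        # Positive only if the first tier reflexes AND the blot confirms.
        return eia_positive_or_equivocal and immunoblot_positive

    def single_tier_c6(c6_index, cutoff=1.1):
        # Positive if the C6 index meets the cutoff (cutoff value assumed).
        return c6_index >= cutoff

    print(two_tier(True, False))   # False: immunoblot fails to confirm
    print(single_tier_c6(1.4))     # True: single-step positive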

  2. Modeling individual exposures to ambient PM2.5 in the diabetes and the environment panel study (DEPS).

    PubMed

    Breen, Michael; Xu, Yadong; Schneider, Alexandra; Williams, Ronald; Devlin, Robert

    2018-06-01

    Air pollution epidemiology studies of ambient fine particulate matter (PM2.5) often use outdoor concentrations as exposure surrogates, which can induce exposure error. The goal of this study was to improve ambient PM2.5 exposure assessments for a repeated-measurements study with 22 diabetic individuals in central North Carolina, called the Diabetes and Environment Panel Study (DEPS), by applying the Exposure Model for Individuals (EMI), which predicts five tiers of individual-level exposure metrics for ambient PM2.5 using outdoor concentrations, questionnaires, weather, and time-location information. Using EMI, we linked a mechanistic air exchange rate (AER) model to a mass-balance PM2.5 infiltration model to predict residential AER (Tier 1), infiltration factors (Finf_home, Tier 2), indoor concentrations (Cin, Tier 3), personal exposure factors (Fpex, Tier 4), and personal exposures (E, Tier 5) for ambient PM2.5. We applied EMI to predict daily PM2.5 exposure metrics (Tiers 1-5) for 174 participant-days across the 13 months of DEPS. Individual model predictions were compared to a subset of daily measurements of Fpex and E (Tiers 4-5) from the DEPS participants. Model-predicted Fpex and E corresponded well to daily measurements, with median differences of 14% and 23%, respectively. Daily model predictions for all 174 days showed considerable temporal and house-to-house variability of AER, Finf_home, and Cin (Tiers 1-3), and person-to-person variability of Fpex and E (Tiers 4-5). Our study demonstrates the capability of predicting individual-level ambient PM2.5 exposure metrics for an epidemiological study, in support of improving risk estimation. Copyright © 2018. Published by Elsevier B.V.
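
    To make the tier chain concrete, here is a minimal sketch of how such a tiered exposure model can be composed, assuming the standard steady-state mass-balance form of the infiltration factor (Finf = P·AER / (AER + k), with penetration factor P and deposition rate k); all parameter values are illustrative, and EMI's actual AER and infiltration models are more detailed than this.

    def infiltration_factor(aer, penetration=0.8, deposition=0.2):
        # Tier 2: fraction of outdoor PM2.5 persisting indoors.
        # aer and deposition are rates in 1/h; values are illustrative.
        return penetration * aer / (aer + deposition)

    def personal_exposure(c_out, f_inf, frac_indoors=0.9):
        # Tier 3: indoor concentration of ambient origin (ug/m3).
        c_in = f_inf * c_out
        # Tiers 4-5: time-weighted personal exposure and exposure factor.
        e = frac_indoors * c_in + (1.0 - frac_indoors) * c_out
        return e / c_out, e

    f_inf = infiltration_factor(aer=0.5)            # Tier 2 from a Tier 1 AER
    f_pex, e = personal_exposure(c_out=12.0, f_inf=f_inf)
    print(f"Finf={f_inf:.2f}  Fpex={f_pex:.2f}  E={e:.1f} ug/m3")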

  3. 12 CFR 3.63 - Disclosures by national banks or Federal savings associations described in § 3.61.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... tier 1 capital, tier 2 capital, tier 1 and total capital ratios, including the regulatory capital elements and all the regulatory adjustments and deductions needed to calculate the numerator of such ratios... to calculate total risk-weighted assets; (3) Regulatory capital ratios during any transition periods...

  4. Response to Intervention: Evaluation Report and Executive Summary

    ERIC Educational Resources Information Center

    Gorard, Stephen; Siddiqui, Nadia; See, Beng Huat

    2014-01-01

    Response to Intervention (RTI) is a targeted programme that uses a tiered approach to identify the needs of low-achieving pupils. The approach begins with whole-class teaching (Tier 1), followed by small-group tuition (Tier 2) for those who need more attention, and one-to-one tutoring (Tier 3) for those who do not respond to the small group…

  5. 49 CFR 393.118 - What are the rules for securing dressed lumber or similar building products?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Shifting and Falling Cargo Specific Securement Requirements by Commodity Type § 393.118 What are the rules... transported using no more than one tier. Bundles carried on one tier must be secured in accordance with the... one tier. Bundles carried in more than one tier must be either: (1) Blocked against lateral movement...

  6. A Data-Driven Preschool PD Model for Literacy and Oral Language Instruction

    ERIC Educational Resources Information Center

    Abbott, Mary; Atwater, Jane; Lee, Younwoo; Edwards, Liesl

    2011-01-01

    The purpose of this article is to describe the professional development (PD) model for preschool literacy and language instruction that took place in a 3-year, 2-tiered Early Reading First project in 9 Head Start and community-based school classrooms. In our tiered model, the Tier 1 level was classroom instruction and Tier 2 was intervention…

  7. A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.

    ERIC Educational Resources Information Center

    Cohen, Laura B.

    2003-01-01

    Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…

  8. Origin of the CMS gene locus in rapeseed cybrid mitochondria: active and inactive recombination produces the complex CMS gene region in the mitochondrial genomes of Brassicaceae.

    PubMed

    Oshima, Masao; Kikuchi, Rie; Imamura, Jun; Handa, Hirokazu

    2010-01-01

    CMS (cytoplasmic male sterile) rapeseed is produced by asymmetrical somatic cell fusion between the Brassica napus cv. Westar and the Raphanus sativus Kosena CMS line (Kosena radish). The CMS rapeseed contains a CMS gene, orf125, which is derived from Kosena radish. Our sequence analyses revealed that the orf125 region in CMS rapeseed originated from recombination between the orf125/orfB region and the nad1C/ccmFN1 region by way of a 63 bp repeat. A precise sequence comparison among the related sequences in CMS rapeseed, Kosena radish and normal rapeseed showed that the orf125 region in CMS rapeseed consisted of the Kosena orf125/orfB region and the rapeseed nad1C/ccmFN1 region, even though Kosena radish had both the orf125/orfB region and the nad1C/ccmFN1 region in its mitochondrial genome. We also identified three tandem repeat sequences in the regions surrounding orf125, including a 63 bp repeat, which were involved in several recombination events. Interestingly, differences in the recombination activity for each repeat sequence were observed, even though these sequences were located adjacent to each other in the mitochondrial genome. We report results indicating that recombination events within the mitochondrial genomes are regulated at the level of specific repeat sequences depending on the cellular environment.

  9. 45 CFR 150.317 - Factors CMS uses to determine the amount of penalty.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Factors CMS uses to determine the amount of... RELATING TO HEALTH CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS Enforcement With Respect to Issuers and Non-Federal Governmental Plans-Civil Money Penalties § 150.317 Factors CMS...

  10. 78 FR 67420 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... decrease the rebate to add liquidity under the Market Depth Tier 1 from $0.0033 per share to $0.0032 per... Market Depth Tier 1 from $0.0033 per share to $0.0032 per share. Footnote 1 of the Fee Schedule currently provides that Members may qualify for the Market Depth Tier 1 and receive a rebate of $0.0033 per share for...

  11. A Five-Tier System for Improving the Categorization of Transplant Program Performance.

    PubMed

    Wey, Andrew; Salkowski, Nicholas; Kasiske, Bertram L; Israni, Ajay K; Snyder, Jon J

    2018-06-01

    The objective was to better inform health care consumers by identifying differences in transplant program performance, using adult kidney transplants performed in the United States between January 1, 2012, and June 30, 2014, drawn from the Scientific Registry of Transplant Recipients database. In December 2016, the Scientific Registry of Transplant Recipients instituted a five-tier system for reporting transplant program performance. We compare the differentiation of program performance and the simulated misclassification rate of the five-tier system with those of the previous three-tier system based on the 95 percent credible interval. The five-tier system improved differentiation and maintained a low misclassification rate of less than 22 percent for programs differing by two tiers, and will better inform health care consumers of transplant program performance. © Health Research and Educational Trust.

  12. On implementation of DCTCP on three-tier and fat-tree data center network topologies.

    PubMed

    Zafar, Saima; Bashir, Abeer; Chaudhry, Shafique Ahmad

    2016-01-01

    A data center is a facility for housing computational and storage systems interconnected through a communication network called the data center network (DCN). Due to tremendous growth in computational power, storage capacity and the number of interconnected servers, the DCN faces challenges concerning efficiency, reliability and scalability. Although the transmission control protocol (TCP) is a time-tested transport protocol in the Internet, DCN challenges such as inadequate buffer space in switches and bandwidth limitations have prompted researchers to propose techniques to improve TCP performance or to design new transport protocols for the DCN. Data center TCP (DCTCP) has emerged as one of the most promising solutions in this domain; it employs the explicit congestion notification (ECN) feature of TCP to enhance the TCP congestion control algorithm. While DCTCP has been analyzed for a two-tier tree-based DCN topology with traffic between servers in the same rack, which is common in cloud applications, those analyses remain oblivious to the traffic patterns common in university and private enterprise networks, which traverse the complete network interconnect spanning the upper tier layers. We also recognize that DCTCP performance cannot remain unaffected by the underlying DCN architecture, hence there is a need to test and compare DCTCP performance when implemented over diverse DCN architectures. Some of the most notable DCN architectures are the legacy three-tier, fat-tree, BCube, DCell, VL2, and CamCube. In this research, we simulate the two switch-centric DCN architectures, the widely deployed legacy three-tier architecture and the promising fat-tree architecture, using a network simulator, and analyze the performance of DCTCP in terms of throughput and delay for realistic traffic patterns. We also examine how DCTCP prevents incast and outcast congestion when realistic DCN traffic patterns are employed in the above-mentioned topologies. Our results show that the underlying DCN architecture significantly impacts DCTCP performance. We find that DCTCP gives optimal performance in the fat-tree topology and is most suitable for large networks.
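
    Since the record turns on how DCTCP reshapes TCP's congestion response, a minimal sender-side sketch may help: per the published DCTCP algorithm, the sender keeps a running estimate alpha of the fraction of ECN-marked packets and cuts its window in proportion to alpha rather than halving it. The window sizes and gain below are illustrative.

    class DctcpSender:
        def __init__(self, cwnd=10.0, g=1.0 / 16.0):
            self.cwnd = cwnd     # congestion window, in packets
            self.alpha = 0.0     # estimated extent of congestion
            self.g = g           # EWMA gain for the alpha update

        def on_window_acked(self, acked, ecn_marked):
            # Once per window of data: fold the marked fraction into alpha.
            frac = ecn_marked / acked if acked else 0.0
            self.alpha = (1.0 - self.g) * self.alpha + self.g * frac
            if ecn_marked:
                # Congestion: cut proportionally to alpha, not by half.
                self.cwnd *= (1.0 - self.alpha / 2.0)
            else:
                self.cwnd += 1.0   # no marks: standard additive increase

    s = DctcpSender()
    for marked in (0, 2, 5, 0):
        s.on_window_acked(acked=10, ecn_marked=marked)
        print(f"alpha={s.alpha:.3f}  cwnd={s.cwnd:.2f}")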

  13. Evolving the EPA Endocrine Disruptor Screening Program: The case for and against using high-throughput screening assays in EDSP Tier 1

    EPA Science Inventory

    Testing has begun as part of the EPA Endocrine Disruptor Screening Program (EDSP) Tier 1 battery of 11 in vitro and in vivo tests. A recognized issue with the EDSP is that the current Tier 1 screening battery is highly resource intensive in terms of cost, time and animal usage fo...

  14. 78 FR 76337 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-17

    ... under the Market Depth Tier 1 from $0.0032 per share to $0.00325 per share and amend the criteria... Depth Tier 1 from $0.0032 per share to $0.00325 per share and amend the criteria necessary to achieve... Depth Tier 1 The Exchange proposes to amend its Fee Schedule to increase the rebate to add liquidity...

  15. R5 clade C SHIV strains with tier 1 or 2 neutralization sensitivity: tools to dissect env evolution and to develop AIDS vaccines in primate models.

    PubMed

    Siddappa, Nagadenahalli B; Watkins, Jennifer D; Wassermann, Klemens J; Song, Ruijiang; Wang, Wendy; Kramer, Victor G; Lakhashe, Samir; Santosuosso, Michael; Poznansky, Mark C; Novembre, Francis J; Villinger, François; Else, James G; Montefiori, David C; Rasmussen, Robert A; Ruprecht, Ruth M

    2010-07-21

    HIV-1 clade C (HIV-C) predominates worldwide, and anti-HIV-C vaccines are urgently needed. Neutralizing antibody (nAb) responses are considered important but have proved difficult to elicit. Although some current immunogens elicit antibodies that neutralize highly neutralization-sensitive (tier 1) HIV strains, most circulating HIVs exhibiting a less sensitive (tier 2) phenotype are not neutralized. Thus, both tier 1 and 2 viruses are needed for vaccine discovery in nonhuman primate models. We constructed a tier 1 simian-human immunodeficiency virus, SHIV-1157ipEL, by inserting an "early," recently transmitted HIV-C env into the SHIV-1157ipd3N4 backbone [1] encoding a "late" form of the same env, which had evolved in a SHIV-infected rhesus monkey (RM) with AIDS. SHIV-1157ipEL was rapidly passaged to yield SHIV-1157ipEL-p, which remained exclusively R5-tropic and had a tier 1 phenotype, in contrast to "late" SHIV-1157ipd3N4 (tier 2). After 5 weekly low-dose intrarectal exposures, SHIV-1157ipEL-p systemically infected 16 out of 17 RM with high peak viral RNA loads and depleted gut CD4+ T cells. SHIV-1157ipEL-p and SHIV-1157ipd3N4 env genes diverge mostly in V1/V2. Molecular modeling revealed a possible mechanism for the increased neutralization resistance of SHIV-1157ipd3N4 Env: V2 loops hindering access to the CD4 binding site, shown experimentally with nAb b12. Similar mutations have been linked to decreased neutralization sensitivity in HIV-C strains isolated from humans over time, indicating parallel HIV-C Env evolution in humans and RM. SHIV-1157ipEL-p, the first tier 1 R5 clade C SHIV, and SHIV-1157ipd3N4, its tier 2 counterpart, represent biologically relevant tools for anti-HIV-C vaccine development in primates.

  16. 78 FR 48170 - Privacy Act of 1974; CMS Computer Match No. 2013-12; HHS Computer Match No. 1307; SSA Computer...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-07

    ... Wesolowski, Director, Verifications Policy & Operations Branch, Division of Eligibility and Enrollment Policy..., electronic interfaces and an on-line system for the verification of eligibility. PURPOSE(S) OF THE MATCHING... Security number (SSN) verifications, (2) a death indicator, (3) an indicator of a finding of disability by...

  17. Recombination Events Involving the atp9 Gene Are Associated with Male Sterility of CMS PET2 in Sunflower.

    PubMed

    Reddemann, Antje; Horn, Renate

    2018-03-11

    Cytoplasmic male sterility (CMS) systems represent ideal mutants to study the role of mitochondria in pollen development. In sunflower, CMS PET2 also has the potential to become an alternative CMS source for commercial sunflower hybrid breeding. CMS PET2 originates from an interspecific cross of H. petiolaris and H. annuus, as does CMS PET1, but results in a different CMS mechanism. Southern analyses revealed differences for atp6, atp9 and cob between CMS PET2, CMS PET1 and the male-fertile line HA89. A second identical copy of atp6 was present on an additional CMS PET2-specific fragment. In addition, the atp9 gene was duplicated. However, this duplication was followed by an insertion of 271 bp of unknown origin in the 5' coding region of the atp9 gene in CMS PET2, which led to the creation of two unique open reading frames, orf288 and orf231. The first 53 bp of orf288 are identical to the 5' end of atp9. Apart from the first 3 bp, which are part of the 271-bp insertion, orf231 consists of the last 228 bp of atp9. These CMS PET2-specific orfs are co-transcribed. All 11 editing sites of the atp9 gene present in orf231 are fully edited. The anther-specific reduction of the co-transcript in fertility-restored hybrids supports its involvement in the male sterility based on CMS PET2.

  18. Toward a Fault Tolerant Architecture for Vital Medical-Based Wearable Computing.

    PubMed

    Abdali-Mohammadi, Fardin; Bajalan, Vahid; Fathi, Abdolhossein

    2015-12-01

    Advancements in computers and electronic technologies have led to the emergence of a new generation of efficient small intelligent systems, including smartphones and wearable devices, which have attracted the attention of medical applications. These products are used less in critical medical applications because of their resource constraints and sensitivity to failure: without safety considerations, small integrated hardware can endanger patients' lives. Some principles are therefore required for constructing wearable systems in healthcare so that these concerns are dealt with. Accordingly, this paper proposes an architecture for constructing wearable systems in critical medical applications. The proposed architecture is a three-tier one, supporting data flow from body sensors to the cloud; its tiers are wearable computers, mobile computing, and mobile cloud computing. One feature of this architecture is the high fault tolerance made possible by the nature of its components. Moreover, the protocols required to coordinate the components of the architecture are presented. Finally, the reliability of the architecture is assessed by simulating it and its components, and other aspects of the proposed architecture are discussed.
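
    The fault-tolerance claim rests on coordination between tiers; the sketch below shows one common store-and-forward pattern for such a pipeline, in which the mobile tier buffers sensor samples and discards them only after the cloud tier acknowledges receipt. This illustrates the general idea, not the paper's actual protocol; the upload callable is a stand-in.

    from collections import deque

    class MobileTier:
        def __init__(self, upload):
            self.buffer = deque()   # samples not yet acknowledged by cloud
            self.upload = upload    # callable returning True on cloud ack

        def on_sensor_sample(self, sample):
            self.buffer.append(sample)
            self.flush()

        def flush(self):
            while self.buffer:
                if not self.upload(self.buffer[0]):
                    break              # cloud unreachable: keep data, retry later
                self.buffer.popleft()  # acknowledged: safe to drop local copy

    flaky_link = iter([False, False, True, True, True])
    tier = MobileTier(upload=lambda s: next(flaky_link))
    for hr in (72, 75, 71):
        tier.on_sensor_sample({"heart_rate": hr})
    print("still buffered:", len(tier.buffer))   # 0 once the link recovers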

  19. 26 CFR 1.902-3 - Credit for domestic corporate shareholder of a foreign corporation for foreign income taxes paid...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the United States. (3) Second-tier corporation. (i) In the case of dividends paid to a first-tier... shareholder ending after that date, the foreign corporation is a “second-tier corporation” if at least 10... corporation by a foreign corporation before January 13, 1971, the foreign corporation is a “second-tier...

  20. 40 CFR 86.1861-04 - How do the Tier 2 and interim non-Tier 2 NOX averaging, banking and trading programs work?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 2 NOX averaging, banking and trading programs work? 86.1861-04 Section 86.1861-04 Protection of... work? (a) General provisions for Tier 2 credits and debits. (1) A manufacturer whose Tier 2 fleet... to a full useful life of 100,000 miles, provided that the credits are prorated by a multiplicative...

  1. A Dashboard for the Italian Computing in ALICE

    NASA Astrophysics Data System (ADS)

    Elia, D.; Vino, G.; Bagnasco, S.; Crescente, A.; Donvito, G.; Franco, A.; Lusso, S.; Mura, D.; Piano, S.; Platania, G.; ALICE Collaboration

    2017-10-01

    A dashboard devoted to computing in the Italian sites of the ALICE experiment at the LHC has been deployed. A combination of different complementary monitoring tools is typically used in most of the Tier-2 sites: this makes it somewhat difficult to assess the status of a site at a glance and to compare information extracted from different sources for debugging purposes. To overcome these limitations, a dedicated ALICE dashboard has been designed and implemented in each of the ALICE Tier-2 sites in Italy: in particular, it provides a single, interactive and easily customizable graphical interface where heterogeneous data are presented. The dashboard is based on two main ingredients: an open source time-series database and a dashboard builder tool for visualizing time-series metrics. Various sensors that collect data from the multiple data sources have also been written. A first version of a national computing dashboard has been implemented using a specific instance of the builder to gather data from all the local databases.
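
    As an indication of how small such a sensor can be, the following sketch pushes one metric sample to a time-series database over HTTP. The record does not name the database or builder tool; this assumes an InfluxDB-style 1.x line-protocol write endpoint, and the host, database and measurement names are invented.

    import time
    import requests

    TSDB_URL = "http://tsdb.example.org:8086/write?db=site_monitoring"

    def push_metric(measurement, value, host="farm-node-01"):
        # InfluxDB 1.x line protocol: name,tag=... field=... timestamp(ns)
        line = f"{measurement},host={host} value={value} {time.time_ns()}"
        resp = requests.post(TSDB_URL, data=line.encode())
        resp.raise_for_status()

    push_metric("running_jobs", 412)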

  2. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere, developed by extending the Lorenz '96 idealised system, that for the first time encompasses three tiers of variables representing large-, medium- and small-scale features. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low-resolution (single-tier) double-precision models and of similar-cost, high-resolution (two-tier) mixed-precision models to produce accurate forecasts of this 'truth' are compared. The high-resolution models outperform the low-resolution ones even when small-scale variables are resolved in half precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
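
    For readers who want to experiment with the idea, here is a minimal sketch of precision reduction in the standard single-tier Lorenz '96 system (the paper's novelty is its three-tier extension, which is not reproduced here); casting the tendency to float16 emulates half-precision 'inexact' hardware, and the forcing, step size and step count are illustrative.

    import numpy as np

    def l96_tendency(x, forcing=8.0):
        # dX_i/dt = (X_{i+1} - X_{i-2}) * X_{i-1} - X_i + F, cyclic in i.
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

    def step(x, dt=0.01, dtype=np.float64):
        # One Euler step, with the tendency evaluated at reduced precision.
        return x + dt * l96_tendency(x.astype(dtype)).astype(np.float64)

    rng = np.random.default_rng(0)
    x64 = x16 = rng.standard_normal(40)
    for _ in range(500):
        x64 = step(x64)                      # double-precision reference
        x16 = step(x16, dtype=np.float16)    # half-precision variant
    print("RMS divergence after 500 steps:", np.sqrt(np.mean((x64 - x16) ** 2)))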

  3. A New Generation of Networks and Computing Models for High Energy Physics in the LHC Era

    NASA Astrophysics Data System (ADS)

    Newman, H.

    2011-12-01

    Wide area networks of increasing end-to-end capacity and capability are vital for every phase of high energy physicists' work. Our bandwidth usage, and the typical capacity of the major national backbones and intercontinental links used by our field, have progressed by a factor of several hundred times over the past decade. With the opening of the LHC era in 2009-10 and the prospects for discoveries in the upcoming LHC run, the outlook is for a continuation or an acceleration of these trends using next-generation networks over the next few years. Responding to the need to rapidly distribute and access datasets of tens to hundreds of terabytes drawn from multi-petabyte data stores, high energy physicists working with network engineers and computer scientists are learning to use long-range networks effectively on an increasing scale, and aggregate flows reaching the 100 Gbps range have been observed. The progress of the LHC, and the unprecedented ability of the experiments to produce results rapidly using worldwide distributed data processing and analysis, has sparked major, emerging changes in the LHC Computing Models, which are moving from the classic hierarchical model designed a decade ago to more agile peer-to-peer-like models that make more effective use of the resources at Tier2 and Tier3 sites located throughout the world. A new requirements working group has gauged the needs of Tier2 centers, and charged the LHCOPN group that runs the network interconnecting the LHC Tier1s with designing a new architecture interconnecting the Tier2s. As seen from the perspective of ICFA's Standing Committee on Inter-regional Connectivity (SCIC), the Digital Divide that separates physicists in several regions of the developing world from those in the developed world remains acute, although many countries have made major advances through the rapid installation of modern network infrastructures. A case in point is Africa, where a new round of undersea cables promises to transform the continent.

  4. 76 FR 68642 - Fisheries of the Northeastern United States; Atlantic Mackerel, Squid, and Butterfish Fisheries...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-07

    ...: Copies of supporting documents used by the Mid-Atlantic Fishery Management Council (Council), including..., Tier 1 and Tier 2 vessel owners are required to obtain a fish hold capacity measurement from a certified marine surveyor. The hold capacity measurement submitted at the time of application for a Tier 1...

  5. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Provisions § 89.204 Averaging. (a) Requirements for Tier 1 engines rated at or above 37 kW. A manufacturer... credits obtained through trading. (b) Requirements for Tier 2 and later engines rated at or above 37 kW and Tier 1 and later engines rated under 37 kW. A manufacturer may use averaging to offset an emission...

  6. Fast access to the CMS detector condition data employing HTML5 technologies

    NASA Astrophysics Data System (ADS)

    Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo

    2011-12-01

    This paper focuses on using HTML version 5 (HTML5) for accessing condition data for the CMS experiment, evaluating the benefits and risks posed by the use of this technology. According to the authors of HTML5, this technology attempts to solve issues found in previous iterations of HTML and addresses the needs of web applications, an area previously not adequately covered by HTML. We demonstrate that employing HTML5 brings important benefits in terms of access performance to the CMS condition data. The combined use of web storage and web sockets increases performance and reduces costs in terms of computation power, memory usage and network bandwidth for both client and server. Above all, web workers allow scripts to be executed in separate threads, exploiting multi-core microprocessors. Web workers have been employed to substantially decrease the rendering time of the web page that displays the condition data stored in the CMS condition database.

  7. CMS Analysis and Data Reduction with Apache Spark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutsche, Oliver; Canali, Luca; Cremer, Illia

    Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies, have emerged from industry and open source projects to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and tools, promising a fresh look at the analysis of very large datasets that could potentially reduce the time-to-physics with increased interactivity. Moreover, these new tools are typically actively developed by large communities, often profiting from industry resources, and under open source licensing. These factors result in a boost for adoption and maturity of the tools and for the communities supporting them, at the same time helping to reduce the cost of ownership for the end users. In this talk, we present studies of using Apache Spark for end-user data analysis. We study the HEP analysis workflow separated into two thrusts: the reduction of centrally produced experiment datasets, and the end analysis up to the publication plot. For the first thrust, CMS is working together with CERN openlab and Intel on the CMS Big Data Reduction Facility, whose goal is to reduce 1 PB of official CMS data to 1 TB of ntuple output for analysis. We present the progress of this 2-year project with first results of scaling up Spark-based HEP analysis. For the second thrust, we present studies on using Apache Spark for a CMS Dark Matter physics search, comparing Spark's feasibility, usability and performance to the ROOT-based analysis.
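
    To give a flavour of the first thrust, the sketch below expresses a data-reduction step in PySpark: read a large columnar event dataset, apply an analysis selection, and write out a slim ntuple. The paths and column names (nMuons, met_pt, and so on) are invented for illustration and do not correspond to actual CMS data formats.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("cms-reduction-sketch").getOrCreate()

    # Read the (hypothetical) centrally produced dataset.
    events = spark.read.parquet("hdfs:///cms/official/dataset.parquet")

    # Keep only events passing the analysis selection, and only the
    # columns the end analysis needs.
    ntuple = (events
              .filter((F.col("nMuons") >= 2) & (F.col("met_pt") > 50.0))
              .select("run", "lumi", "event", "muon_pt", "met_pt"))

    ntuple.write.mode("overwrite").parquet("hdfs:///user/analysis/ntuple.parquet")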

  8. 78 FR 49525 - Privacy Act of 1974; CMS Computer Match No. 2013-06; HHS Computer Match No. 1308

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... Care Act of 2010 (Pub. L. 111-148), as amended by the Health Care and Education Reconciliation Act of..., 2009). INCLUSIVE DATES OF THE MATCH: The CMP will become effective no sooner than 40 days after the...

  9. 78 FR 49524 - Privacy Act of 1974; CMS Computer Match No. 2013-08; HHS Computer Match No. 1309

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... by the Health Care and Education Reconciliation Act of 2010 (Pub. L. 111-152) (collectively, the ACA...). INCLUSIVE DATES OF THE MATCH: The CMP will become effective no sooner than 40 days after the report of the...

  10. 78 FR 50419 - Privacy Act of 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... (Pub. L. 111- 148), as amended by the Health Care and Education Reconciliation Act of 2010 (Pub. L. 111... Entitlements Program System of Records Notice, 77 FR 47415 (August 8, 2012). Inclusive Dates of the Match: The...

  11. Network monitoring in the Tier2 site in Prague

    NASA Astrophysics Data System (ADS)

    Eliáš, Marek; Fiala, Lukáš; Horký, Jiří; Chudoba, Jiří; Kouba, Tomáš; Kundrát, Jan; Švec, Jan

    2011-12-01

    Network monitoring provides different views of the network traffic. Its output enables computing centre staff to make qualified decisions about changes in the organization of the computing centre network and to spot possible problems. In this paper we present the network monitoring framework used at the Tier-2 site in Prague at the Institute of Physics (FZU). The framework consists of standard software and custom tools. We discuss our system for hardware failure detection using syslog logging and Nagios active checks, bandwidth monitoring of physical links, and analysis of NetFlow exports from Cisco routers. We present a tool for automatic detection of the network layout based on SNMP, which also records topology changes into an SVN repository. An adapted weathermap4rrd is used to visualize the recorded data, giving a fast overview of the current bandwidth usage of the links in the network.
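
    For concreteness, a Nagios active check of the kind used for hardware failure detection can be a very small script; the sketch below follows the standard Nagios plugin convention (exit status 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN, one status line with optional performance data after '|'). The monitored path and thresholds are hypothetical, not FZU's actual checks.

    #!/usr/bin/env python3
    import shutil
    import sys

    def check_disk(path="/scratch", warn=0.80, crit=0.90):
        # Compare the filesystem's used fraction against the thresholds.
        usage = shutil.disk_usage(path)
        frac = usage.used / usage.total
        perf = f"|usage={frac:.3f};{warn};{crit}"
        if frac >= crit:
            print(f"CRITICAL - {path} is {frac:.0%} full {perf}")
            return 2
        if frac >= warn:
            print(f"WARNING - {path} is {frac:.0%} full {perf}")
            return 1
        print(f"OK - {path} is {frac:.0%} full {perf}")
        return 0

    if __name__ == "__main__":
        sys.exit(check_disk())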

  12. Purification of Cardiomyocytes from Differentiating Pluripotent Stem Cells using Molecular Beacons Targeting Cardiomyocyte-Specific mRNA

    PubMed Central

    Kim, Sangsung; Park, Hun-Jun; Byun, Jaemin; Cho, Kyu-Won; Saafir, Talib; Song, Ming-Ke; Yu, Shan Ping; Wagner, Mary; Bao, Gang; Yoon, Young-Sup

    2013-01-01

    Background: While methods for generating cardiomyocytes (CMs) from pluripotent stem cells (PSCs) have been reported, current methods produce heterogeneous mixtures of CMs and non-CM cells. Here, we report an entirely novel system in which PSC-derived CMs are purified by CM-specific molecular beacons (MBs). MBs are nano-scale probes that emit a fluorescence signal when hybridized to target mRNAs. Method and Results: Five MBs targeting mRNAs of either cardiac troponin T or myosin heavy chain 6/7 were generated. Among the five MBs, an MB targeting myosin heavy chain 6/7 mRNA (MHC1-MB) identified up to 99% of HL-1 CMs, a mouse CM cell line, but < 3% of four non-CM cell types in flow cytometry analysis, indicating that MHC1-MB is specific for identifying CMs. We delivered MHC1-MB into cardiomyogenically differentiated PSCs through nucleofection. The detection rate of CMs was similar to the percentage of cardiac troponin T (TNNT2)- or cardiac troponin I (TNNI3)-positive CMs, supporting the specificity of MBs. Finally, MHC1-MB-positive cells were FACS-sorted from mouse and human PSC differentiating cultures and ~97% of cells expressed TNNT2 or TNNI3 as determined by flow cytometry. These MB-based sorted cells maintained their CM characteristics, verified by spontaneous beating, electrophysiologic studies, and expression of cardiac proteins. When transplanted in a myocardial infarction model, MB-based purified CMs improved cardiac function and demonstrated significant engraftment for 4 weeks without forming tumors. Conclusions: We developed a novel CM selection system that allows production of highly purified CMs. These purified CMs and this system can be valuable for cell therapy and drug discovery. PMID:23995537

  13. Standard duplex criteria overestimate the degree of stenosis after eversion carotid endarterectomy.

    PubMed

    Benzing, Travis; Wilhoit, Cameron; Wright, Sharee; McCann, P Aaron; Lessner, Susan; Brothers, Thomas E

    2015-06-01

    The eversion technique for carotid endarterectomy (eCEA) offers an alternative to longitudinal arteriotomy and patch closure (pCEA) for open carotid revascularization. In some reports, eCEA has been associated with a higher rate of >50% restenosis of the internal carotid when it is defined as peak systolic velocity (PSV) >125 cm/s by duplex imaging. Because the conformation of the carotid bifurcation may differ after eCEA compared with native carotid arteries, it was hypothesized that standard duplex criteria might not accurately reflect the presence of restenosis after eCEA. In a case-control study, the outcomes of all patients undergoing carotid endarterectomy by one surgeon during the last 10 years were analyzed retrospectively, with a primary end point of PSV >125 cm/s. Duplex flow velocities were compared with luminal diameter measurements for any carotid computed tomography arteriography or magnetic resonance angiography study obtained within 2 months of duplex imaging, with the degree of stenosis calculated by the methodology used in the North American Symptomatic Carotid Endarterectomy Trial (NASCET) and the European Carotid Surgery Trial (ECST) as well as by cross-sectional area (CSA) reduction. Computational model simulations of the eCEA and pCEA arteries were also generated and analyzed. Eversion and longitudinal arteriotomy with patch techniques were used in 118 and 177 carotid arteries, respectively. Duplex follow-up was available in 90 eCEA arteries at a median of 16 (range, 2-136) months and in 150 pCEA arteries at a median of 41 (range, 3-115) months postoperatively. PSV >125 cm/s was present at some time during follow-up in 31% each of the eCEA and pCEA carotid arteries, and on the most recent duplex examination in 7% after eCEA and 21% after pCEA (P = .003), with no eCEA and two pCEA arteries occluding completely during follow-up (P = .29). In 19 eCEA arteries with angle-corrected PSV >125 cm/s (median, 160 cm/s; interquartile range, 146-432 cm/s) that were subsequently examined by axial imaging, the mean percentage stenosis was 8% ± 11% by NASCET, 11% ± 5% by ECST, and 20% ± 9% by CSA criteria. For eight pCEA arteries with PSV >125 cm/s (median velocity, 148 cm/s; interquartile range, 139-242 cm/s), the corresponding NASCET, ECST, and CSA stenoses were 8% ± 35%, 26% ± 32%, and 25% ± 33%, respectively. NASCET internal carotid diameter reduction of at least 50% was noted by axial imaging after two of the eight pCEAs, and the PSV exceeded 200 cm/s in each case. The presence of hemodynamically significant carotid artery restenosis may be overestimated by standard duplex criteria after eCEA and perhaps after pCEA. Insufficient information currently exists to determine what PSV does correspond to hemodynamically significant restenosis. Published by Elsevier Inc.

  14. Cold-start and chemical characterization of emissions from mobile sources in Mexico.

    PubMed

    Schifter, I; Díaz, L; Rodríguez, R

    2010-10-01

    In this work, tailpipe and evaporative emissions from a set of normal- and high-emitter vehicles, model year (MY) 2006-2008 (low mileage), certified when new to meet the Tier 1 emission standard, were characterized for criteria pollutants (carbon monoxide, nitrogen oxides and hydrocarbons) and a suite of unregulated emissions including aliphatic and aromatic aldehydes, monocyclic aromatic compounds, 1,3-butadiene, n-hexane and acrolein. Data were obtained under the three driving conditions of the United States Federal Test Procedure, the FTP-75 cycle. High emissions of both regulated and unregulated pollutants were observed in the cold-start phase of the driving cycle for low-mileage Tier 1 normal- and high-emitter engines. Data were compared with results obtained for a set of MY 1992-2005 vehicles that included vehicles with no catalytic converters, Tier 0 vehicles, and high-mileage MY 2000-2005 Tier 1 vehicles. The calculated average cold-start emissions for Tier 1 low-mileage normal emitters are 0.93, 8.21 and 1.06 g for NMHC, CO, and NOx, respectively. The reductions in emissions for Tier 1 normal emitters are 76%, 56% and 56% for NMHC, CO and NOx, respectively, but 58%, 30% and 25% for the high emitters. Differences in emissions can be ascribed to mileage accumulation more than to technological improvements. Cold-start emissions account for roughly 10% of emissions from gasoline-powered vehicles in the USA. In Mexico the fraction is likely to be higher, because one must also account for the contribution of Tier 0 vehicles and the running exhaust emissions of vehicles with no catalytic converters.

  15. Intelligent systems and advanced user interfaces for design, operation, and maintenance of command management systems

    NASA Technical Reports Server (NTRS)

    Potter, William J.; Mitchell, Christine M.

    1993-01-01

    Historically, command management systems (CMS) have been large and expensive spacecraft-specific software systems that were costly to build, operate, and maintain. Current and emerging hardware, software, and user interface technologies may offer an opportunity to facilitate the initial formulation and design of a spacecraft-specific CMS as well as to develop a more generic CMS system. New technologies, in addition to a core CMS common to a range of spacecraft, may facilitate the training and enhance the efficiency of CMS operations. Current mission operations center (MOC) hardware and software include Unix workstations, the C/C++ programming languages, and an X window interface. This configuration provides the power and flexibility to support sophisticated and intelligent user interfaces that exploit state-of-the-art technologies in human-machine interaction, artificial intelligence, and software engineering. One of the goals of this research is to explore the extent to which technologies developed in the research laboratory can be productively applied in a complex system such as spacecraft command management. Initial examination of some of these issues in CMS design and operation suggests that application of technologies such as intelligent planning, case-based reasoning, human-machine systems design and analysis tools (e.g., operator and designer models), and human-computer interaction tools (e.g., graphics, visualization, and animation) may provide significant savings in the design, operation, and maintenance of the CMS for a specific spacecraft as well as continuity for CMS design and development across spacecraft. The first six months of this research saw a broad investigation by Georgia Tech researchers into the function, design, and operation of current and planned command management systems at Goddard Space Flight Center. As the first step, the researchers attempted to understand the current and anticipated horizons of command management systems at Goddard. Preliminary results are given on CMS commonalities and causes of low re-use, and methods are proposed to facilitate increased re-use.

  16. Examining the Predictive Validity of a Dynamic Assessment of Decoding to Forecast Response to Tier 2 Intervention

    ERIC Educational Resources Information Center

    Cho, Eunsoo; Compton, Donald L.; Fuchs, Douglas; Fuchs, Lynn S.; Bouton, Bobette

    2014-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small-group tutoring in a response-to-intervention model. First grade students (n = 134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in…

  17. Examining the Predictive Validity of a Dynamic Assessment of Decoding to Forecast Response to Tier 2 Intervention

    PubMed Central

    Cho, Eunsoo; Compton, Donald L.; Fuchs, Doug; Fuchs, Lynn S.; Bouton, Bobette

    2013-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small group tutoring in a response-to-intervention model. First-grade students (n=134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses were completed that modeled the correlates of WIF growth (final level of performance and growth). Its purpose was to examine the predictive validity of DA in the presence of 3 sets of variables: static decoding measures, Tier 1 responsiveness indicators, and pre-reading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% – 13% of the variance in Tier 2 responsiveness depending on the competing predictors in the model and WIF outcome (final level of performance or growth). Although the additional variances explained uniquely by DA were relatively small, results indicate the potential of DA in identifying Tier 2 nonresponders. PMID:23213050

  18. Examining the predictive validity of a dynamic assessment of decoding to forecast response to tier 2 intervention.

    PubMed

    Cho, Eunsoo; Compton, Donald L; Fuchs, Douglas; Fuchs, Lynn S; Bouton, Bobette

    2014-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small-group tutoring in a response-to-intervention model. First grade students (n = 134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses were completed that modeled the correlates of WIF growth (final level of performance and growth). Its purpose was to examine the predictive validity of DA in the presence of three sets of variables: static decoding measures, Tier 1 responsiveness indicators, and prereading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% to 13% of the variance in Tier 2 responsiveness depending on the competing predictors in the model and WIF outcome (final level of performance or growth). Although the additional variances explained uniquely by DA were relatively small, results indicate the potential of DA in identifying Tier 2 nonresponders. © Hammill Institute on Disabilities 2012.

  19. Ligand accessibility to the HIV-1 Env co-receptor binding site can occur prior to CD4 engagement and is independent of viral tier category.

    PubMed

    Boliar, Saikat; Patil, Shilpa; Shukla, Brihaspati N; Ghobbeh, Ali; Deshpande, Suprit; Chen, Weizao; Guenaga, Javier; Dimitrov, Dimiter S; Wyatt, Richard T; Chakrabarti, Bimal K

    2018-06-01

    HIV-1 virus entry into target cells requires the envelope glycoprotein (Env) to first bind the primary receptor, CD4, and subsequently the co-receptor. Antibody access to the co-receptor binding site (CoRbs) in the pre-receptor-engaged state, prior to cell attachment, remains poorly understood. Here, we have demonstrated that for tier-1 Envs the CoRbs is directly accessible to full-length CD4-induced (CD4i) antibodies even before primary receptor engagement, indicating that on these Envs the CoRbs is either preformed or can conformationally sample the post-CD4-bound state. Tier-2 and tier-3 Envs, which are resistant to full-length CD4i antibodies, are neutralized by m36.4, a lower-molecular-mass CD4i-directed domain antibody. In some tier-2 and tier-3 Envs, the CoRbs is accessible to m36.4 even prior to cellular attachment, in an Env-specific manner independent of tier category. These data suggest differential structural arrangements of the CoRbs and varied masking of ligand access to the CoRbs in different Env isolates. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Is the chronic Tier-1 effect assessment approach for insecticides protective for aquatic ecosystems?

    PubMed

    Brock, Theo Cm; Bhatta, Ranjana; van Wijngaarden, René Pa; Rico, Andreu

    2016-10-01

    We investigated the appropriateness of several methods, including those recommended in the Aquatic Guidance Document of the European Food Safety Authority (EFSA), for the derivation of chronic Tier-1 regulatory acceptable concentrations (RACs) for insecticides and aquatic organisms. The insecticides represented different chemical classes (organophosphates, pyrethroids, benzoylureas, insect growth regulators, biopesticides, carbamates, neonicotinoids, and miscellaneous). Chronic Tier-1 RACs, derived using toxicity data for the standard species Daphnia magna, Chironomus spp., and/or Americamysis bahia, were compared with Tier-3 RACs derived from micro- and mesocosm studies on the basis of the ecological threshold option (ETO-RACs). ETO-RACs could be derived for 31 insecticides applied to micro- and mesocosms in single or multiple applications, yielding a total of 36 cases for comparison. The chronic Tier-1 RACs calculated according to the EFSA approach resulted in a sufficient protection level, except for 1 neonicotinoid (slightly underprotective) and for several pyrethroids if toxicity data for A. bahia were not included. This latter observation can be explained by 1) the fact that A. bahia is the most sensitive standard test species for pyrethroids, 2) the hydrophobic properties of pyrethroids, and 3) the fact that long-term effects observed in (epi)benthic arthropods may be better explained by exposure via the sediment than via the overlying water. Besides including toxicity data for A. bahia, the protection level for pyrethroids can be improved by selecting both D. magna and Chironomus spp. as standard test species for chronic Tier-1 derivation. Although protective in the majority of cases, the conservativeness of the recommended chronic Tier-1 RACs appears to be less than an order of magnitude for a relatively large proportion of insecticides when compared with their Tier-3 ETO-RACs. This may leave limited options for refinement of the chronic effect assessment using laboratory toxicity data for additional species. Integr Environ Assess Manag 2016;12:747-758. © 2015 SETAC.
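
    As a worked illustration of the Tier-1 step being evaluated, the sketch below derives a chronic Tier-1 RAC by taking the lowest chronic endpoint among the standard test species and dividing it by an assessment factor of 10, as in the EFSA approach; the endpoint values are invented for illustration, not taken from the study.

    def chronic_tier1_rac(endpoints_ug_per_l, assessment_factor=10.0):
        # endpoints: {species: chronic NOEC or EC10 in ug/L}.
        species, lowest = min(endpoints_ug_per_l.items(), key=lambda kv: kv[1])
        return species, lowest / assessment_factor

    endpoints = {"Daphnia magna": 0.30,       # values are illustrative
                 "Chironomus spp.": 0.12,
                 "Americamysis bahia": 0.05}
    species, rac = chronic_tier1_rac(endpoints)
    print(f"driving species: {species}, chronic Tier-1 RAC = {rac:.4f} ug/L")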

  1. Impact of 3-tier formularies on drug treatment of attention-deficit/hyperactivity disorder in children.

    PubMed

    Huskamp, Haiden A; Deverka, Patricia A; Epstein, Arnold M; Epstein, Robert S; McGuigan, Kimberly A; Muriel, Anna C; Frank, Richard G

    2005-04-01

    Expenditures for medications used to treat attention-deficit/hyperactivity disorder (ADHD) in children have increased rapidly, and many employers and health plans have adopted 3-tier formularies in an attempt to control costs for these and other drugs. This study assessed the effect of copayment increases associated with 3-tier formulary adoption on use and spending patterns for ADHD medications for children. It was an observational study using a quasi-experimental design to compare effects on ADHD medication use and spending for children enrolled as dependents in an employer-sponsored plan that made major changes to its pharmacy benefit design and a comparison group of children covered by the same insurer. The plan simultaneously moved from a 1-tier (same copayment required for all drugs) to a 3-tier formulary and implemented an across-the-board copayment increase; the plan later moved 3 drugs from tier 3 to tier 2. The intervention group comprised 20,326 and the comparison group 15,776 children aged 18 years and younger. Outcomes were the monthly probability of using an ADHD medication; plan, enrollee, and total ADHD medication spending; and medication continuation. The 3-tier formulary implementation resulted in a 17% decrease in the monthly probability of using medication (P<.001), a 20% decrease in expected total medication expenditures, and a substantial shifting of costs from the plan to families (P<.001). Intervention group children using medications in the pre-period were more likely to change to a medication in a different tier after 3-tier adoption, relative to the comparison group (P = .08). The subsequent tier changes resulted in increased plan spending (P<.001) and decreased patient spending (P = .003) for users but no differences in continuation. The copayment increases associated with 3-tier formulary implementation by 1 employer resulted in lower total ADHD medication spending, sizeable increases in out-of-pocket expenditures for families of children with ADHD, and a significant decrease in the probability of using these medications.

  2. Advanced Broadband Links for TIER III UAV Data Communication

    NASA Astrophysics Data System (ADS)

    Griethe, Wolfgang; Gregory, Mark; Heine, Frank; Kampfner, Hartmut

    2011-08-01

    Unmanned Aerial Vehicles (UAVs) are gaining more and more importance because of their prominent role as national reconnaissance systems and for disaster monitoring and environmental mapping. However, reliable and robust data links are indispensable for Unmanned Aircraft System (UAS) missions. In particular for beyond line-of-sight (BLOS) operations of Tier III UAVs, satellite data links are a key element, since extensive sensor data have to be transmitted, preferably in real time or near real time. The paper demonstrates that the continuously increasing number of UAS and the intensified use of high-resolution sensors will reveal RF bandwidth as a limiting factor in the communication chain of Tier III UAVs. The RF bandwidth gap can be partly closed by use of high-order modulation, of course, but much more progress in terms of bandwidth allocation can be achieved by using optical transmission technology. Consequently, the paper underlines that this technology has meanwhile been sufficiently verified in space, and shows that optical links are suited as well for broadband communications of Tier III UAVs. Moreover, the advantages of LaserCom in UAV scenarios and its importance for Network Centric Warfare (NCW) as well as for Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) are emphasized. Numerous practical topics and design requirements relevant for the establishment of optical links onboard Tier III UAVs are discussed.

  3. How MAP kinase modules function as robust, yet adaptable, circuits.

    PubMed

    Tian, Tianhai; Harding, Angus

    2014-01-01

    Genetic and biochemical studies have revealed that the diversity of cell types and developmental patterns evident within the animal kingdom is generated by a handful of conserved, core modules. Core biological modules must be robust, able to maintain functionality despite perturbations, and yet sufficiently adaptable for random mutations to generate phenotypic variation during evolution. Understanding how robust, adaptable modules have influenced the evolution of eukaryotes will inform both evolutionary and synthetic biology. One such system is the MAP kinase module, which consists of a 3-tiered kinase circuit configuration that has been evolutionarily conserved from yeast to man. MAP kinase signal transduction pathways are used across eukaryotic phyla to drive biological functions that are crucial for life. Here we ask the fundamental question, why do MAPK modules follow a conserved 3-tiered topology rather than some other number? Using computational simulations, we identify a fundamental 2-tiered circuit topology that can be readily reconfigured by feedback loops and scaffolds to generate diverse signal outputs. When this 2-kinase circuit is connected to proximal input kinases, a 3-tiered modular configuration is created that is both robust and adaptable, providing a biological circuit that can regulate multiple phenotypes and maintain functionality in an uncertain world. We propose that the 3-tiered signal transduction module has been conserved through positive selection, because it facilitated the generation of phenotypic variation during eukaryotic evolution.
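
    Since the argument is built on computational simulations of tiered kinase circuits, a toy version may help the reader: the sketch below integrates a three-tier cascade in which the active fraction of each tier is driven by the tier above it, using invented rate constants rather than parameters from any real MAPK module.

    import numpy as np

    def simulate_cascade(signal=1.0, k_on=2.0, k_off=1.0, dt=0.01, steps=2000):
        tiers = np.zeros(3)   # active fraction of each kinase tier
        for _ in range(steps):
            # Tier 0 is driven by the input signal, tiers 1-2 by upstream tiers.
            drive = np.array([signal, tiers[0], tiers[1]])
            tiers += dt * (k_on * drive * (1.0 - tiers) - k_off * tiers)
        return tiers

    print("steady-state active fractions:", simulate_cascade())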

  4. How MAP kinase modules function as robust, yet adaptable, circuits

    PubMed Central

    Tian, Tianhai; Harding, Angus

    2014-01-01

    Genetic and biochemical studies have revealed that the diversity of cell types and developmental patterns evident within the animal kingdom is generated by a handful of conserved, core modules. Core biological modules must be robust, able to maintain functionality despite perturbations, and yet sufficiently adaptable for random mutations to generate phenotypic variation during evolution. Understanding how robust, adaptable modules have influenced the evolution of eukaryotes will inform both evolutionary and synthetic biology. One such system is the MAP kinase module, which consists of a 3-tiered kinase circuit configuration that has been evolutionarily conserved from yeast to man. MAP kinase signal transduction pathways are used across eukaryotic phyla to drive biological functions that are crucial for life. Here we ask the fundamental question, why do MAPK modules follow a conserved 3-tiered topology rather than some other number? Using computational simulations, we identify a fundamental 2-tiered circuit topology that can be readily reconfigured by feedback loops and scaffolds to generate diverse signal outputs. When this 2-kinase circuit is connected to proximal input kinases, a 3-tiered modular configuration is created that is both robust and adaptable, providing a biological circuit that can regulate multiple phenotypes and maintain functionality in an uncertain world. We propose that the 3-tiered signal transduction module has been conserved through positive selection, because it facilitated the generation of phenotypic variation during eukaryotic evolution. PMID:25483189

  5. 77 FR 52344 - Information Collection Request Sent to the Office of Management and Budget (OMB) for Approval...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-29

    ... [burden table: number of responses, time per response, nonhour burden cost] Tier 1 (Desktop Analysis)... developers of these small-scale projects do the desktop analysis described in Tier 1 or Tier 2 using publicly... published in the Federal Register (77 FR 19683) a notice of our intent to request that OMB renew approval...

  6. The Impact of Tier 1 Reading Instruction on Reading Outcomes for Students in Grades 4-12: A Meta-Analysis

    ERIC Educational Resources Information Center

    Swanson, Elizabeth; Stevens, Elizabeth A.; Scammacca, Nancy K.; Capin, Philip; Stewart, Alicia A.; Austin, Christy R.

    2017-01-01

    Understanding the efficacy of evidence-based reading practices delivered in the Tier 1 (i.e. general classroom) setting is critical to successful implementation of multi-tiered systems, meeting a diverse range of student learning needs, and providing high quality reading instruction across content areas. This meta-analysis presents evidence on the…

  7. Recombination Events Involving the atp9 Gene Are Associated with Male Sterility of CMS PET2 in Sunflower

    PubMed Central

    Reddemann, Antje; Horn, Renate

    2018-01-01

    Cytoplasmic male sterility (CMS) systems represent ideal mutants for studying the role of mitochondria in pollen development. In sunflower, CMS PET2 also has the potential to become an alternative CMS source for commercial sunflower hybrid breeding. Like CMS PET1, CMS PET2 originates from an interspecific cross of H. petiolaris and H. annuus, but it results from a different CMS mechanism. Southern analyses revealed differences for atp6, atp9 and cob between CMS PET2, CMS PET1 and the male-fertile line HA89. A second, identical copy of atp6 was present on an additional CMS PET2-specific fragment. In addition, the atp9 gene was duplicated. This duplication was followed by an insertion of 271 bp of unknown origin in the 5′ coding region of the atp9 gene in CMS PET2, which led to the creation of two unique open reading frames, orf288 and orf231. The first 53 bp of orf288 are identical to the 5′ end of atp9. Apart from its first 3 bp, which belong to the 271-bp insertion, orf231 consists of the last 228 bp of atp9. These CMS PET2-specific orfs are co-transcribed. All 11 editing sites of the atp9 gene present in orf231 are fully edited. The anther-specific reduction of the co-transcript in fertility-restored hybrids supports its involvement in CMS PET2-based male sterility. PMID:29534485

  8. 12 CFR 509.103 - Civil money penalties.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...,500 12 U.S.C. 1467a(r)(3) Late/Inaccurate Reports—3rd Tier 1,375,000 12 U.S.C. 1817(j)(16)(A) Change in Control—1st Tier 7,500 12 U.S.C. 1817(j)(16)(B) Change in Control—2nd Tier 37,500 12 U.S.C. 1817(j.... 4012a(f) Flood Insurance 1 385 2 135,000 1 Per day. 2 Per year. [56 FR 38306, Aug. 12, 1991, as amended...

  9. EMERGING TECHNOLOGIES FOR THE MANAGEMENT AND UTILIZATION OF LANDFILL GAS

    EPA Science Inventory

    The report gives information on emerging technologies that are considered to be commercially available (Tier 1), currently undergoing research and development (Tier 2), or considered as potentially applicable (Tier 3) for the management of landfill gas (LFG) emissions or for the ...

  10. 6 CFR 27.220 - Tiering.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 6 Domestic Security 1 2014-01-01 2014-01-01 false Tiering. 27.220 Section 27.220 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical... Risk-Based Tiering. Following review of a covered facility's Security Vulnerability Assessment, the...

  11. 6 CFR 27.220 - Tiering.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false Tiering. 27.220 Section 27.220 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical... Risk-Based Tiering. Following review of a covered facility's Security Vulnerability Assessment, the...

  12. 6 CFR 27.220 - Tiering.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 6 Domestic Security 1 2011-01-01 2011-01-01 false Tiering. 27.220 Section 27.220 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical... Risk-Based Tiering. Following review of a covered facility's Security Vulnerability Assessment, the...

  13. 6 CFR 27.220 - Tiering.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false Tiering. 27.220 Section 27.220 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical... Risk-Based Tiering. Following review of a covered facility's Security Vulnerability Assessment, the...

  14. Telocytes and putative stem cells in ageing human heart

    PubMed Central

    Popescu, Laurentiu M; Curici, Antoanela; Wang, Enshi; Zhang, Hao; Hu, Shengshou; Gherghiceanu, Mihaela

    2015-01-01

    Traditionally, the mammalian heart has been considered to consist of about 70% non-myocytes (interstitial cells) and 30% cardiomyocytes (CMs). However, the presence of telocytes (TCs) was overlooked until they were described in 2010 (see http://www.telocytes.com), and the number of cardiac stem cells (CSCs) has not been accurately estimated in humans during ageing. We used electron microscopy to identify and estimate the number of cells in human atrial myocardium (appendages). Three age-related groups were studied: newborns (17 days–1 year), children (6–17 years) and adults (34–60 years). Morphometry was performed on low-magnification electron microscope images using computer-assisted technology. We found that the interstitial area gradually increases with age, from 31.3 ± 4.9% in newborns to 41 ± 5.2% in adults. The number of blood capillaries (per mm2) also increased by several hundred in children and adults versus newborns. CMs are the most numerous cells, representing 76% in newborns, 88% in children and 86% in adults. Images of CM mitoses were seen in the 17-day-old newborns. Interestingly, no lipofuscin granules were found in the CMs of human newborns and children. The percentages of cells occupying the interstitium were (depending on age): endothelial cells 52–62%; vascular smooth muscle cells and pericytes 22–28%; Schwann cells with nerve endings 6–7%; fibroblasts 3–10%; macrophages 1–8%; TCs about 1%; and stem cells less than 1%. We cannot confirm the popular belief that cardiac fibroblasts are the most prevalent cell type in the heart, accounting for about 20% of myocardial volume. Numerically, TCs represent a small fraction of human cardiac interstitial cells, but because of their extensive telopodes they form a 3D network that, for instance, supports CSCs. The (very) low capability of the myocardium to regenerate may be explained by the number of CSCs, which decreases fivefold with age (from 0.5% in newborns to 0.1% in adults). PMID:25545142

  15. Theoretical analysis of HVAC duct hanger systems

    NASA Technical Reports Server (NTRS)

    Miller, R. D.

    1987-01-01

    Several methods are presented which, together, may be used in the analysis of duct hanger systems over a wide range of frequencies. The finite element method (FEM) and component mode synthesis (CMS) method are used for low- to mid-frequency range computations and have been shown to yield reasonably close results. The statistical energy analysis (SEA) method yields predictions which agree with the CMS results for the 800 to 1000 Hz range provided that a sufficient number of modes participate. The CMS approach has been shown to yield valuable insight into the mid-frequency range of the analysis. It has been demonstrated that it is possible to conduct an analysis of a duct/hanger system in a cost-effective way for a wide frequency range, using several methods which overlap for several frequency bands.

  16. 78 FR 39730 - Privacy Act of 1974; CMS Computer Match No. 2013-11; HHS Computer Match No. 1302

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-02

    ... (Pub. L. 111-148), as amended by the Health Care and Education Reconciliation Act of 2010 (Pub. L. 111... 78 FR 32256 on May 29, 2013. Inclusive Dates of the Match: The CMP shall become effective no sooner...

  17. Framework for a clinical information system.

    PubMed

    Van De Velde, R; Lansiers, R; Antonissen, G

    2002-01-01

    The design and implementation of a Clinical Information System architecture are presented. This architecture has been developed and implemented from components, following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology are used, with centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and the reuse of both data and business logic. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.
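
    As a rough illustration of the n-tier separation the abstract describes, here is a minimal sketch: a data tier, a middle tier that enforces a clinical (business) rule, and a thin client that only calls the middle tier. The class names, the stored fields and the duplicate-prescription rule are invented placeholders, not details from the paper.

      # Minimal n-tier sketch (all names and the rule shown are assumptions)
      class PatientStore:
          """Back-end tier: holds clinical records."""
          def __init__(self):
              self._records = {}
          def get(self, patient_id):
              return self._records.setdefault(patient_id, {"meds": []})

      class ClinicalService:
          """Middle tier: applies business rules before touching the store."""
          def __init__(self, store):
              self.store = store
          def prescribe(self, patient_id, drug):
              record = self.store.get(patient_id)
              if drug in record["meds"]:          # example rule: no duplicates
                  raise ValueError("duplicate prescription")
              record["meds"].append(drug)
              return record

      service = ClinicalService(PatientStore())    # client tier uses the service only
      print(service.prescribe("p1", "colistin"))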

  18. Biopharmaceutical classification of drugs using intrinsic dissolution rate (IDR) and rat intestinal permeability.

    PubMed

    Zakeri-Milani, Parvin; Barzegar-Jalali, Mohammad; Azimi, Mandana; Valizadeh, Hadi

    2009-09-01

    The solubility and dissolution rate of active ingredients are of major importance in preformulation studies of pharmaceutical dosage forms. In the present study, passively absorbed drugs are classified based on their intrinsic dissolution rate (IDR) and their intestinal permeabilities. IDR was determined by measuring the dissolution of a non-disintegrating disk of drug, and the effective intestinal permeability of the tested drugs in rat jejunum was determined using the single perfusion technique. The obtained intrinsic dissolution rate values were in the range of 0.035-56.8 mg/min/cm² for the tested drugs. The minimum and maximum intestinal permeabilities in rat intestine were determined to be 1.6 × 10⁻⁵ and 2 × 10⁻⁴ cm/s, respectively. Four classes of drugs were defined: Category I: P(eff,rat) > 5 × 10⁻⁵ cm/s or P(eff,human) > 4.7 × 10⁻⁵ cm/s, and IDR > 1 mg/min/cm²; Category II: P(eff,rat) > 5 × 10⁻⁵ cm/s or P(eff,human) > 4.7 × 10⁻⁵ cm/s, and IDR < 1 mg/min/cm²; Category III: P(eff,rat) < 5 × 10⁻⁵ cm/s or P(eff,human) < 4.7 × 10⁻⁵ cm/s, and IDR > 1 mg/min/cm²; Category IV: P(eff,rat) < 5 × 10⁻⁵ cm/s or P(eff,human) < 4.7 × 10⁻⁵ cm/s, and IDR < 1 mg/min/cm². According to these results and the proposed classification, it is concluded that drugs can be categorized correctly based on their IDR and intestinal permeability values.
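
    The four categories above reduce to two threshold tests. The sketch below encodes them using the rat permeability cutoff (5 × 10⁻⁵ cm/s) and the IDR cutoff (1 mg/min/cm²) stated in the abstract; how to handle values exactly at a cutoff is an assumption, and the example inputs are invented.

      # Four-way IDR/permeability classification (sketch of the stated cutoffs)
      def classify(p_eff_rat_cm_s, idr_mg_min_cm2):
          high_perm = p_eff_rat_cm_s > 5e-5     # rat permeability cutoff
          high_idr = idr_mg_min_cm2 > 1.0       # intrinsic dissolution cutoff
          if high_perm and high_idr:
              return "Category I"
          if high_perm:
              return "Category II"
          if high_idr:
              return "Category III"
          return "Category IV"

      print(classify(1.2e-4, 3.5))    # -> Category I  (illustrative values)
      print(classify(2.0e-5, 0.04))   # -> Category IV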

  19. 12 CFR 390.74 - Civil money penalties.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... 1467a(i)(3) Holding Company Act Violation 32,500 12 U.S.C. 1467a(r)(1) Late/Inaccurate Reports—1st Tier 2,200 12 U.S.C. 1467a(r)(2) Late/Inaccurate Reports—2nd Tier 32,500 12 U.S.C. 1467a(r)(3) Late...) Violation of Law or Unsafe or Unsound Practice—3rd Tier 1,375,000 12 U.S.C. 1820(k)(6)(A)(ii) Violation of...

  20. 12 CFR 109.103 - Civil money penalties.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... 1467a(r)(1) Late/Inaccurate Reports—1st Tier 2,200 12 U.S.C. 1467a(r)(2) Late/Inaccurate Reports—2nd Tier 32,500 12 U.S.C. 1467a(r)(3) Late/Inaccurate Reports—3rd Tier 1,375,000 12 U.S.C. 1817(j)(16)(A...,375,000 12 U.S.C. 1820(k)(6)(A)(ii) Violation of Post Employment Restrictions 275,000 12 U.S.C. 1884...

  1. 42 CFR 422.510 - Termination of contract by CMS.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: (1) Termination of contract by CMS. (i) CMS notifies the MA organization in writing 90 days before... organization; or (B) The MA organization experiences financial difficulties so severe that its ability to make...) of this section. (ii) CMS notifies the MA organization in writing that its contract will be...

  2. 42 CFR 422.510 - Termination of contract by CMS.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: (1) Termination of contract by CMS. (i) CMS notifies the MA organization in writing 90 days before... organization; or (B) The MA organization experiences financial difficulties so severe that its ability to make...) of this section. (ii) CMS notifies the MA organization in writing that its contract will be...

  3. 42 CFR 403.248 - Administrative review of CMS determinations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Administrative review of CMS determinations. 403... Certification Program: General Provisions § 403.248 Administrative review of CMS determinations. (a) This section provides for administrative review if CMS determines— (1) Not to certify a policy; or (2) That a...

  4. What Is the Evidence Base to Support Reading Interventions for Improving Student Outcomes in Grades 1-3? REL 2017-271

    ERIC Educational Resources Information Center

    Gersten, Russell; Newman-Gonchar, Rebecca; Haymond, Kelly S.; Dimino, Joseph

    2017-01-01

    Response to intervention (RTI) is a comprehensive early detection and prevention strategy used to identify and support struggling students before they fall behind. An RTI model usually has three tiers or levels of support. Tier 1 is generally defined as classroom instruction provided to all students, tier 2 is typically a preventive intervention…

  5. 34 CFR 75.224 - What are the procedures for using a multiple tier review process to evaluate applications?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false What are the procedures for using a multiple tier... applications received. (d) The Secretary may, in any tier— (1) Use more than one group of experts to gain... procedures for using a multiple tier review process to evaluate applications? (a) The Secretary may use a...

  6. Congenital myasthenic syndromes in Turkey: Clinical clues and prognosis with long term follow-up.

    PubMed

    Durmus, Hacer; Shen, Xin-Ming; Serdaroglu-Oflazer, Piraye; Kara, Bulent; Parman-Gulsen, Yesim; Ozdemir, Coskun; Brengman, Joan; Deymeer, Feza; Engel, Andrew G

    2018-04-01

    Congenital myasthenic syndromes (CMS) are a group of hereditary disorders affecting the neuromuscular junction. Here, we present clinical, electrophysiological and genetic findings of 69 patients from 51 unrelated kinships from Turkey. Genetic tests of 60 patients were performed at Mayo Clinic. Median follow-up time was 9.8 years (range 1-22 years). The most common CMS was primary acetylcholine receptor (AChR) deficiency (31/51) and the most common mutations in AChR were c.1219 + 2T > G (12/51) and c.1327delG (6/51) in CHRNE. Four of our 5 kinships with AChE deficiency carried p.W148X that truncates the collagen domain of COLQ, and was previously reported only in patients from Turkey. These were followed by GFPT1 deficiency (4/51), DOK7 deficiency (3/51), slow channel CMS (3/51), fast channel CMS (3/51), choline acetyltransferase deficiency (1/51) and a CMS associated with desmin deficiency (1/51). Distribution of muscle weakness was sometimes useful in giving a clue to the CMS subtype. Presence of repetitive compound muscle action potentials pointed to AChE deficiency or slow channel CMS. Our experience confirms that one needs to be cautious using pyridostigmine, since it can worsen some types of CMS. Ephedrine/salbutamol were very effective in AChE and DOK7 deficiencies and were useful as adjuncts in other types of CMS. Long follow-up gave us a chance to assess progression of the disease, and to witness 12 mainly uneventful pregnancies in 8 patients. In this study, we describe some new phenotypes and detail the clinical features of the well-known CMS.

  7. Temperature Calculations in the Coastal Modeling System

    DTIC Science & Technology

    2017-04-01

    tide) and river discharge at model boundaries, wave radiation stress, and wind forcing over a model computational domain. Physical processes calculated...calculated in the CMS using the following meteorological parameters: solar radiation, cloud cover, air temperature, wind speed, and surface water temperature...during a clear (i.e., cloudless) sky (Wm-2); CLDC is the cloud cover fraction (0-1.0); SWR is the surface reflection coefficient; and SHDf is the
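
    The truncated snippet above names the inputs of the surface heat-flux calculation: clear-sky solar radiation, the cloud cover fraction CLDC, and a surface reflection coefficient SWR. A common empirical form for combining them is sketched below; the (1 - 0.65·CLDC²) attenuation factor is a widely used parameterization assumed here for illustration, and may differ from the exact expression in the CMS documentation.

      # Net shortwave at the water surface (sketch; 0.65 coefficient assumed)
      def net_shortwave(clear_sky_wm2, cldc, swr_reflect):
          cloud_factor = 1.0 - 0.65 * cldc ** 2   # empirical cloud attenuation
          return clear_sky_wm2 * cloud_factor * (1.0 - swr_reflect)

      # 800 W/m2 clear-sky flux, half cloud cover, 6% surface reflection
      print(net_shortwave(800.0, cldc=0.5, swr_reflect=0.06), "W/m2")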

  8. Global transcriptomic analysis of induced cardiomyocytes predicts novel regulators for direct cardiac reprogramming.

    PubMed

    Talkhabi, Mahmood; Razavi, Seyed Morteza; Salari, Ali

    2017-06-01

    Heart diseases are the most significant cause of morbidity and mortality in the world. De novo generated cardiomyocytes (CMs) are a great cellular source for cell-based therapy and other potential applications. Direct cardiac reprogramming is the newest method to produce CMs, known as induced cardiomyocytes (iCMs). During direct cardiac reprogramming, also known as transdifferentiation, non-cardiac differentiated adult cells are reprogrammed to a cardiac identity by forced expression of cardiac-specific transcription factors (TFs) or microRNAs. To this end, many different combinations of TFs (±microRNAs) have been reported for direct reprogramming of mouse or human fibroblasts to iCMs, although their efficiencies remain very low. It seems that the investigated TFs and microRNAs are not sufficient for efficient direct cardiac reprogramming, and other cardiac-specific factors may be required to increase iCM production efficiency as well as the quality of iCMs. Here, we analyzed gene expression data of cardiac fibroblasts (CFs), iCMs and adult cardiomyocytes (aCMs). The up-regulated and down-regulated genes in CMs (aCMs and iCMs) were designated as CM-specific and CF-specific genes, respectively. Among the CM-specific genes, we found 153 transcriptional activators, including some cardiac and non-cardiac TFs that potentially activate the expression of CM-specific genes. We also identified 85 protein kinases, such as protein kinase D1 (PKD1), protein kinase A (PRKA), calcium/calmodulin-dependent protein kinase (CAMK), protein kinase C (PRKC), and insulin-like growth factor 1 receptor (IGF1R), that are strongly involved in establishing CM identity. A CM gene regulatory network constructed using protein kinases, transcriptional activators and intermediate proteins predicted some new transcriptional activators, such as myocyte enhancer factor 2A (MEF2A) and peroxisome proliferator-activated receptor gamma coactivator 1 alpha (PPARGC1A), which may be required for qualitatively and quantitatively efficient direct cardiac reprogramming. Taken together, this study provides new insights into the complexity of cell fate conversion and a better understanding of the roles of transcriptional activators, signaling pathways and protein kinases in increasing the efficiency of direct cardiac reprogramming and the maturity of iCMs.
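
    The screening step described above, labeling genes as CM-specific (up-regulated in CMs versus CFs) or CF-specific (down-regulated), amounts to a fold-change filter. The sketch below shows one plausible form of it; the 2-fold cutoff and the example values are assumptions, not the study's actual thresholds or data.

      # Fold-change screen for CM- vs CF-specific genes (sketch; cutoff assumed)
      def classify_gene(log2_fc_cm_vs_cf, cutoff=1.0):
          if log2_fc_cm_vs_cf >= cutoff:
              return "CM-specific"
          if log2_fc_cm_vs_cf <= -cutoff:
              return "CF-specific"
          return "unclassified"

      # gene symbols from the abstract, fold changes invented for illustration
      for gene, fc in [("Mef2a", 2.3), ("Ppargc1a", 1.7), ("Col1a1", -2.9)]:
          print(gene, "->", classify_gene(fc))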

  9. Stability of colistin methanesulfonate in pharmaceutical products and solutions for administration to patients.

    PubMed

    Wallace, Stephanie J; Li, Jian; Rayner, Craig R; Coulthard, Kingsley; Nation, Roger L

    2008-09-01

    Colistin methanesulfonate (CMS) has the potential to hydrolyze in aqueous solution to liberate colistin, its microbiologically active and more toxic parent compound. While conversion of CMS to colistin in vivo is important for bactericidal activity, liberation of colistin during storage and/or use of pharmaceutical formulations may potentiate the toxicity of CMS. To date, there has been no information available regarding the stability of CMS in pharmaceutical preparations. Two commercial CMS formulations were investigated for stability with respect to colistin content, which was measured by a specific high-performance liquid chromatography method. Coly-Mycin M Parenteral (colistimethate lyophilized powder) was stable (<0.1% of CMS present as colistin) for at least 20 weeks at 4°C and 25°C at 60% relative humidity. When Coly-Mycin M was reconstituted with 2 ml of water to a CMS concentration of 200 mg/ml for injection, Coly-Mycin M was stable (<0.1% colistin formed) for at least 7 days at both 4°C and 25°C. When further diluted to 4 mg/ml in a glucose (5%) or saline (0.9%) infusion solution as directed, CMS hydrolyzed faster at 25°C (<4% colistin formed after 48 h) than at 4°C (0.3% colistin formed). The second formulation, CMS Solution for Inhalation (77.5 mg/ml), was stable at 4°C and 25°C for at least 12 months, as determined based on colistin content (<0.1%). This study demonstrated the concentration- and temperature-dependent hydrolysis of CMS. The information provided by this study has important implications for the formulation and clinical use of CMS products.

  10. Substantial Targeting Advantage Achieved by Pulmonary Administration of Colistin Methanesulfonate in a Large-Animal Model

    PubMed Central

    Nguyen, Tri-Hung; Lieu, Linh Thuy; Nguyen, Gary; Bischof, Robert J.; Meeusen, Els N.; Li, Jian; Nation, Roger L.

    2016-01-01

    ABSTRACT Colistin, administered as its inactive prodrug colistin methanesulfonate (CMS), is often used in multidrug-resistant Gram-negative pulmonary infections. The CMS and colistin pharmacokinetics in plasma and epithelial lining fluid (ELF) following intravenous and pulmonary dosing have not been evaluated in a large-animal model with pulmonary architecture similar to that of humans. Six merino sheep (34 to 43 kg body weight) received an intravenous or pulmonary dose of 4 to 8 mg/kg CMS (sodium) or 2 to 3 mg/kg colistin (sulfate) in a 4-way crossover study. Pulmonary dosing was achieved via jet nebulization through an endotracheal tube cuff. CMS and colistin were quantified in plasma and bronchoalveolar lavage fluid (BALF) samples by high-performance liquid chromatography (HPLC). ELF concentrations were calculated via the urea method. CMS and colistin were comodeled in S-ADAPT. Following intravenous CMS or colistin administration, no concentrations were quantifiable in BALF samples. Elimination clearance was 1.97 liters/h (4% interindividual variability) for CMS (other than conversion to colistin) and 1.08 liters/h (25%) for colistin. On average, 18% of a CMS dose was converted to colistin. Following pulmonary delivery, colistin was not quantifiable in plasma and CMS was detected in only one sheep. Average ELF concentrations (standard deviations [SD]) of formed colistin were 400 (243), 384 (187), and 184 (190) mg/liter at 1, 4, and 24 h after pulmonary CMS administration. The population pharmacokinetic model described well CMS and colistin in plasma and ELF following intravenous and pulmonary administration. Pulmonary dosing provided high ELF and low plasma colistin concentrations, representing a substantial targeting advantage over intravenous administration. Predictions from the pharmacokinetic model indicate that sheep are an advantageous model for translational research. PMID:27821445
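
    The ELF concentrations above were obtained with the urea dilution method, which corrects bronchoalveolar lavage fluid (BALF) concentrations for the saline dilution of the lavage: C_ELF = C_BALF × Urea_plasma / Urea_BALF. The sketch below applies that formula; the numerical inputs are illustrative, not data from the study.

      # Urea dilution correction from BALF to ELF concentration (sketch)
      def elf_concentration(c_balf, urea_plasma, urea_balf):
          """Scale a BALF drug concentration by the plasma/BALF urea ratio."""
          return c_balf * urea_plasma / urea_balf

      # e.g. 8 mg/L in BALF with a 50-fold urea dilution -> 400 mg/L in ELF
      print(elf_concentration(c_balf=8.0, urea_plasma=5.5, urea_balf=0.11))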

  11. Stability of Colistin Methanesulfonate in Pharmaceutical Products and Solutions for Administration to Patients

    PubMed Central

    Wallace, Stephanie J.; Li, Jian; Rayner, Craig. R.; Coulthard, Kingsley; Nation, Roger L.

    2008-01-01

    Colistin methanesulfonate (CMS) has the potential to hydrolyze in aqueous solution to liberate colistin, its microbiologically active and more toxic parent compound. While conversion of CMS to colistin in vivo is important for bactericidal activity, liberation of colistin during storage and/or use of pharmaceutical formulations may potentiate the toxicity of CMS. To date, there has been no information available regarding the stability of CMS in pharmaceutical preparations. Two commercial CMS formulations were investigated for stability with respect to colistin content, which was measured by a specific high-performance liquid chromatography method. Coly-Mycin M Parenteral (colistimethate lyophilized powder) was stable (<0.1% of CMS present as colistin) for at least 20 weeks at 4°C and 25°C at 60% relative humidity. When Coly-Mycin M was reconstituted with 2 ml of water to a CMS concentration of 200 mg/ml for injection, Coly-Mycin M was stable (<0.1% colistin formed) for at least 7 days at both 4°C and 25°C. When further diluted to 4 mg/ml in a glucose (5%) or saline (0.9%) infusion solution as directed, CMS hydrolyzed faster at 25°C (<4% colistin formed after 48 h) than at 4°C (0.3% colistin formed). The second formulation, CMS Solution for Inhalation (77.5 mg/ml), was stable at 4°C and 25°C for at least 12 months, as determined based on colistin content (<0.1%). This study demonstrated the concentration- and temperature-dependent hydrolysis of CMS. The information provided by this study has important implications for the formulation and clinical use of CMS products. PMID:18606838

  12. Molecular constituents of the extracellular matrix in rat liver mounting a hepatic progenitor cell response for tissue repair

    PubMed Central

    2013-01-01

    Background Tissue repair in the adult mammalian liver occurs in two distinct processes, referred to as the first and second tiers of defense. We undertook to characterize the changes in molecular constituents of the extracellular matrix when hepatic progenitor cells (HPCs) respond in a second tier of defense to liver injury. Results We used transcriptional profiling on rat livers responding by a first tier (surgical removal of 70% of the liver mass (PHx protocol)) and a second tier (70% hepatectomy combined with exposure to 2-acetylaminofluorene (AAF/PHx protocol)) of defense to liver injury and compared the transcriptional signatures in untreated rat liver (control) with those from livers of day 1, day 5 and day 9 post hepatectomy in both protocols. Numerous transcripts encoding specific subunits of collagens, laminins, integrins, and various other extracellular matrix structural components were differentially up- or down-modulated (P < 0.01). The levels of a number of transcripts were significantly up-modulated, mainly in the second tier of defense (Agrn, Bgn, Fbn1, Col4a1, Col8a1, Col9a3, Lama5, Lamb1, Lamb2, Itga4, Igtb2, Itgb4, Itgb6, Nid2), and their signal intensities showed a strong or very strong correlation with Krt1-19, a well-established marker of a ductular/HPC reaction. Furthermore, a significant up-modulation and very strong correlation between the transcriptional profiles of Krt1-19 and St14 encoding matriptase, a component of a novel protease system, was found in the second tier of defense. Real-time PCR confirmed the modulation of St14 transcript levels and strong correlation to Krt-19 and also showed a significant up-modulation and strong correlation to Spint1 encoding HAI-1, a cognate inhibitor of matriptase. Immunodetection and three-dimensional reconstructions showed that laminin, Collagen1a1, agrin and nidogen1 surrounded bile ducts, proliferating cholangiocytes, and HPCs in ductular reactions regardless of the nature of defense. Similarly, matriptase and HAI-1 were expressed in cholangiocytes regardless of the tier of defense, but in the second tier of defense, a subpopulation of HPCs in ductular reactions co-expressed HAI-1 and the fetal hepatocyte marker Dlk1. Conclusion Transcriptional profiling and immunodetection, including three-dimensional reconstruction, generated a detailed overview of the extracellular matrix constituents expressed in a second tier of defense to liver injury. PMID:24359594

  13. From Early Embryonic to Adult Stage: Comparative Study of Action Potentials of Native and Pluripotent Stem Cell-Derived Cardiomyocytes.

    PubMed

    Peinkofer, Gabriel; Burkert, Karsten; Urban, Katja; Krausgrill, Benjamin; Hescheler, Jürgen; Saric, Tomo; Halbach, Marcel

    2016-10-01

    Cardiomyocytes (CMs) derived from induced pluripotent stem cells (iPS-CMs) are promising candidates for cell therapy, drug screening, and developmental studies. It is known that iPS-CMs possess immature electrophysiological properties, but an exact characterization of their developmental stage and subtype differentiation is hampered by a lack of knowledge of electrophysiological properties of native CMs from different developmental stages and origins within the heart. Thus, we sought to systematically investigate action potential (AP) properties of native murine CMs and to establish a database that allows classification of stem cell-derived CMs. Hearts from 129S2PasCrl mice were harvested at days 9-10, 12-14, and 16-18 postcoitum, as well as 1 day, 3-4 days, 1-2 weeks, 3-4 weeks, and 6 weeks postpartum. AP recordings in left and right atria and at apical, medial, and basal left and right ventricles were performed with sharp glass microelectrodes. Measurements revealed significant changes in AP morphology during pre- and postnatal murine development and significant differences between atria and ventricles, enabling a classification of developmental stage and subtype differentiation of stem cell-derived CMs based on their AP properties. For iPS-CMs derived from cell line TiB7.4, a typical ventricular phenotype was demonstrated at later developmental stages, while there were electrophysiological differences from atrial as well as ventricular native CMs at earlier stages. This finding supports that iPS-CMs can develop AP properties similar to native CMs, but points to differences in the maturation process between iPS-CMs and native CMs, which may be explained by dissimilar conditions during in vitro differentiation and in vivo development.

  14. 78 FR 38074 - Announcement Regarding a Change in Eligibility for Unemployment Insurance (UI) Claimants in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-25

    Announcement regarding a change in eligibility for Unemployment Insurance (UI) claimants in Alabama, Alaska, Delaware, Illinois, Louisiana, Michigan, Mississippi, Ohio, the Virgin Islands and Wisconsin in the Emergency Unemployment Compensation (EUC08) program and the Federal-State Extended Benefits (EB) program. The U.S. Department of Labor (Department) produces trigger notices indicating which states qualify for both EB and EUC08 benefits, and provides the beginning and ending dates of payable periods for each qualifying state. The trigger notices covering state eligibility for these programs can be found at: http://ows.doleta.gov/unemploy/claims--arch.asp. The following changes have occurred since the publication of the last notice regarding states' EUC08 and EB trigger status:

    Alabama's trigger value has fallen below the 7.0% threshold, and the state has triggered "off" Tier 3 of EUC08. Based on data released by the Bureau of Labor Statistics on March 18, 2013, the three-month average, seasonally adjusted total unemployment rate (TUR) in Alabama was 6.9%, falling below the 7.0% trigger threshold necessary to remain "on" Tier 3 of EUC08. The week ending April 13, 2013, was the last week in which EUC08 claimants in Alabama could exhaust Tier 2 and establish Tier 3 eligibility. Under the phase-out provisions, claimants could receive any remaining entitlement they had for Tier 3 after April 13, 2013.

    Alaska's insured unemployment rate (IUR) has fallen below the 6.0% trigger threshold, and the state has triggered "off" EB. Based on data from Alaska for the week ending April 13, 2013, the 13-week IUR in Alaska fell below the 6.0% trigger threshold necessary to remain "on" EB. The payable period in EB for Alaska ended May 4, 2013.

    Alaska's IUR has also fallen below the 6.0% trigger threshold for Tier 4 of EUC08, and the state has triggered "off" Tier 4. Based on data from Alaska for the week ending April 13, 2013, the 13-week IUR in Alaska fell below the 6.0% trigger threshold necessary to remain "on" Tier 4 of EUC08. The week ending May 4, 2013, was the last week in which EUC08 claimants in Alaska could exhaust Tier 3 and establish Tier 4 eligibility. Under the phase-out provisions, claimants could receive any remaining entitlement they had for Tier 4 after May 4, 2013.

    Delaware's trigger value exceeds the 7.0% trigger threshold, and the state has triggered "on" Tier 3 of EUC08. Based on data released by the Bureau of Labor Statistics on March 18, 2013, the three-month average, seasonally adjusted TUR in Delaware was 7.1%, exceeding the 7.0% threshold necessary to trigger "on" Tier 3 of EUC08. The week beginning April 7, 2013, was the first week in which EUC08 claimants in Delaware who had exhausted Tier 2, and were otherwise eligible, could establish Tier 3 eligibility.

    Illinois' trigger value met the 9.0% trigger threshold, and the state has triggered "on" Tier 4 of EUC08. Based on data released by the Bureau of Labor Statistics on March 29, 2013, the three-month average, seasonally adjusted TUR in Illinois met the 9.0% trigger threshold to trigger "on" Tier 4 of EUC08. The week beginning April 14, 2013, was the first week in which EUC08 claimants in Illinois who had exhausted Tier 3, and were otherwise eligible, could establish Tier 4 eligibility.

    Louisiana's trigger value has fallen below the 6.0% trigger threshold, and the state has triggered "off" Tier 2 of EUC08. Based on data released by the Bureau of Labor Statistics on March 18, 2013, the three-month average, seasonally adjusted TUR in Louisiana was 5.8%, falling below the 6.0% trigger threshold necessary to remain "on" Tier 2 of EUC08. The week ending April 13, 2013, was the last week in which EUC08 claimants in Louisiana could exhaust Tier 1 and establish Tier 2 eligibility. Under the phase-out provisions, claimants could receive any remaining entitlement they had in Tier 2 after April 13, 2013.

    Michigan's trigger value has fallen below the 9.0% trigger threshold, and the state has triggered "off" Tier 4 of EUC08. Based on data released by the Bureau of Labor Statistics on March 18, 2013, the three-month average, seasonally adjusted TUR for Michigan was 8.9%, falling below the 9.0% trigger threshold necessary to remain "on" Tier 4 of EUC08. The week ending April 13, 2013, was the last week in which EUC08 claimants in Michigan could exhaust Tier 3 and establish Tier 4 eligibility. Under the phase-out provisions, claimants could receive any remaining entitlement they had in Tier 4 after April 13, 2013.

    Mississippi's trigger value exceeds the 9.0% trigger threshold, and the state has triggered "on" Tier 4 of EUC08. Based on data released by the Bureau of Labor Statistics on March 29, 2013, the three-month average, seasonally adjusted TUR in Mississippi was 9.3%, exceeding the 9.0% trigger threshold to trigger "on" Tier 4 of EUC08. The week beginning April 14, 2013, was the first week in which EUC08 claimants in Mississippi who had exhausted Tier 3, and were otherwise eligible, could establish Tier 4 eligibility.

    Ohio's trigger value met the 7.0% trigger threshold, and the state has triggered "on" Tier 3 of EUC08. Based on data released by the Bureau of Labor Statistics on April 19, 2013, the three-month average, seasonally adjusted total unemployment rate in Ohio met the 7.0% trigger threshold to trigger "on" Tier 3 of EUC08. The week beginning May 5, 2013, was the first week in which EUC08 claimants in Ohio who had exhausted Tier 2, and were otherwise eligible, could establish Tier 3 eligibility.

    The Virgin Islands' estimated trigger rate fell below the 6.0% threshold, and the territory has triggered "off" both Tier 2 and Tier 3 of EUC08. Based on data released by the Bureau of Labor Statistics on March 8, 2013, the estimated three-month average, seasonally adjusted TUR in the Virgin Islands fell below the 6.0% trigger threshold necessary to remain "on" both Tier 2 and Tier 3 of EUC08. The week ending March 30, 2013, was the last week in which EUC08 claimants in the Virgin Islands could exhaust Tier 1 and establish Tier 2 eligibility, or exhaust Tier 2 and establish Tier 3 eligibility.

    Wisconsin's trigger value met the 7.0% threshold and the state has triggered "on" Tier 3 of EUC08; however, a mandatory 13-week "off" period delayed the effective date. Based on data released by the Bureau of Labor Statistics on April 19, 2013, the three-month average, seasonally adjusted TUR for Wisconsin met the 7.0% trigger threshold to trigger "on" Tier 3 of EUC08. However, Wisconsin was in a 13-week mandatory "off" period that started February 9, 2013, and did not conclude until May 11, 2013. As a result, Wisconsin remained in an "off" period for Tier 3 of EUC08 through May 11, 2013, and triggered "on" Tier 3 of EUC08 effective May 12, 2013. The week beginning May 12, 2013, was the first week in which EUC08 claimants in Wisconsin who had exhausted Tier 2, and were otherwise eligible, could establish Tier 3 eligibility.
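
    Reading across the state entries above, the TUR thresholds form a simple ladder: 6.0% for Tier 2, 7.0% for Tier 3, and 9.0% for Tier 4. The sketch below encodes just that ladder for intuition; it deliberately ignores the IUR-based triggers (Alaska), the territory-specific handling of the Virgin Islands, the phase-out provisions, and the mandatory 13-week "off" period.

      # EUC08 TUR trigger ladder as it appears in this notice (simplified sketch)
      def highest_euc08_tier(tur_percent):
          if tur_percent >= 9.0:
              return 4
          if tur_percent >= 7.0:
              return 3
          if tur_percent >= 6.0:
              return 2
          return 1    # Tier 1 carried no TUR trigger in this notice

      for state, tur in [("Alabama", 6.9), ("Delaware", 7.1), ("Mississippi", 9.3)]:
          print(state, "-> Tier", highest_euc08_tier(tur))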

  15. Effect of electronic prescribing with formulary decision support on medication tier, copayments, and adherence

    PubMed Central

    2014-01-01

    Background Medication non-adherence is prevalent. We assessed the effect of electronic prescribing (e-prescribing) with formulary decision support on preferred formulary tier usage, copayment, and concomitant adherence. Methods We retrospectively analyzed 14,682 initial pharmaceutical claims for angiotensin receptor blocker and inhaled steroid medications among 14,410 patients of 2189 primary care physicians (PCPs) who were offered e-prescribing with formulary decision support, including 297 PCPs who adopted it. Formulary decision support was initially non-interruptive, such that formulary tier symbols were displayed adjacent to medication names. Subsequently, interruptive formulary decision support alerts also interrupted e-prescribing when preferred-tier alternatives were available. A difference in differences design was used to compare the pre-post differences in medication tier for each new prescription attributed to non-adopters, low user (<30% usage rate), and high user PCPs (>30% usage rate). Second, we modeled the effect of formulary tier on prescription copayment. Last, we modeled the effect of copayment on adherence (proportion of days covered) to each new medication. Results Compared with non-adopters, high users of e-prescribing were more likely to prescribe preferred-tier medications (vs. non-preferred tier) when both non-interruptive and interruptive formulary decision support were in place (OR 1.9 [95% CI 1.0-3.4], p = 0.04), but no more likely to prescribe preferred-tier when only non-interruptive formulary decision support was in place (p = 0.90). Preferred-tier claims had only slightly lower mean monthly copayments than non-preferred tier claims (angiotensin receptor blocker: $10.60 versus $11.81, inhaled steroid: $14.86 versus $16.42, p < 0.0001). Medication possession ratio was 8% lower for each $1.00 increase in monthly copayment to the one quarter power (p < 0.0001). However, we detected no significant direct association between formulary decision support usage and adherence. Conclusion Interruptive formulary decision support shifted prescribing toward preferred tiers, but these medications were only minimally less expensive in the studied patient population. In this context, formulary decision support did not significantly increase adherence. To impact cost-related non-adherence, formulary decision support will likely need to be paired with complementary drug benefit design. Formulary decision support should be studied further, with particular attention to its effect on adherence in the setting of different benefit designs. PMID:25167807
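
    The adherence finding above, an 8% relative drop in medication possession ratio (MPR) per unit increase in monthly copayment raised to the one-quarter power, can be turned into a rough predictor. The functional form below is one plausible reading of that statement, and the baseline MPR is an assumption; neither comes from the paper's actual model.

      # Copayment -> predicted adherence, under an assumed functional form
      def predicted_mpr(copay_usd, baseline_mpr=0.80):
          """8% relative MPR reduction per unit of copay**0.25 (interpretation)."""
          return baseline_mpr * (1.0 - 0.08) ** (copay_usd ** 0.25)

      for copay in (0, 5, 10, 15, 20):
          print(f"${copay:>2}/month -> predicted MPR {predicted_mpr(copay):.2f}")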

  16. Effect of electronic prescribing with formulary decision support on medication tier, copayments, and adherence.

    PubMed

    Pevnick, Joshua M; Li, Ning; Asch, Steven M; Jackevicius, Cynthia A; Bell, Douglas S

    2014-08-28

    Medication non-adherence is prevalent. We assessed the effect of electronic prescribing (e-prescribing) with formulary decision support on preferred formulary tier usage, copayment, and concomitant adherence. We retrospectively analyzed 14,682 initial pharmaceutical claims for angiotensin receptor blocker and inhaled steroid medications among 14,410 patients of 2189 primary care physicians (PCPs) who were offered e-prescribing with formulary decision support, including 297 PCPs who adopted it. Formulary decision support was initially non-interruptive, such that formulary tier symbols were displayed adjacent to medication names. Subsequently, interruptive formulary decision support alerts also interrupted e-prescribing when preferred-tier alternatives were available. A difference in differences design was used to compare the pre-post differences in medication tier for each new prescription attributed to non-adopters, low user (<30% usage rate), and high user PCPs (>30% usage rate). Second, we modeled the effect of formulary tier on prescription copayment. Last, we modeled the effect of copayment on adherence (proportion of days covered) to each new medication. Compared with non-adopters, high users of e-prescribing were more likely to prescribe preferred-tier medications (vs. non-preferred tier) when both non-interruptive and interruptive formulary decision support were in place (OR 1.9 [95% CI 1.0-3.4], p = 0.04), but no more likely to prescribe preferred-tier when only non-interruptive formulary decision support was in place (p = 0.90). Preferred-tier claims had only slightly lower mean monthly copayments than non-preferred tier claims (angiotensin receptor blocker: $10.60 versus $11.81, inhaled steroid: $14.86 versus $16.42, p < 0.0001). Medication possession ratio was 8% lower for each $1.00 increase in monthly copayment to the one quarter power (p < 0.0001). However, we detected no significant direct association between formulary decision support usage and adherence. Interruptive formulary decision support shifted prescribing toward preferred tiers, but these medications were only minimally less expensive in the studied patient population. In this context, formulary decision support did not significantly increase adherence. To impact cost-related non-adherence, formulary decision support will likely need to be paired with complementary drug benefit design. Formulary decision support should be studied further, with particular attention to its effect on adherence in the setting of different benefit designs.

  17. Pharmacokinetics of colistin and colistimethate sodium after a single 80-mg intravenous dose of CMS in young healthy volunteers.

    PubMed

    Couet, W; Grégoire, N; Gobin, P; Saulnier, P J; Frasca, D; Marchand, S; Mimoz, O

    2011-06-01

    Colistin pharmacokinetics (PK) was investigated in young healthy volunteers after a 1-h infusion of 80 mg (1 million international units (MIU)) of the prodrug colistin methanesulfonate (CMS). Concentration levels of CMS and colistin were determined in plasma and urine using a new chromatographic assay and analyzed simultaneously with a population approach after correcting the urine-related data for postexcretion hydrolysis of CMS into colistin. CMS and colistin have low volumes of distribution (14.0 and 12.4 liters, respectively), consistent with distribution being restricted to extracellular fluid. CMS is mainly excreted unchanged in urine (70% on average), with a typical renal clearance estimated at 103 ml/min, close to the glomerular filtration rate. Colistin elimination is essentially extrarenal, given that its renal clearance is 1.9 ml/min, consistent with extensive reabsorption. Colistin elimination is not limited by the formation rate, because its half-life (3 h) is longer than that of CMS. The values of these pharmacokinetic parameters will serve as reference points for future comparisons with patients' data.
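
    A back-of-envelope simulation can make the reported disposition concrete. The toy model below takes the stated values (V = 14.0 and 12.4 liters, renal CMS clearance 103 ml/min, 70% excreted unchanged, colistin half-life 3 h) and, as a simplifying assumption, treats all non-renal CMS clearance as conversion to colistin and the 1-h infusion as a bolus. It is a sketch for intuition, not the published population model.

      # Toy CMS -> colistin disposition model (simplifying assumptions noted above)
      import numpy as np
      from scipy.integrate import odeint

      V_cms, V_col = 14.0, 12.4              # volumes of distribution (liters)
      CL_renal = 0.103 * 60                  # 103 ml/min -> 6.18 L/h
      CL_total = CL_renal / 0.70             # if 70% of CMS is excreted unchanged
      CL_conv = CL_total - CL_renal          # remainder treated as conversion
      k_col = np.log(2) / 3.0                # colistin half-life ~3 h

      def model(y, t):
          a_cms, a_col = y                   # amounts in each compartment (mg)
          da_cms = -(CL_total / V_cms) * a_cms
          da_col = (CL_conv / V_cms) * a_cms - k_col * a_col
          return [da_cms, da_col]

      t = np.linspace(0.0, 12.0, 121)                # hours post dose
      sol = odeint(model, [80.0, 0.0], t)            # 80 mg CMS, bolus approximation
      print("peak colistin conc (mg/L):", (sol[:, 1] / V_col).max())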

  18. 12 CFR 509.103 - Civil money penalties.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Company Act Violation 32,500 12 U.S.C. 1467a(i)(3) Holding Company Act Violation 32,500 12 U.S.C. 1467a(r)(1) Late/Inaccurate Reports—1st Tier 2,200 12 U.S.C. 1467a(r)(2) Late/Inaccurate Reports—2nd Tier 32,500 12 U.S.C. 1467a(r)(3) Late/Inaccurate Reports—3rd Tier 1,375,000 12 U.S.C. 1817(j)(16)(A) Change...

  19. 12 CFR 509.103 - Civil money penalties.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Company Act Violation 32,500 12 U.S.C. 1467a(i)(3) Holding Company Act Violation 32,500 12 U.S.C. 1467a(r)(1) Late/Inaccurate Reports—1st Tier 2,200 12 U.S.C. 1467a(r)(2) Late/Inaccurate Reports—2nd Tier 32,500 12 U.S.C. 1467a(r)(3) Late/Inaccurate Reports—3rd Tier 1,375,000 12 U.S.C. 1817(j)(16)(A) Change...

  20. 12 CFR 509.103 - Civil money penalties.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Company Act Violation 32,500 12 U.S.C. 1467a(i)(3) Holding Company Act Violation 32,500 12 U.S.C. 1467a(r)(1) Late/Inaccurate Reports—1st Tier 2,200 12 U.S.C. 1467a(r)(2) Late/Inaccurate Reports—2nd Tier 32,500 12 U.S.C. 1467a(r)(3) Late/Inaccurate Reports—3rd Tier 1,375,000 12 U.S.C. 1817(j)(16)(A) Change...

  1. 12 CFR 509.103 - Civil money penalties.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Company Act Violation 32,500 12 U.S.C. 1467a(i)(3) Holding Company Act Violation 32,500 12 U.S.C. 1467a(r)(1) Late/Inaccurate Reports—1st Tier 2,200 12 U.S.C. 1467a(r)(2) Late/Inaccurate Reports—2nd Tier 32,500 12 U.S.C. 1467a(r)(3) Late/Inaccurate Reports—3rd Tier 1,375,000 12 U.S.C. 1817(j)(16)(A) Change...

  2. 42 CFR 423.509 - Termination of contract by CMS.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: (1) Termination of contract by CMS. (i) CMS notifies the Part D plan sponsor in writing at least 45... experiences financial difficulties so severe that its ability to make necessary health services available is...) CMS notifies the Part D plan sponsor in writing that its contract will be terminated on a date...

  3. Increasing Honest Responding on Cognitive Distortions in Child Molesters: The Bogus Pipeline Procedure

    ERIC Educational Resources Information Center

    Gannon, Theresa A.

    2006-01-01

    Professionals conclude that child molesters (CMs) hold offense-supportive beliefs (or cognitive distortions) from CMs' questionnaire responses. Because questionnaires are easily faked, we asked 32 CMs to complete a cognitive distortion scale under standard conditions (Time 1). A week later (Time 2), the same CMs completed the scale again. This…

  4. 42 CFR 417.801 - Agreements between CMS and health care prepayment plans.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Agreements between CMS and health care prepayment... CMS and health care prepayment plans. (a) General requirement. (1) In order to participate and receive... written agreement with CMS. (2) An existing group practice prepayment plan (GPPP) that continues as an...

  5. Response to Instruction in Preschool: Results of Two Randomized Studies with Children At Significant Risk of Reading Difficulties

    PubMed Central

    Lonigan, Christopher J.; Phillips, Beth M.

    2015-01-01

    Although response-to-instruction (RTI) approaches have received increased attention, few studies have evaluated the potential impacts of RTI approaches with preschool populations. This manuscript presents results of two studies examining impacts of Tier II instruction with preschool children. Participating children were identified as substantially delayed in the acquisition of early literacy skills despite exposure to high-quality, evidence-based classroom instruction. Study 1 included 93 children (M age = 58.2 months; SD = 3.62) attending 12 Title I preschools. Study 2 included 184 children (M age = 58.2 months; SD = 3.38) attending 19 Title I preschools. The majority of children were Black/African American, and about 60% were male. In both studies, eligible children were randomized to receive either 11 weeks of need-aligned, small-group instruction or just Tier I. Tier II instruction in Study 1 included variations of activities for code- and language-focused domains with prior evidence of efficacy in non-RTI contexts. Tier II instruction in Study 2 included instructional activities narrower in scope, more intensive, and delivered to smaller groups of children. Impacts of Tier II instruction in Study 1 were minimal; however, there were significant and moderate-to-large impacts in Study 2. These results identify effective Tier II instruction but indicate that the context in which children are identified may alter the nature of Tier II instruction that is required. Children identified as eligible for Tier II in an RTI framework likely require more intensive and more narrowly focused instruction than do children at general risk of later academic difficulties. PMID:26869730

  6. Managing the CMS Data and Monte Carlo Processing during LHC Run 2

    NASA Astrophysics Data System (ADS)

    Wissing, C.; CMS Collaboration

    2017-10-01

    In order to cope with the challenges expected during LHC Run 2, CMS introduced a number of enhancements into the main software packages and the tools used for centrally managed processing. In the presentation we highlight these improvements, which allow CMS to deal with the increased trigger output rate, the increased pileup and the evolution in computing technology. The overall system aims at high flexibility, improved operations and largely automated procedures. The tight coupling of workflow classes to types of sites has been drastically relaxed. Reliable and high-performing networking between most of the computing sites, together with the successful deployment of a data federation, allows the execution of workflows using remote data access. That required the development of a largely automated system to assign workflows and to handle the necessary pre-staging of data. Another step towards flexibility has been the introduction of one large global HTCondor pool for all types of processing workflows and analysis jobs. Besides classical Grid resources, opportunistic resources as well as Cloud resources have been integrated into that pool, giving reach to more than 200k CPU cores.
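
    For readers unfamiliar with HTCondor pools like the global one described above, here is a hedged sketch of submitting a single job through the htcondor Python bindings. The executable name, arguments and resource requests are illustrative placeholders, not anything from CMS's actual workflow-management configuration.

      # Minimal HTCondor submission via the Python bindings (pip install htcondor);
      # all job attributes below are invented for illustration.
      import htcondor

      submit_description = htcondor.Submit({
          "executable": "run_analysis.sh",        # hypothetical payload script
          "arguments": "--dataset /MC/example",   # hypothetical argument
          "output": "job.out",
          "error": "job.err",
          "log": "job.log",
          "request_cpus": "1",
          "request_memory": "2GB",
      })

      schedd = htcondor.Schedd()                  # connect to the local scheduler
      result = schedd.submit(submit_description, count=1)
      print("submitted cluster", result.cluster())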

  7. A multi-tiered architecture for content retrieval in mobile peer-to-peer networks.

    DOT National Transportation Integrated Search

    2012-01-01

    In this paper, we address content retrieval in Mobile Peer-to-Peer (P2P) Networks. We design a multi-tiered architecture for content : retrieval, where at Tier 1, we design a protocol for content similarity governed by a parameter that trades accu...

  8. Self-Regulated Strategy Development as a Tier 2 Writing Intervention

    ERIC Educational Resources Information Center

    Johnson, Evelyn S.; Hancock, Christine; Carter, Deborah R.; Pool, Juli L.

    2013-01-01

    In a response to intervention framework, the implication of limited writing instruction suggests an immediate need for Tier 2 interventions to support struggling writers while at the same time addressing instructional gaps in Tier 1. Many schools struggle with implementing writing intervention, partly because of the limited number of…

  9. Response to Intervention with Secondary School Students with Reading Difficulties

    ERIC Educational Resources Information Center

    Vaughn, Sharon; Fletcher, Jack M.

    2012-01-01

    The authors summarize evidence from a multiyear study with secondary students with reading difficulties on (a) the potential efficacy of primary-level (Tier 1), secondary-level (Tier 2), and tertiary-level (Tier 3) interventions in remediating reading difficulties with middle school students, (b) the likelihood of resolving reading disabilities…

  10. Integrating Puppet and Gitolite to provide a novel solution for scalable system management at the MPPMU Tier2 centre

    NASA Astrophysics Data System (ADS)

    Delle Fratte, C.; Kennedy, J. A.; Kluth, S.; Mazzaferro, L.

    2015-12-01

    In a grid computing infrastructure, tasks such as continuous upgrades, service installations and software deployments are part of an admin's daily work. In such an environment, tools that help with the management, provisioning and monitoring of the deployed systems and services have become crucial. As experiments such as the LHC increase in scale, the computing infrastructure also becomes larger and more complex. Moreover, today's admins increasingly work within teams that share responsibilities and tasks. Such a scaled-up situation requires tools that not only simplify the workload on administrators but also enable them to work seamlessly in teams. In this paper we present our experience of managing the Max Planck Institute Tier2 using Puppet and Gitolite in a cooperative way to support system administrators in their daily work. In addition to describing the Puppet-Gitolite system, best practices and customizations are also shown.

  11. BridgeUP: STEM. Creating Opportunities for Women through Tiered Mentorship

    NASA Astrophysics Data System (ADS)

    Secunda, Amy; Cornelis, Juliette; Ferreira, Denelis; Gomez, Anay; Khan, Ariba; Li, Anna; Soo, Audrey; Mac Low, Mordecai

    2018-01-01

    BridgeUP: STEM is an ambitious and exciting initiative responding to the extensive gender and opportunity gaps that exist in the STEM pipeline for women, girls, and under-resourced youth. BridgeUP: STEM has developed a distinct identity in the landscape of computer science education by embedding programming in the context of scientific research. One of the ways in which this is accomplished is through a tiered mentorship program. Five Helen Fellows are chosen from a pool of female postbaccalaureate applicants to be mentored by researchers at the American Museum of Natural History in a computational research project. The Helen Fellows then act as mentors to six high school women (Brown Scholars), guiding them through a computational project aligned with their own research. This year, three of the Helen Fellows, and by extension eighteen Brown Scholars, are performing computational astrophysics research. This poster presents one example of a tiered mentorship working on modeling the migration of stellar-mass black holes (BHs) in active galactic nucleus (AGN) disks. Drawing an analogy from the well-studied migration and formation of planets in protoplanetary disks to the newer field of migration and formation of binary BHs in AGN disks, the Helen Fellow is working with her mentors to adapt an N-body code, incorporating migration torques, from the protoplanetary-disk case to the AGN-disk case in order to model how binary BHs form. The aim is to better understand and make predictions for gravitational-wave observations from the Laser Interferometer Gravitational-Wave Observatory (LIGO). The Brown Scholars then run the Helen Fellow's code on a variety of initial stellar-mass BH population distributions that they generate using Python, and produce visualizations of the output to be used in a published paper. Over the course of the project, students develop a basic understanding of the physics related to their project and strengthen their practical computational skills.
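
    To illustrate what "incorporating migration torques into an N-body code" can look like in its simplest form, here is a hedged toy sketch: a leapfrog integrator for one body orbiting a central mass, with disk migration approximated as a drag acceleration a = -v/tau. This is a generic teaching example, not the Fellows' code, and every parameter value is an assumption.

      # Toy orbit integrator with a migration drag term (sketch; values assumed)
      import numpy as np

      G, M_smbh = 1.0, 1.0                   # code units
      tau_mig = 5e3                          # assumed migration timescale

      def accel(r, v):
          """Keplerian gravity from the central mass plus migration drag."""
          a_grav = -G * M_smbh * r / np.linalg.norm(r) ** 3
          a_mig = -v / tau_mig               # shrinks the orbit on ~tau_mig
          return a_grav + a_mig

      def leapfrog(r, v, dt, n_steps):
          a = accel(r, v)
          radii = []
          for _ in range(n_steps):
              v_half = v + 0.5 * dt * a      # kick
              r = r + dt * v_half            # drift
              a = accel(r, v_half)
              v = v_half + 0.5 * dt * a      # kick
              radii.append(np.linalg.norm(r))
          return np.array(radii)

      # circular orbit at radius 1, slowly migrating inward
      r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
      print("radius after 10k steps:", leapfrog(r0, v0, 1e-2, 10_000)[-1])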

  12. Burden of disease resulting from chronic mountain sickness among young Chinese male immigrants in Tibet

    PubMed Central

    2012-01-01

    Background In young Chinese men of the highland immigrant population, chronic mountain sickness (CMS) is a major public health problem. The aim of this study was to measure the disease burden of CMS in this population. Methods We used disability-adjusted life years (DALYs) to estimate the disease burden of CMS. Disability weights were derived using the person trade-off methodology. CMS diagnoses, symptom severity, and individual characteristics were obtained from surveys collected in Tibet in 2009 and 2010. The DALYs of individual patients and the DALYs/1,000 were calculated. Results Disability weights were obtained for 21 CMS health stages. The results of the analyses of the two surveys were consistent with each other. At different altitudes, the CMS rates ranged from 2.1-37.4%; the individual DALYs of patients ranged from 0.13-0.33, and the DALYs/1,000 ranged from 3.60-52.78. The age, highland service years, blood pressure, heart rate, smoking rate, and proportion of the sample working in engineering or construction were significantly higher in the CMS group than in the non-CMS group (p < 0.05). These variables were also positively associated with the individual DALYs (p < 0.05). Among the symptoms, headaches caused the largest proportion of DALYs. Conclusion The results show that CMS imposes a considerable burden on Chinese immigrants to Tibet. Immigrants with characteristics such as a higher residential altitude, more advanced age, longer highland service years, being a smoker, and working in engineering or construction were more likely to develop CMS and to increase the disease burden. Higher blood pressure and heart rate as a result of CMS were also positively associated with the disease burden. The authorities should pay attention to the highland disease burden and support the development and application of DALYs studies of CMS and other highland diseases. PMID:22672510
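
    The burden arithmetic behind the figures above follows the standard DALY decomposition, DALY = YLL + YLD, where for a chronic, largely non-fatal condition like CMS the years-lived-with-disability (YLD) term dominates: cases × disability weight × duration, summed over health stages and normalized per 1,000 population. The sketch below shows that calculation with invented placeholder stages, not values from the study.

      # DALY/1,000 from per-stage YLD terms (sketch; all inputs are placeholders)
      def yld(n_cases, disability_weight, duration_years):
          """Years Lived with Disability contributed by one health stage."""
          return n_cases * disability_weight * duration_years

      def dalys_per_1000(stages, population):
          total = sum(yld(n, w, d) for n, w, d in stages)
          return 1000.0 * total / population

      # three hypothetical CMS health stages: (cases, weight, mean duration)
      stages = [(40, 0.05, 2.0), (15, 0.20, 2.0), (5, 0.45, 2.0)]
      print(dalys_per_1000(stages, population=1000), "DALYs per 1,000")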

  13. Prospective Environmental Risk Assessment for Sediment-Bound Organic Chemicals: A Proposal for Tiered Effect Assessment.

    PubMed

    Diepens, Noël J; Koelmans, Albert A; Baveco, Hans; van den Brink, Paul J; van den Heuvel-Greve, Martine J; Brock, Theo C M

    A broadly accepted framework for prospective environmental risk assessment (ERA) of sediment-bound organic chemicals is currently lacking. Such a framework requires clear protection goals, evidence-based concepts that link exposure to effects, and a transparent tiered effect assessment. In this paper, we provide a tiered prospective ERA procedure for organic chemicals in sediment, with a focus on the applicable European regulations and the underlying data requirements. Using the ecosystem services concept, we derived specific protection goals for ecosystem service providing units: microorganisms, benthic algae, sediment-rooted macrophytes, benthic invertebrates and benthic vertebrates. Triggers for sediment toxicity testing are discussed. We recommend a tiered approach (Tier 0 through Tier 3). Tier 0 is a cost-effective screening based on chronic water-exposure toxicity data for pelagic species and equilibrium partitioning. Tier 1 is based on spiked sediment laboratory toxicity tests with standard benthic test species and standardised test methods. If comparable chronic toxicity data for both standard and additional benthic test species are available, the Species Sensitivity Distribution (SSD) approach is a more viable Tier-2 option than the geometric mean approach. This paper includes criteria for accepting results of sediment-spiked single species toxicity tests in prospective ERA, and for the application of the SSD approach. We propose micro/mesocosm experiments with spiked sediment, to study colonisation success by benthic organisms, as a Tier-3 option. Ecological effect models can be used to supplement the experimental tiers. A strategy for unifying information from various tiers by experimental work and exposure and effect modelling is provided.
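
    To make the Tier-2 SSD option concrete, here is a minimal Python sketch that fits a log-normal species sensitivity distribution to chronic toxicity data and derives the HC5 (the concentration expected to be hazardous to 5% of species). The NOEC values are synthetic placeholders, not data from the paper.

```python
import numpy as np
from scipy import stats

# Chronic NOECs (mg/kg sediment) for benthic test species -- synthetic
# values for illustration only.
noecs = np.array([0.8, 1.5, 2.2, 3.9, 5.1, 7.4, 12.0, 18.5])

# Fit a log-normal SSD: log10(NOEC) ~ Normal(mu, sigma).
log_noecs = np.log10(noecs)
mu, sigma = log_noecs.mean(), log_noecs.std(ddof=1)

# HC5 is the 5th percentile of the fitted distribution.
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
print(f"HC5 = {hc5:.2f} mg/kg")
```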

  14. Harmonisation of Global Land-Use Scenarios for the Period 1500-2100 for IPCC-AR5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurtt, George; Chini, Louise Parsons; Frolking, Steve

    2009-06-01

    In preparation for the fifth Intergovernmental Panel on Climate Change assessment (IPCC-AR5), the international community is developing new advanced computer models (CMs) to address the combined effects of human activities (e.g. land-use and fossil fuel emissions) on the carbon-climate system. In addition, four Representative Concentration Pathway (RCP) scenarios of the future (2005-2100) are being developed by four Integrated Assessment Modeling teams (IAMs) to be used as input to the CMs for future climate projections. The diversity of requirements and approaches among CMs and IAMs for tracking land-use changes (past, present, and future) presents major challenges for treating land-use comprehensively and consistently between these communities. As part of an international working group, we have been working to meet these challenges by developing a "harmonized" set of land-use change scenarios that smoothly connects gridded historical reconstructions of land-use with future projections, in a format required by CMs. This approach to harmonizing the treatment of land-use between two key modeling communities, CMs and IAMs, represents a major advance that will facilitate more consistent and fuller treatments of land-use/land-use change effects including both CO2 emissions and corresponding land-surface changes.
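
    The core of the harmonization idea, smoothly connecting a historical reconstruction to an IAM projection, can be illustrated with a toy Python sketch: the joined series keeps the historical values through the transition year and then carries the scenario's year-to-year changes forward from the historical end point. All numbers are synthetic, and the real product operates on global gridded maps with many land-use states.

```python
import numpy as np

# Toy "harmonization" of a cropland fraction at one grid cell.
years_hist = np.arange(2000, 2006)          # historical reconstruction
crop_hist = np.array([0.30, 0.31, 0.31, 0.32, 0.33, 0.34])

years_iam = np.arange(2005, 2011)           # IAM scenario (own baseline)
crop_iam = np.array([0.37, 0.38, 0.38, 0.39, 0.41, 0.42])

# Apply the scenario's annual *changes* from the historical end point,
# so the joined series is continuous at the 2005 transition.
deltas = np.diff(crop_iam)
crop_future = crop_hist[-1] + np.cumsum(deltas)

harmonized = np.concatenate([crop_hist, crop_future])
years = np.concatenate([years_hist, years_iam[1:]])
print(dict(zip(years.tolist(), np.round(harmonized, 3).tolist())))
```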

  15. Development and implementation of an Integrated Water Resources Management System (IWRMS)

    NASA Astrophysics Data System (ADS)

    Flügel, W.-A.; Busch, C.

    2011-04-01

    One of the innovative objectives in the EC project BRAHMATWINN was the development of a stakeholder oriented Integrated Water Resources Management System (IWRMS). The toolset integrates the findings of the project and presents them in a user friendly way for decision support in sustainable integrated water resources management (IWRM) in river basins. IWRMS is a framework which integrates different types of basin information and which supports the development of IWRM options for climate change mitigation. It is based on the River Basin Information System (RBIS) data models and delivers a graphical user interface for stakeholders. A special interface was developed for the integration of the enhanced DANUBIA model input and the NetSyMod model with its Mulino decision support system (mulino mDss) component. The web based IWRMS contains and combines different types of data and methods to provide river basin data and information for decision support. IWRMS is based on a three tier software framework which uses (i) HTML/JavaScript at the client tier, (ii) the PHP programming language to realize the application tier, and (iii) a PostgreSQL/PostGIS database tier to manage and store all data, except the DANUBIA modelling raw data, which are file based and registered in the database tier. All three tiers can reside on one or different computers and are adapted to the local hardware infrastructure. IWRMS as well as RBIS are based on Open Source Software (OSS) components, and flexible, time-saving access to the database is guaranteed by web-based interfaces for data visualization and retrieval. The IWRMS is accessible via the BRAHMATWINN homepage: http://www.brahmatwinn.uni-jena.de and a user manual for the RBIS is available for download as well.

  16. 77 FR 35437 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-13

    ... Customer Contracts traded tiers and the associated Rights Fee: Monthly Average national daily customer...,000 750 Over 100,000 1,500 The Exchange proposes to amend the tiers and fees as follows: Monthly...,001 to 5,000 200 5,001 to 15,000 375 15,001 to 100,000 750 Over 100,000 1,500 The 0-to-200 tier will...

  17. The Brief Classroom Interaction Observation-Revised: An Observation System to Inform and Increase Teacher Use of Universal Classroom Management Practices

    ERIC Educational Resources Information Center

    Reinke, Wendy M.; Stormont, Melissa; Herman, Keith C.; Wachsmuth, Sean; Newcomer, Lori

    2015-01-01

    Schools are increasingly using multi-tiered prevention models to address the academic and behavior needs of students. The foundation of these models is the implementation of universal, or Tier 1, practices designed to support the academic and behavioral needs of the vast majority of students. To support teachers in the use of effective Tier 1…

  18. CMS-dependent prognostic impact of KRAS and BRAFV600E mutations in primary colorectal cancer.

    PubMed

    Smeby, J; Sveen, A; Merok, M A; Danielsen, S A; Eilertsen, I A; Guren, M G; Dienstmann, R; Nesbakken, A; Lothe, R A

    2018-05-01

    The prognostic impact of KRAS and BRAFV600E mutations in primary colorectal cancer (CRC) varies with microsatellite instability (MSI) status. The gene expression-based consensus molecular subtypes (CMSs) of CRC define molecularly and clinically distinct subgroups, and represent a novel stratification framework in biomarker analysis. We investigated the prognostic value of these mutations within the CMS groups. In total, 1197 primary tumors from a Norwegian series of stage I-IV CRC were analyzed for MSI and mutation status in hotspots in KRAS (codons 12, 13 and 61) and BRAF (codon 600). A subset was analyzed for gene expression, and confident CMS classification was obtained for 317 samples. This cohort was expanded with clinical and molecular data, including CMS classification, from 514 patients in the publicly available dataset GSE39582. Gene expression signatures associated with KRAS and BRAFV600E mutations were used to evaluate the differential impact of mutations on gene expression among the CMS groups. BRAFV600E and KRAS mutations were both associated with inferior 5-year overall survival (OS) exclusively in MSS tumors (BRAFV600E mutation versus KRAS/BRAF wild-type: hazard ratio (HR) 2.85, P < 0.001; KRAS mutation versus KRAS/BRAF wild-type: HR 1.30, P = 0.013). BRAFV600E-mutated MSS tumors were strongly enriched and associated with metastatic disease in CMS1, leading to a negative prognostic impact in this subtype (OS: BRAFV600E mutation versus wild-type: HR 7.73, P = 0.001). In contrast, the poor prognosis of KRAS mutations was limited to MSS tumors with CMS2/CMS3 epithelial-like gene expression profiles (OS: KRAS mutation versus wild-type: HR 1.51, P = 0.011). The subtype-specific prognostic associations were substantiated by differential effects of BRAFV600E and KRAS mutations on gene expression signatures according to MSI status and CMS group. BRAFV600E mutations are enriched and associated with metastatic disease in CMS1 MSS tumors, leading to poor prognosis in this subtype. KRAS mutations are associated with adverse outcome in epithelial (CMS2/CMS3) MSS tumors.

  19. Is Computer-Aided Instruction an Effective Tier-One Intervention for Kindergarten Students at Risk for Reading Failure in an Applied Setting?

    ERIC Educational Resources Information Center

    Kreskey, Donna DeVaughn; Truscott, Stephen D.

    2016-01-01

    This study investigated the use of computer-aided instruction (CAI) as an intervention for kindergarten students at risk for reading failure. Headsprout Early Reading (Headsprout 2005), a type of CAI, provides internet-based, reading instruction incorporating the critical components of reading instruction cited by the National Reading Panel (NRP…

  20. Towards Wearable Cognitive Assistance

    DTIC Science & Technology

    2013-12-01

    Keywords: mobile computing, cloud... It presents a multi-tiered mobile system architecture that offers tight end-to-end latency bounds on compute-intensive cognitive assistance...to an entire neighborhood or an entire city is extremely expensive and time-consuming. Physical infrastructure in public spaces tends to evolve very

  1. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable for smaller sites as well. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
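
    Because the site exposes an EC2-compatible API, a user could in principle provision a virtual machine with any EC2 client. The sketch below uses Python's boto3 against a hypothetical endpoint; the URL, credentials, and image ID are placeholders, not the site's actual configuration.

```python
import boto3

# Hypothetical EC2-compatible endpoint exposed by the site's cloud stack;
# URL, credentials, and region are placeholders for illustration.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example-t2.infn.it:4567",
    aws_access_key_id="USER_KEY",
    aws_secret_access_key="USER_SECRET",
    region_name="default",
)

# Instantiate one worker node from a pre-built, contextualized image
# (the AMI id below is a placeholder).
resp = ec2.run_instances(
    ImageId="ami-00000001",
    MinCount=1,
    MaxCount=1,
    InstanceType="m1.small",
)
print(resp["Instances"][0]["InstanceId"])
```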

  2. CDX2 prognostic value in stage II/III resected colon cancer is related to CMS classification.

    PubMed

    Pilati, C; Taieb, J; Balogoun, R; Marisa, L; de Reyniès, A; Laurent-Puig, P

    2017-05-01

    Caudal-type homeobox transcription factor 2 (CDX2) is involved in colon cancer (CC) oncogenesis and has been proposed as a prognostic biomarker in patients with stage II or III CC. We analyzed CDX2 expression in a series of 469 CC typed for the new international consensus molecular subtype (CMS) classification, and we confirmed the results in a series of 90 CC. Here, we show that lack of CDX2 expression is only present in the mesenchymal subgroup (CMS4) and in MSI-immune tumors (CMS1), and not in CMS2 and CMS3 colon cancer. Although CDX2 expression was a globally independent prognostic factor, loss of CDX2 expression is not associated with a worse prognosis in the CMS1 group, but is highly prognostic in CMS4 patients for both relapse-free and overall survival. Similarly, lack of CDX2 expression was a bad prognostic factor in MSS patients, but not in MSI. Our work suggests that the combination of the consensual CMS classification and lack of CDX2 expression could be a useful marker to identify CMS4/CDX2-negative patients with a very poor prognosis. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  3. Integrating Model-Based Transmission Reduction into a multi-tier architecture

    NASA Astrophysics Data System (ADS)

    Straub, J.

    A multi-tier architecture consists of numerous craft as part of the system's orbital, aerial, and surface tiers. Each tier is able to collect progressively greater levels of information. Generally, craft from lower-level tiers are deployed to a target of interest based on its identification by a higher-level craft. While the architecture promotes significant amounts of science being performed in parallel, this may overwhelm the computational and transmission capabilities of higher-tier craft and links (particularly the deep space link back to Earth). Because of this, a new paradigm in in-situ data processing is required. Model-based transmission reduction (MBTR) is such a paradigm. Under MBTR, each node (whether a single spacecraft in orbit of the Earth or another planet, or a member of a multi-tier network) is given an a priori model of the phenomenon that it is assigned to study. It performs activities to validate this model. If the model is found to be erroneous, corrective changes are identified, assessed to ensure their significance for being passed on, and prioritized for transmission. A limited amount of verification data is sent with each MBTR assertion message to allow those that might rely on the data to validate the correct operation of the spacecraft and the MBTR engine onboard. Integrating MBTR with a multi-tier framework creates an MBTR hierarchy. Higher levels of the MBTR hierarchy task lower levels with the data collection and assessment tasks that are required to validate or correct elements of their model. A model of the expected conditions is sent to the lower level craft, which then engages its own MBTR engine to validate or correct the model. This may include tasking a yet lower level of craft to perform activities. When the MBTR engine at a given level receives all of its component data (whether directly collected or from delegation), it randomly chooses some to validate (by reprocessing the validation data), performs analysis and sends its own results (validation and/or changes of model elements and supporting validation data) to its upstream node. This constrains data transmission to only significant information (either because it includes a change or is validation data critical for assessing overall performance) and reduces the processing requirements at higher-level nodes (by not having to process insignificant data). This paper presents a framework for multi-tier MBTR and two demonstration mission concepts: an Earth sensornet and a mission to Mars. These multi-tier MBTR concepts are compared to a traditional mission approach.
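
    The transmission-filtering step at the heart of MBTR can be illustrated with a toy Python sketch: a node compares observations against its a priori model and sends an assertion (with a small verification sample) only for elements whose correction is significant. The threshold, field names, and model elements are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Assertion:
    element: str
    old_value: float
    new_value: float
    verification_sample: list

def mbtr_filter(model, observations, threshold=0.10, n_verify=3):
    """Yield assertion messages for model elements whose observed value
    differs from the modeled value by more than `threshold` (relative);
    insignificant data is neither transmitted nor reprocessed upstream."""
    for element, modeled in model.items():
        obs = observations[element]
        estimate = sum(obs) / len(obs)
        if abs(estimate - modeled) / abs(modeled) > threshold:
            yield Assertion(element, modeled, estimate, obs[:n_verify])

model = {"surface_albedo": 0.30, "dust_opacity": 0.50}
observations = {
    "surface_albedo": [0.31, 0.29, 0.30, 0.30],   # matches model: not sent
    "dust_opacity": [0.72, 0.68, 0.70, 0.71],     # significant: transmitted
}

for msg in mbtr_filter(model, observations):
    print(msg)
```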

  4. Congenital myasthenic syndrome with tubular aggregates caused by GFPT1 mutations.

    PubMed

    Guergueltcheva, Velina; Müller, Juliane S; Dusl, Marina; Senderek, Jan; Oldfors, Anders; Lindbergh, Christopher; Maxwell, Susan; Colomer, Jaume; Mallebrera, Cecilia Jimenez; Nascimento, Andres; Vilchez, Juan J; Muelas, Nuria; Kirschner, Janbernd; Nafissi, Shahriar; Kariminejad, Ariana; Nilipour, Yalda; Bozorgmehr, Bita; Najmabadi, Hossein; Rodolico, Carmelo; Sieb, Jörn P; Schlotter, Beate; Schoser, Benedikt; Herrmann, Ralf; Voit, Thomas; Steinlein, Ortrud K; Najafi, Abdolhamid; Urtizberea, Andoni; Soler, Doriette M; Muntoni, Francesco; Hanna, Michael G; Chaouch, Amina; Straub, Volker; Bushby, Kate; Palace, Jacqueline; Beeson, David; Abicht, Angela; Lochmüller, Hanns

    2012-05-01

    Congenital myasthenic syndrome (CMS) is a clinically and genetically heterogeneous group of inherited disorders of the neuromuscular junction. A difficult-to-diagnose subgroup of CMS is characterised by proximal muscle weakness and fatigue, while ocular and facial involvement is only minimal. DOK7 mutations have been identified as causing the disorder in about half of the cases. More recently, using classical positional cloning, we have identified mutations in a previously unrecognised CMS gene, GFPT1, in a series of DOK7-negative cases. However, a detailed description of the clinical features of GFPT1 patients has not been reported yet. Here we describe the clinical picture of 24 limb-girdle CMS (LG-CMS) patients, and the pathological findings of 18 of them, all carrying GFPT1 mutations. Additional patients with CMS, but without tubular aggregates, and patients with non-fatigable weakness with tubular aggregates were also screened. In most patients with GFPT1 mutations, onset of the disease occurs in the first decade of life with characteristic limb-girdle weakness and fatigue. A common feature was a beneficial and sustained response to acetylcholinesterase inhibitor treatment. Most of the patients who had a muscle biopsy showed tubular aggregates in myofibers. Analysis of endplate morphology in one of the patients revealed unspecific abnormalities. Our study delineates the phenotype of CMS associated with GFPT1 mutations and expands the understanding of neuromuscular junction disorders. As tubular aggregates in the context of a neuromuscular transmission defect appear to be highly indicative, we suggest calling this condition congenital myasthenic syndrome with tubular aggregates (CMS-TA).

  5. Morphometric analysis of stab wounds by MSCT and MRI after the instillation of contrast medium.

    PubMed

    Fais, Paolo; Cecchetto, Giovanni; Boscolo-Berto, Rafael; Toniolo, Matteo; Viel, Guido; Miotto, Diego; Montisci, Massimo; Tagliaro, Franco; Giraudo, Chiara

    2016-06-01

    To analyze the morphology and depth of stab wounds experimentally produced on human legs amputated for medical reasons using multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) after the instillation of a single contrast medium solution (CMS). For morphological analysis, MSCT and MRI scans were performed before and after the instillation of CMS into the wound cavity. Depth measurements were performed on the sagittal view only after CMS instillation. Subsequently, each wound was dissected using the layer-by-layer technique and the depth was measured by a ruler. One-way between-groups pairwise analysis of variance (ANOVA) and Bland-Altman plot analysis were used for comparing radiological and anatomical measurements. Unenhanced MSCT images did not identify the wound channels, whereas unenhanced MRI evidenced the wound cavity in 50 % of cases. After the instillation of CMS, both MSCT and MRI depicted the wound channel in all the investigated stabbings, although the morphology of the cavity was irregular and did not resemble the shape of the blade. The radiological measurements of the wounds' depth, after the application of CMS, exhibited a high level of agreement (about 95 % at Bland-Altman plot analysis) with the anatomical measurements at dissection. A similar systematic underestimation, however, has been evidenced for MSCT (average 11.4 %; 95 % CI 7-17) and MRI (average 9.6 %; 95 % CI 6-13) data after the instillation of CMS with respect to wound dissection measurements. MSCT and MRI after the instillation of CMS can be used for depicting the morphometric features of stab wounds, although depth measurements are affected by a slight systematic underestimation compared to layer-by-layer dissection.
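
    For reference, the Bland-Altman agreement statistics quoted above are computed as the mean difference (bias) and the 95% limits of agreement (bias ± 1.96 SD of the paired differences). The Python sketch below shows the calculation on synthetic paired depth measurements, not the study's data.

```python
import numpy as np

# Synthetic paired depth measurements (mm): imaging vs. dissection.
radiological = np.array([22.0, 31.5, 18.2, 40.1, 27.3, 35.0])
dissection = np.array([24.5, 34.0, 20.0, 45.2, 30.1, 39.5])

diff = radiological - dissection
bias = diff.mean()                       # systematic under/overestimation
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
print(f"bias = {bias:.2f} mm, limits of agreement = "
      f"({loa[0]:.2f}, {loa[1]:.2f}) mm")
```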

  6. Comparative meta-analysis and experimental kinetic investigation of column and batch bottle microcosm treatability studies informing in situ groundwater remedial design.

    PubMed

    Driver, Erin M; Roberts, Jeff; Dollar, Peter; Charles, Maurissa; Hurst, Paul; Halden, Rolf U

    2017-02-05

    A systematic comparison was performed between batch bottle and continuous-flow column microcosms (BMs and CMs, respectively) commonly used for in situ groundwater remedial design. A review of recent literature (2000-2014) showed a preference for reporting batch kinetics, even when corresponding column data were available. Additionally, CMs produced higher observed rate constants, exceeding those of BMs by a factor of 6.1±1.1 (standard error). In a subsequent laboratory investigation, 12 equivalent microcosm pairs were constructed from fractured bedrock and perchloroethylene (PCE) impacted groundwater. First-order PCE transformation kinetics of CMs were 8.0±4.8 times faster than BMs (rates: 1.23±0.87 vs. 0.16±0.05 d−1, respectively). Additionally, CMs transformed 16.1±8.0 times more mass than BMs owing to continuous-feed operation. CMs are concluded to yield more reliable kinetic estimates because of much higher data density stemming from long-term, steady-state conditions. Since information from BMs and CMs is valuable and complementary, treatability studies should report kinetic data from both when available. This first systematic investigation of BMs and CMs highlights the need for a more unified framework for data use and reporting in treatability studies informing decision-making for field-scale groundwater remediation. Copyright © 2016 Elsevier B.V. All rights reserved.
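
    The first-order rate constants compared above follow the model C(t) = C0·exp(-kt), so k can be recovered from the slope of ln(C) versus time. The Python sketch below does this on synthetic concentrations, chosen only to give a rate of roughly the magnitude reported for the CMs.

```python
import numpy as np

# Synthetic PCE concentrations (ug/L) over time (days).
t = np.array([0.0, 1.0, 2.0, 4.0, 7.0])
conc = np.array([100.0, 29.0, 8.5, 0.8, 0.02])

# First-order decay: ln(C) = ln(C0) - k*t, so k = -slope.
slope, intercept = np.polyfit(t, np.log(conc), 1)
k = -slope
print(f"k = {k:.2f} per day, half-life = {np.log(2) / k:.2f} days")
```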

  7. Cerebral cavernous malformations: natural history and clinical management.

    PubMed

    Gross, Bradley A; Du, Rose

    2015-01-01

    Cavernous malformations (CMs) are angiographically-occult clusters of dilated sinusoidal channels that may present clinically with seizures, focal neurological deficits and/or hemorrhage. Across natural history studies, the annual hemorrhage rate ranged from 1.6% to 3.1% per patient-year, decreasing to 0.08-0.2% per patient-year for incidental CMs and to 0.3-0.6% for the collective group of unruptured CMs. Prior hemorrhage is a significant risk factor for subsequent CM hemorrhage. Hemorrhage clustering, particularly within the first 2 years, is an established phenomenon that may confound the results of natural history studies evaluating the rate of rehemorrhage. Indeed, rehemorrhage rates for hemorrhagic CMs range from 4.5% to 22.9% in the literature. Surgical resection is the gold standard treatment for surgically-accessible, symptomatic CMs. Incidental CMs or minimally symptomatic, surgically inaccessible eloquent lesions may be considered for observation. Stereotactic radiosurgery is a controversial treatment approach of consideration only for cases of highly aggressive, surgically inaccessible CMs.

  8. 42 CFR 423.509 - Termination of contract by CMS.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: (1) Termination of contract by CMS. (i) CMS notifies the Part D plan in writing 90 days before the... difficulties so severe that its ability to make necessary health services available is impaired to the point of... writing that its contract will be terminated on a date specified by CMS. If a termination is effective...

  9. 42 CFR 423.509 - Termination of contract by CMS.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: (1) Termination of contract by CMS. (i) CMS notifies the Part D plan in writing 90 days before the... difficulties so severe that its ability to make necessary health services available is impaired to the point of... writing that its contract will be terminated on a date specified by CMS. If a termination is effective...

  10. 42 CFR 433.320 - Procedures for refunds to CMS.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Procedures for refunds to CMS. 433.320 Section 433... Overpayments to Providers § 433.320 Procedures for refunds to CMS. (a) Basic requirements. (1) The agency must refund the Federal share of overpayments that are subject to recovery to CMS through a credit on its...

  11. 42 CFR 447.256 - Procedures for CMS action on assurances and State plan amendments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Procedures for CMS action on assurances and State... for Inpatient Hospital and Long-Term Care Facility Services Payment Rates § 447.256 Procedures for CMS action on assurances and State plan amendments. (a) Criteria for approval. (1) CMS approval action on...

  12. 42 CFR 438.730 - Sanction by CMS: Special rules for MCOs

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Sanction by CMS: Special rules for MCOs 438.730... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE Sanctions § 438.730 Sanction by CMS: Special rules for MCOs (a) Basis for sanction. (1) A State agency may recommend that CMS impose the denial of...

  13. 40 CFR Appendix A to Subpart Ll of... - Applicability of General Provisions

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... not require CMS or CMS performance evaluation. 63.8(e) Performance evaluation for CMS No 63.9(b)(1)-(5... evaluations No Subpart LL does not require performance evaluation for CMS. 63.11(a)-(b) Control device... Appendix A to Subpart LL of Part 63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  14. 29 CFR Appendix C to Part 510 - Government Corporations Eligible for Minimum Wage Phase-In

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Organizations for which no data were provided are subject to Tier 1 treatment. Tier 1: Automobile Accidents Compensation Administration; Tier 1: Cardiovascular Center Corporation of Puerto Rico and the Caribbean...

  15. Examination of the Mechanisms Underlying Effectiveness of the Turtle Technique

    ERIC Educational Resources Information Center

    Drogan, Robin R.; Kern, Lee

    2014-01-01

    A significant number of young children exhibit challenging behaviors in preschool settings. A tiered framework of intervention has documented effectiveness in elementary and secondary schools, and recently has been extended to preschool settings. Although there is emerging research to support the effectiveness of Tier 1 (universal) and Tier 3…

  16. 33 CFR 154.1135 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Operating in Prince William Sound, Alaska § 154.1135 Response plan development and evaluation criteria. The following response times must be used in determining the on scene arrival time in Prince William Sound for the response resources required by § 154.1045: Tier 1 (hrs.) Tier 2 (hrs.) Tier 3 (hrs.) Prince...

  17. 20 CFR 209.14 - Report of separation allowances subject to tier II taxation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Report of separation allowances subject to tier II taxation. 209.14 Section 209.14 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER... separation allowances subject to tier II taxation. For any employee who is paid a separation payment, the...

  18. To What Interventions Are Students Responding?

    ERIC Educational Resources Information Center

    Lipson, Marjorie Y.; Wixson, Karen K.

    2012-01-01

    Intervention is a central tenet of the various (multitiered) approaches used to implement Response to Intervention (RTI). It appears in Tier 1 core instruction in the form of differentiation, in Tier 2 in the form of supplemental small groups, and in Tier 3 and 4 instruction in the form of more intensive, often individualized support from…

  19. Using Brief Experimental Analysis to Intensify Tier 3 Reading Interventions

    ERIC Educational Resources Information Center

    Coolong-Chaffin, Melissa; Wagner, Dana

    2015-01-01

    As implementation of multi-tiered systems of support becomes common practice across the nation, practitioners continue to need strategies for intensifying interventions and supports for the subset of students who fail to make adequate progress despite strong programs at Tiers 1 and 2. Experts recommend making several changes to the structure and…

  20. Examining Proportional Representation of Ethnic Groups within the SWPBIS Model

    ERIC Educational Resources Information Center

    Jewell, Kelly

    2012-01-01

    This quantitative study seeks to analyze whether the School-wide Positive Behavior Intervention and Support (SWPBIS) model reduces the likelihood that minority students will receive more individualized supports due to behavior problems. In theory, the SWPBIS model should reflect a 3-tier system with tier 1 representing approximately 80%, tier 2 representing…

  1. Positive Behavior Supports: Tier 2 Interventions in Middle Schools

    ERIC Educational Resources Information Center

    Hoyle, Carol G.; Marshall, Kathleen J.; Yell, Mitchell L.

    2011-01-01

    School personnel are using Schoolwide Positive Behavior Supports in public schools throughout the United States. A number of studies have evaluated the universal level, or Tier 1, of Schoolwide Positive Behavior Supports. In this study, the authors describe and analyze the interventions offered as options for use for Tier 2 in middle schools…

  2. 20 CFR 209.14 - Report of separation allowances subject to tier II taxation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Report of separation allowances subject to tier II taxation. 209.14 Section 209.14 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER... separation allowances subject to tier II taxation. For any employee who is paid a separation payment, the...

  3. Differences in Contractile Function of Myofibrils within Human Embryonic Stem Cell-Derived Cardiomyocytes vs. Adult Ventricular Myofibrils Are Related to Distinct Sarcomeric Protein Isoforms

    PubMed Central

    Iorga, Bogdan; Schwanke, Kristin; Weber, Natalie; Wendland, Meike; Greten, Stephan; Piep, Birgit; dos Remedios, Cristobal G.; Martin, Ulrich; Zweigerdt, Robert; Kraft, Theresia; Brenner, Bernhard

    2018-01-01

    Characterizing the contractile function of human pluripotent stem cell-derived cardiomyocytes (hPSC-CMs) is key for advancing their utility for cellular disease models, promoting cell based heart repair, or developing novel pharmacological interventions targeting cardiac diseases. The aim of the present study was to understand whether steady-state and kinetic force parameters of β-myosin heavy chain (βMyHC) isoform-expressing myofibrils within human embryonic stem cell-derived cardiomyocytes (hESC-CMs) differentiated in vitro resemble those of human ventricular myofibrils (hvMFs) isolated from adult donor hearts. Contractile parameters were determined using the same micromechanical method and experimental conditions for both types of myofibrils. We identified isoforms and phosphorylation of main sarcomeric proteins involved in the modulation of force generation of both chemically demembranated hESC-CMs (d-hESC-CMs) and hvMFs. Our results indicate that at saturating Ca2+ concentration, both human-derived contractile systems developed forces with similar rate constants (0.66 and 0.68 s−1), reaching maximum isometric force that was significantly smaller for d-hESC-CMs (42 kPa) than for hvMFs (94 kPa). At submaximal Ca2+-activation, where intact cardiomyocytes normally operate, contractile parameters of d-hESC-CMs and hvMFs exhibited differences. Ca2+ sensitivity of force was higher for d-hESC-CMs (pCa50 = 6.04) than for hvMFs (pCa50 = 5.80). At half-maximum activation, the rate constant for force redevelopment was significantly faster for d-hESC-CMs (0.51 s−1) than for hvMFs (0.28 s−1). During myofibril relaxation, kinetics of the slow force decay phase were significantly faster for d-hESC-CMs (0.26 s−1) than for hvMFs (0.21 s−1), while kinetics of the fast force decay were similar and ~20x faster. Protein analysis revealed that hESC-CMs had essentially no cardiac troponin-I, and partially non-ventricular isoforms of some other sarcomeric proteins, explaining the functional discrepancies. The sarcomeric protein isoform pattern of hESC-CMs had features of human cardiomyocytes at an early developmental stage. The study indicates that morphological and ultrastructural maturation of βMyHC isoform-expressing hESC-CMs is not necessarily accompanied by ventricular-like expression of all sarcomeric proteins. Our data suggest that hPSC-CMs could provide useful tools for investigating inherited cardiac diseases affecting contractile function during early developmental stages. PMID:29403388
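
    As background on the pCa50 values quoted above: force-pCa data are commonly fitted with a Hill-type relation, F = Fmax / (1 + 10^(h·(pCa − pCa50))), so that F = Fmax/2 at pCa = pCa50. The Python sketch below fits synthetic force-pCa points; it illustrates the standard fitting procedure, not the study's exact analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(pca, fmax, pca50, h):
    """Hill-type force-pCa relation; half-maximal force at pCa = pCa50."""
    return fmax / (1.0 + 10 ** (h * (pca - pca50)))

# Synthetic force-pCa data (force in kPa).
pca = np.array([7.0, 6.5, 6.2, 6.0, 5.8, 5.5, 5.0, 4.5])
force = np.array([1.0, 5.0, 14.0, 22.0, 32.0, 39.0, 42.0, 42.0])

(fmax, pca50, h), _ = curve_fit(hill, pca, force, p0=[42.0, 6.0, 2.0])
print(f"Fmax = {fmax:.1f} kPa, pCa50 = {pca50:.2f}, Hill coefficient = {h:.1f}")
```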

  4. The immature electrophysiological phenotype of iPSC-CMs still hampers in vitro drug screening: Special focus on IK1.

    PubMed

    Goversen, Birgit; van der Heyden, Marcel A G; van Veen, Toon A B; de Boer, Teun P

    2018-03-01

    Preclinical drug screens are not based on human physiology, possibly complicating predictions on cardiotoxicity. Drug screening can be humanised with in vitro assays using human induced pluripotent stem cell-derived cardiomyocytes (iPSC-CMs). However, in contrast to adult ventricular cardiomyocytes, iPSC-CMs beat spontaneously due to the presence of the pacemaking current If and reduced densities of the hyperpolarising current IK1. In adult cardiomyocytes, IK1 finalises repolarisation by stabilising the resting membrane potential while also maintaining excitability. The reduced IK1 density contributes to proarrhythmic traits in iPSC-CMs, which leads to an electrophysiological phenotype that might bias drug responses. The proarrhythmic traits can be suppressed by increasing IK1 in a balanced manner. We systematically evaluated all studies that report strategies to mature iPSC-CMs and found that only a few studies report IK1 current densities. Furthermore, these studies did not succeed in establishing sufficient IK1 levels, as they either added too little or too much IK1. We conclude that reduced densities of IK1 remain a major flaw in iPSC-CMs, which hampers their use for in vitro drug screening. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Strain-Specific V3 and CD4 Binding Site Autologous HIV-1 Neutralizing Antibodies Select Neutralization-Resistant Viruses.

    PubMed

    Moody, M Anthony; Gao, Feng; Gurley, Thaddeus C; Amos, Joshua D; Kumar, Amit; Hora, Bhavna; Marshall, Dawn J; Whitesides, John F; Xia, Shi-Mao; Parks, Robert; Lloyd, Krissey E; Hwang, Kwan-Ki; Lu, Xiaozhi; Bonsignori, Mattia; Finzi, Andrés; Vandergrift, Nathan A; Alam, S Munir; Ferrari, Guido; Shen, Xiaoying; Tomaras, Georgia D; Kamanga, Gift; Cohen, Myron S; Sam, Noel E; Kapiga, Saidi; Gray, Elin S; Tumba, Nancy L; Morris, Lynn; Zolla-Pazner, Susan; Gorny, Miroslaw K; Mascola, John R; Hahn, Beatrice H; Shaw, George M; Sodroski, Joseph G; Liao, Hua-Xin; Montefiori, David C; Hraber, Peter T; Korber, Bette T; Haynes, Barton F

    2015-09-09

    The third variable (V3) loop and the CD4 binding site (CD4bs) of the HIV-1 envelope are frequently targeted by neutralizing antibodies (nAbs) in infected individuals. In chronic infection, HIV-1 escape mutants repopulate the plasma, and V3 and CD4bs nAbs emerge that can neutralize heterologous tier 1 easy-to-neutralize but not tier 2 difficult-to-neutralize HIV-1 isolates. However, neutralization sensitivity of autologous plasma viruses to this type of nAb response has not been studied. We describe the development and evolution in vivo of antibodies distinguished by their target specificity for V3 and CD4bs epitopes on autologous tier 2 viruses but not on heterologous tier 2 viruses. A surprisingly high fraction of autologous circulating viruses was sensitive to these antibodies. These findings demonstrate a role for V3 and CD4bs antibodies in constraining the native envelope trimer in vivo to a neutralization-resistant phenotype, explaining why HIV-1 transmission generally occurs by tier 2 neutralization-resistant viruses. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case

    NASA Astrophysics Data System (ADS)

    Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.

    2017-10-01

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the "elastic" provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper describes the Fermilab HEPCloud Facility and the challenges overcome for the CMS and NOvA communities.
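
    As a flavor of the cost-containment side, the public AWS API lets a client inspect current Spot market prices before bidding. The Python/boto3 sketch below only illustrates that query; it is not the HEPCloud Decision Engine, and the region, instance type, and account credentials are assumptions.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Assumes AWS credentials are configured; region and instance type are
# illustrative choices, not HEPCloud's configuration.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Recent Spot prices for one instance type, one hour back.
resp = ec2.describe_spot_price_history(
    InstanceTypes=["c4.xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    MaxResults=10,
)
for entry in resp["SpotPriceHistory"]:
    print(entry["AvailabilityZone"], entry["SpotPrice"])
```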

  7. Assessing the Nutritional Quality of Diets of Canadian Adults Using the 2014 Health Canada Surveillance Tool Tier System.

    PubMed

    Jessri, Mahsa; Nishi, Stephanie K; L'Abbé, Mary R

    2015-12-12

    The 2014 Health Canada Surveillance Tool (HCST) was developed to assess adherence of dietary intakes with Canada's Food Guide. The HCST classifies foods into one of four Tiers based on thresholds for sodium, total fat, saturated fat and sugar, with Tier 1 representing the healthiest and Tier 4 the unhealthiest foods. This study presents the first application of the HCST to assess (a) dietary patterns of Canadians; and (b) the applicability of this tool as a measure of diet quality among 19,912 adult participants of the Canadian Community Health Survey 2.2. Findings indicated that even though most processed meats and potatoes were Tier 4, the majority of reported foods were categorized as Tier 2 or 3 due to the lenient criteria used in the HCST. Moving from the 1st to the 4th quartile of Tier 4 and "other" foods/beverages, there was a significant trend towards increased calories (1876 kcal vs. 2290 kcal) and "harmful" nutrients (e.g., sodium) as well as decreased "beneficial" nutrients. Compliance with the HCST was not associated with lower body mass index. Future nutrient profiling systems need to incorporate both "positive" and "negative" nutrients, an overall score and a wider range of nutrient thresholds to better capture food product differences.
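
    The tier assignment itself is a simple threshold walk: a food lands in the healthiest tier whose sodium, total fat, saturated fat, and sugar limits it satisfies, otherwise Tier 4. The Python sketch below shows the shape of such a classifier; the numeric cut-offs are invented placeholders, not Health Canada's actual criteria.

```python
# Hypothetical per-reference-amount limits: (sodium mg, fat g, sat fat g, sugar g).
TIER_LIMITS = [
    (1, (140, 3.0, 1.0, 5.0)),
    (2, (240, 5.0, 2.0, 10.0)),
    (3, (360, 10.0, 4.0, 15.0)),
]

def assign_tier(sodium, fat, sat_fat, sugar):
    """Return the first (healthiest) tier whose limits are all satisfied."""
    for tier, (s_max, f_max, sf_max, su_max) in TIER_LIMITS:
        if sodium <= s_max and fat <= f_max and sat_fat <= sf_max and sugar <= su_max:
            return tier
    return 4  # exceeds all thresholds: least healthy tier

print(assign_tier(sodium=120, fat=2.5, sat_fat=0.5, sugar=4.0))    # -> 1
print(assign_tier(sodium=500, fat=12.0, sat_fat=6.0, sugar=20.0))  # -> 4
```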

  8. Dynamics of low velocity collisions of ice particles coated with frost

    NASA Technical Reports Server (NTRS)

    Bridges, F.; Lin, D.; Boone, L.; Darknell, D.

    1991-01-01

    We continued our investigations of low velocity collisions of ice particles for velocities in the range 10^-3 to 2 cm/s. The work focused on two effects: (1) the sticking forces for ice particles coated with CO2 frost, and (2) the completion of a 2-D pendulum system for glancing collisions. New computer software was also developed to control and monitor the position of the 2-D pendulum.

  9. Self-assembly behaviour of colistin and its prodrug colistin methanesulfonate: implications for solution stability and solubilization

    PubMed Central

    Wallace, Stephanie J.; Li, Jian; Nation, Roger L.; Prankerd, Richard J.; Velkov, Tony; Boyd, Ben J.

    2010-01-01

    Colistin is an amphiphilic antibiotic that has re-emerged into clinical use due to the increasing prevalence of difficult-to-treat Gram-negative infections. The existence of self-assembling colloids in solutions of colistin and its derivative prodrug, colistin methanesulfonate (CMS) was investigated. Colistin and CMS reduced the air-water interfacial tension, and dynamic light scattering (DLS) studies showed the existence of 2.07 ± 0.3 nm aggregates above 1.5 mM for colistin, and of 1.98 ± 0.36 nm aggregates for CMS above 3.5 mM (mean ± SD). Above the respective critical micelle concentrations (CMC) the solubility of azithromycin, a hydrophobic antibiotic, increased approximately linearly with increasing surfactant concentration (5:1 mol ratio colistin:azithromycin), suggestive of hydrophobic domains within the micellar cores. Rapid conversion of CMS to colistin occurred below the CMC (60 % over 48 hr), while conversion above the CMC was less than 1 %. The formation of colistin and CMS micelles demonstrated in this study is the proposed mechanism for solubilization of azithromycin and the concentration-dependent stability of CMS. PMID:20302384

  10. Effect of E85 on Tailpipe Emissions from Light-Duty Vehicles.

    PubMed

    Yanowitz, Janet; McCormick, Robert L

    2009-02-01

    E85, which consists of nominally 85% fuel grade ethanol and 15% gasoline, must be used in flexible-fuel (or "flex-fuel") vehicles (FFVs) that can operate on fuel with an ethanol content of 0-85%. Published studies include measurements of the effect of E85 on tailpipe emissions for Tier 1 and older vehicles. Car manufacturers have also supplied a large body of FFV certification data to the U.S. Environmental Protection Agency, primarily on Tier 2 vehicles. These studies and certification data reveal wide variability in the effects of E85 on emissions from different vehicles. Comparing Tier 1 FFVs running on E85 to similar non-FFVs running on gasoline showed, on average, significant reductions in emissions of oxides of nitrogen (NOx; 54%), non-methane hydrocarbons (NMHCs; 27%), and carbon monoxide (CO; 18%) for E85. Comparing Tier 2 FFVs running on E85 and comparable non-FFVs running on gasoline shows, for E85 on average, a significant reduction in emissions of CO (20%), and no significant effect on emissions of non-methane organic gases (NMOGs). NOx emissions from Tier 2 FFVs averaged approximately 28% less than comparable non-FFVs. However, perhaps because of the wide range of Tier 2 NOx standards, the absolute difference in NOx emissions between Tier 2 FFVs and non-FFVs is not significant (P = 0.28). It is interesting that Tier 2 FFVs operating on gasoline produced approximately 13% less NMOGs than non-FFVs operating on gasoline. The data for Tier 1 vehicles show that E85 will cause significant reductions in emissions of benzene and butadiene, and significant increases in emissions of formaldehyde and acetaldehyde, in comparison to emissions from gasoline in both FFVs and non-FFVs. The compound that makes up the largest proportion of organic emissions from E85-fueled FFVs is ethanol.

  11. The 2016 ACCP Pharmacotherapy Didactic Curriculum Toolkit.

    PubMed

    Schwinghammer, Terry L; Crannage, Andrew J; Boyce, Eric G; Bradley, Bridget; Christensen, Alyssa; Dunnenberger, Henry M; Fravel, Michelle; Gurgle, Holly; Hammond, Drayton A; Kwon, Jennifer; Slain, Douglas; Wargo, Kurt A

    2016-11-01

    The 2016 American College of Clinical Pharmacy (ACCP) Educational Affairs Committee was charged with updating and contemporizing ACCP's 2009 Pharmacotherapy Didactic Curriculum Toolkit. The toolkit has been designed to guide schools and colleges of pharmacy in developing, maintaining, and modifying their curricula. The 2016 committee reviewed the recent medical literature and other documents to identify disease states that are responsive to drug therapy. Diseases and content topics were organized by organ system, when feasible, and grouped into tiers as defined by practice competency. Tier 1 topics should be taught in a manner that prepares all students to provide collaborative, patient-centered care upon graduation and licensure. Tier 2 topics are generally taught in the professional curriculum, but students may require additional knowledge or skills after graduation (e.g., residency training) to achieve competency in providing direct patient care. Tier 3 topics may not be taught in the professional curriculum; thus, graduates will be required to obtain the necessary knowledge and skills on their own to provide direct patient care, if required in their practice. The 2016 toolkit contains 276 diseases and content topics, of which 87 (32%) are categorized as tier 1, 133 (48%) as tier 2, and 56 (20%) as tier 3. The large number of tier 1 topics will require schools and colleges to use creative pedagogical strategies to achieve the necessary practice competencies. Almost half of the topics (48%) are tier 2, highlighting the importance of postgraduate residency training or equivalent practice experience to competently care for patients with these disorders. The Pharmacotherapy Didactic Curriculum Toolkit will continue to be updated to provide guidance to faculty at schools and colleges of pharmacy as these academic pharmacy institutions regularly evaluate and modify their curricula to keep abreast of scientific advances and associated practice changes. Access the current Pharmacotherapy Didactic Curriculum Toolkit at http://www.accp.com/docs/positions/misc/Toolkit_final.pdf. © 2016 Pharmacotherapy Publications, Inc.

  12. 76 FR 60022 - Endocrine Disruptor Screening Program; Weight-of-Evidence Guidance Document; Notice of Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ... evaluate results from the battery of Tier 1 screening assays along with other scientific and technical... guidance document that requested additional detail and clarification of the following: 1. Tier 1 battery of...

  13. Human breast adipose tissue: characterization of factors that change during tumor progression in human breast cancer.

    PubMed

    Fletcher, Sabrina Johanna; Sacca, Paula Alejandra; Pistone-Creydt, Mercedes; Coló, Federico Andrés; Serra, María Florencia; Santino, Flavia Eliana; Sasso, Corina Verónica; Lopez-Fontana, Constanza Matilde; Carón, Rubén Walter; Calvo, Juan Carlos; Pistone-Creydt, Virginia

    2017-02-07

    The adipose microenvironment is involved in signaling pathways that influence breast cancer. We aimed to characterize factors that are modified: 1) in tumor and non-tumor human breast epithelial cell lines when incubated with conditioned media (CMs) from human breast cancer adipose tissue explants (hATT) or normal breast adipose tissue explants (hATN); 2) in hATN-CMs vs. hATT-CMs; 3) in tumor-associated adipocytes vs. non-tumor-associated adipocytes. We used hATN- or hATT-CMs on tumor and non-tumor breast cancer cell lines. We evaluated changes in versican, CD44, ADAMTS1 and Adipo R1 expression in the cell lines or in the different CMs. In addition we evaluated changes in the morphology and expression of these factors in slices of the different adipose tissues. The statistical significance between different experimental conditions was evaluated by one-way ANOVA. Tukey's post-hoc tests were performed within each individual treatment. hATT-CMs increase versican, CD44, ADAMTS1 and Adipo R1 expression in breast cancer epithelial cells. Furthermore, hATT-CMs present higher levels of versican expression compared to hATN-CMs. In addition, we observed a loss of effect in cellular migration when we pre-incubated hATT-CMs with chondroitinase ABC, which cleaves the GAG chains bound to the versican core protein, thus losing the ability to bind to CD44. Adipocytes associated with the invasive front are reduced in size compared to adipocytes that are farther away. Also, hATT adipocytes express significantly higher amounts of versican, CD44 and Adipo R1, and significantly lower amounts of adiponectin and perilipin, unlike hATN adipocytes. We conclude that hATT secrete a different set of proteins compared to hATN. Furthermore, versican, a proteoglycan that is overexpressed in hATT-CMs compared to hATN-CMs, might be involved in the tumorigenic behavior observed in both cell lines employed. In addition, we may conclude that adipocytes from the tumor microenvironment show a less differentiated state than adipocytes from the normal microenvironment. This would indicate a loss of normal functions in mature adipocytes (such as energy storage), in support of others that might favor tumor growth.

  14. Querying and Computing with BioCyc Databases

    PubMed Central

    Krummenacker, Markus; Paley, Suzanne; Mueller, Lukas; Yan, Thomas; Karp, Peter D.

    2006-01-01

    Summary We describe multiple methods for accessing and querying the complex and integrated cellular data in the BioCyc family of databases: access through multiple file formats, access through Application Program Interfaces (APIs) for LISP, Perl and Java, and SQL access through the BioWarehouse relational database. Availability The Pathway Tools software and 20 BioCyc DBs in Tiers 1 and 2 are freely available to academic users; fees apply to some types of commercial use. For download instructions see http://BioCyc.org/download.shtml PMID:15961440
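
    As a schematic of the SQL access route, the Python sketch below runs a pathway query against a local SQLite stand-in. The table and column names are hypothetical, chosen only to show the pattern; the real BioWarehouse schema and server differ.

```python
import sqlite3

# In-memory stand-in for a relational warehouse; the schema below is
# hypothetical and only illustrates the query pattern.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE pathway (id TEXT, name TEXT, organism TEXT)")
cur.execute("INSERT INTO pathway VALUES ('PWY-101', 'glycolysis', 'E. coli')")

# Select pathways for one organism, as one might against a warehouse DB.
for row in cur.execute(
    "SELECT id, name FROM pathway WHERE organism = ?", ("E. coli",)
):
    print(row)
conn.close()
```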

  15. Simulation of strong ground motion parameters of the 1 June 2013 Gulf of Suez earthquake, Egypt

    NASA Astrophysics Data System (ADS)

    Toni, Mostafa

    2017-06-01

    This article aims to simulate the ground motion parameters of the moderate magnitude (ML 5.1) June 1, 2013 Gulf of Suez earthquake, which represents the largest instrumental earthquake recorded to date in the middle part of the Gulf of Suez. This event was felt in all cities located on both sides of the Gulf of Suez, with minor damage to property near the epicenter; however, no casualties were observed. The stochastic technique with a site-dependent spectral model is used to simulate the strong ground motion parameters of this earthquake in the cities located on the western side of the Gulf of Suez and the northern Red Sea, namely: Suez, Ain Sokhna, Zafarana, Ras Gharib, and Hurghada. The presence of many tourist resorts and the increase in land use planning in these cities motivate the current study. The simulated parameters comprise the Peak Ground Acceleration (PGA), Peak Ground Velocity (PGV), and Peak Ground Displacement (PGD), in addition to Pseudo Spectral Acceleration (PSA). The model developed for ground motion simulation is validated using the recordings of three accelerographs installed around the epicenter of the investigated earthquake. Depending on the site effect determined in the investigated areas using geotechnical data (e.g., shear wave velocities and microtremor recordings), the investigated areas are classified into two zones (A and B). Zone A is characterized by higher site amplification than Zone B. The ground motion parameters are simulated for each zone in the considered areas. The results reveal that the highest values of PGA, PGV, and PGD are observed at Ras Gharib city (epicentral distance ∼ 11 km) as 67 cm/s2, 2.53 cm/s, and 0.45 cm respectively for Zone A, and as 26.5 cm/s2, 1.0 cm/s, and 0.2 cm respectively for Zone B, while the lowest values of PGA, PGV, and PGD are observed at Suez city (epicentral distance ∼ 190 km) as 3.0 cm/s2, 0.2 cm/s, and 0.05 cm respectively for Zone A, and as 1.3 cm/s2, 0.1 cm/s, and 0.024 cm respectively for Zone B. Also the highest PSA values are observed in Ras Gharib city, as 200 cm/s2 and 78 cm/s2 for Zone A and Zone B respectively, while the lowest PSA values are observed in Suez city, as 7 cm/s2 and 3 cm/s2 for Zone A and Zone B respectively. These results are consistent with the earthquake magnitude, epicentral distances, and site characterizations.

  16. Touch Satiety: Differential Effects of Stroking Velocity on Liking and Wanting Touch Over Repetitions

    PubMed Central

    Triscoli, Chantal; Ackerley, Rochelle; Sailer, Uta

    2014-01-01

    A slow, gentle caress of the skin is a salient hedonic stimulus. Low threshold, unmyelinated C-tactile afferents fire preferentially to this type of touch, where slow (<1 cm/s) and fast (>10 cm/s) stroking velocities produce lower firing frequencies and are rated as less pleasant. The current aim was to investigate how the experience of tactile pleasantness changes with repeated exposure (satiety to touch). A further aim was to determine whether tactile satiety varied with different stroking velocities. The experimental paradigm used a controlled brush stroke to the forearm that was delivered repeatedly for ∼50 minutes. In Experiment 1, brush strokes were administered at three different velocities (0.3 cm/s, 3 cm/s and 30 cm/s), which were presented in a pseudo-randomised order. In Experiment 2, brush strokes were applied using only one velocity (either 3 or 30 cm/s). After each stroke, the participants rated both subjective pleasantness (liking) and wanting (the wish to be further exposed to the same stimulus) for each tactile sensation. In Experiment 1, both pleasantness and wanting showed a small, but significant, decrease over repetitions during stroking at 3 cm/s only, where the mean values for pleasantness and wanting were similar. Conversely, slower (0.3 cm/s) and faster (30 cm/s) stroking showed no decrease in ratings over time, however pleasantness was rated higher than wanting. In Experiment 2, both pleasantness and wanting showed a significant decrease over repetitions for both applied velocities, with a larger decrease in ratings for stroking at 3 cm/s. In conclusion, satiety to touch occurred with a slow onset and progression, where pleasantness and wanting ratings to stroking at 3 cm/s were affected more than at the slower or faster velocities. Tactile satiety appears to differ compared to appetitive and olfactory satiety, because the hedonic and rewarding aspects of touch persist for some time. PMID:25405620

  17. Recalibration of the earthworm tier 1 risk assessment of plant protection products.

    PubMed

    Christl, Heino; Bendall, Julie; Bergtold, Matthias; Coulson, Mike; Dinter, Axel; Garlej, Barbara; Hammel, Klaus; Kabouw, Patrick; Sharples, Amanda; von Mérey, Georg; Vrbka, Silvie; Ernst, Gregor

    2016-10-01

    In the first step of earthworm risk assessment for plant protection products (PPPs), the risk is assessed by comparing the no-observed effect levels (NOELs) from laboratory reproduction tests with the predicted exposure of the PPP in soil, while applying a trigger value (assessment factor [AF]) to cover uncertainties. If this step indicates a potential risk, field studies are conducted. However, the predicted environmental concentration in soil, which can be calculated, for example, for different soil layers (ranging from 0-1 cm to 0-20 cm), and the AF determine the conservatism that is applied in this first step. In this review paper, the tier 1 earthworm risk assessment for PPPs is calibrated by comparing the NOEL in earthworm reproduction tests with effect levels on earthworm populations under realistic field conditions. A data set of 54 pairs of studies conducted in the laboratory and in the field with the same PPP was compiled, allowing a direct comparison of relevant endpoints. The results indicate that a tier 1 AF of 5 combined with a regulatory relevant soil layer of 0 to 5 cm provides a conservative tier 1 risk assessment. A risk was identified by the tier 1 risk assessment in the majority of cases at application rates that were of low risk for natural earthworm populations under field conditions. Increasing the conservatism in the tier 1 risk assessment by reducing the depth of the regulatory relevant soil layer or by increasing the tier 1 AF would increase the number of false positives and trigger a large number of additional field studies. This increased conservatism, however, would not increase the margin of safety for earthworm populations. The analysis revealed that the risk assessment is conservative if an AF of 5 and a regulatory relevant soil layer of 0 to 5 cm are used. Integr Environ Assess Manag 2016;12:643-650. © 2015 SETAC.
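
    The exposure side of the tier 1 comparison rests on a standard mixing-depth calculation: the predicted environmental concentration (PEC) in soil is the applied mass distributed through the regulatory relevant soil layer. The Python sketch below shows why the 0-5 cm layer is four times more conservative than 0-20 cm; the application rate and bulk density are illustrative values.

```python
def pec_soil(rate_kg_per_ha, depth_cm=5.0, bulk_density_g_cm3=1.5):
    """PEC in mg/kg: applied mass spread through one hectare of soil
    down to `depth_cm`, assuming a uniform bulk density."""
    area_cm2 = 1.0e8                                        # 1 ha in cm^2
    soil_mass_kg = area_cm2 * depth_cm * bulk_density_g_cm3 / 1000.0
    return rate_kg_per_ha * 1.0e6 / soil_mass_kg            # mg per kg soil

print(f"PEC(0-5 cm)  = {pec_soil(1.0):.2f} mg/kg")          # ~1.33 mg/kg
print(f"PEC(0-20 cm) = {pec_soil(1.0, depth_cm=20.0):.2f} mg/kg")
```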

  18. 12 CFR 1500.5 - What aggregate thresholds apply to merchant banking investments?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... investments held by the financial holding company under this part exceeds: (1) 30 percent of the Tier 1... percent of the Tier 1 capital of the financial holding company (b) How do these thresholds apply to a...

  19. The successively temporal error concealment algorithm using error-adaptive block matching principle

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Hsuan; Wu, Tsai-Hsing; Chen, Chao-Chyun

    2014-09-01

    Generally, temporal error concealment (TEC) adopts the blocks around the corrupted block (CB) as the search pattern to find the best-match block in the previous frame. Once the CB is recovered, it is referred to as the recovered block (RB). Although the RB can serve as the search pattern to find the best-match block of another CB, the RB is not identical to its original block (OB). The error between the RB and its OB limits the performance of TEC. The successively temporal error concealment (STEC) algorithm is proposed to alleviate this error. The STEC procedure consists of tier-1 and tier-2. Tier-1 divides a corrupted macroblock into four corrupted 8 × 8 blocks and generates a recovering order for them. The corrupted 8 × 8 block in first place of the recovering order is recovered in tier-1, and the remaining 8 × 8 CBs are recovered in tier-2 along the recovering order. In tier-2, the error-adaptive block matching principle (EA-BMP) is proposed for using the RB as the search pattern to recover the remaining corrupted 8 × 8 blocks. The proposed STEC outperforms sophisticated TEC algorithms by at least 0.3 dB in average PSNR at a packet error rate of 20%.
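
    To make the block-matching idea above concrete, here is a minimal sketch of generic temporal error concealment, assuming grayscale numpy frames: the one-pixel ring around a corrupted 8 × 8 block serves as the search pattern, the displacement minimizing the sum of absolute differences (SAD) in the previous frame wins, and the winning block is copied in. The function name, search window, and plain SAD criterion are illustrative assumptions; this shows the standard TEC principle, not the paper's EA-BMP.

        import numpy as np

        # Hedged sketch of temporal error concealment by boundary block matching:
        # the 1-pixel ring around a corrupted bs x bs block is matched (by SAD)
        # against candidate positions in the previous frame, and the best
        # candidate's interior is copied in. The block must not touch the border.

        def conceal_block(prev: np.ndarray, cur: np.ndarray, y: int, x: int,
                          bs: int = 8, search: int = 8) -> None:
            """Recover cur[y:y+bs, x:x+bs] in place from prev using its boundary ring."""
            ring = cur[y-1:y+bs+1, x-1:x+bs+1].astype(np.int32)
            mask = np.ones_like(ring, dtype=bool)
            mask[1:-1, 1:-1] = False  # keep only the 1-pixel boundary ring
            best, best_pos = None, (y, x)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    py, px = y + dy, x + dx
                    if (py < 1 or px < 1 or
                            py + bs + 1 > prev.shape[0] or px + bs + 1 > prev.shape[1]):
                        continue  # candidate ring would fall outside the frame
                    cand = prev[py-1:py+bs+1, px-1:px+bs+1].astype(np.int32)
                    sad = np.abs(cand[mask] - ring[mask]).sum()
                    if best is None or sad < best:
                        best, best_pos = sad, (py, px)
            py, px = best_pos
            cur[y:y+bs, x:x+bs] = prev[py:py+bs, px:px+bs]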

  20. 34 CFR 85.435 - What must I require of a primary tier participant?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 1 2011-07-01 2011-07-01 false What must I require of a primary tier participant? 85.435 Section 85.435 Education Office of the Secretary, Department of Education GOVERNMENTWIDE DEBARMENT... must I require of a primary tier participant? You as an agency official must require each participant...

  1. 34 CFR 85.435 - What must I require of a primary tier participant?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false What must I require of a primary tier participant? 85.435 Section 85.435 Education Office of the Secretary, Department of Education GOVERNMENTWIDE DEBARMENT... must I require of a primary tier participant? You as an agency official must require each participant...

  2. Differentiating by Readiness: Strategies and Lesson Plans for Tiered Instruction, Grades K-8

    ERIC Educational Resources Information Center

    Turville, Joni; Allen, Linda; Nickelsen, LeAnn

    2010-01-01

    This book provides a comprehensive introduction to tiering plus step-by-step instructions for using it in your classroom. Also included are 23 ready-to-apply blackline masters, which provide helpful ideas for activities and classroom management. Contents include: (1) Building the foundation: What is tiering in differentiated instruction?; (2) The…

  3. Ada (Tradename) Compiler Validation Summary Report. International Business Machines Corporation. IBM Development System for the Ada Language for VM/CMS, Version 1.0. IBM 4381 (IBM System/370) under VM/CMS.

    DTIC Science & Technology

    1986-04-29

    COMPILER VALIDATION SUMMARY REPORT: International Business Machines Corporation IBM Development System for the Ada Language for VM/CMS, Version 1.0 IBM 4381...tested using command scripts provided by International Business Machines Corporation. These scripts were reviewed by the validation team. Tests were run...s): IBM 4381 (System/370) Operating System: VM/CMS, release 3.6 International Business Machines Corporation has made no deliberate extensions to the

  4. Cardiovascular Toxicity of Illicit Anabolic-Androgenic Steroid Use

    PubMed Central

    Baggish, Aaron L.; Weiner, Rory B.; Kanayama, Gen; Hudson, James I.; Lu, Michael T.; Hoffmann, Udo; Pope, Harrison G.

    2017-01-01

    Background: Millions of individuals have used illicit anabolic-androgenic steroids (AAS), but the long-term cardiovascular associations of these drugs remain incompletely understood. Methods: Employing a cross-sectional cohort design, we recruited 140 experienced male weightlifters aged 34–54 years, comprising 86 men reporting at least 2 years of cumulative lifetime AAS use and 54 non-using men. Using transthoracic echocardiography and coronary computed tomography angiography, we assessed 3 primary outcome measures: left ventricular (LV) systolic function (left ventricular ejection fraction [LVEF]), LV diastolic function (early relaxation velocity [E´]), and coronary atherosclerosis (coronary artery plaque volume). Results: Compared to non-users, AAS users demonstrated relatively reduced LV systolic function (mean±SD LVEF = 52±11% vs. 63±8%; P<0.001) and diastolic function (E´ = 9.3±2.4 cm/s vs. 11.1±2.0 cm/s; P<0.001). Users currently taking AAS at the time of evaluation (N = 58) showed significantly reduced LV systolic (LVEF = 49±10% vs. 58±10%; P<0.001) and diastolic function (E´ = 8.9±2.4 cm/s vs. 10.1±2.4 cm/s; P=0.035) compared to users currently off-drug (N = 28). Additionally, AAS users demonstrated higher coronary artery plaque volume than nonusers (median [interquartile range] 3 [0, 174] mL³ vs. 0 [0, 69] mL³; P = 0.012). Lifetime AAS dose was strongly associated with coronary atherosclerotic burden (increase [95% confidence interval] in rank of plaque volume for each 10-year increase in cumulative duration of AAS use: 0.60 SD units [0.16 to 1.03 SD units]; P = 0.008). Conclusions: Long-term AAS use appears to be associated with myocardial dysfunction and accelerated coronary atherosclerosis. These forms of AAS-associated adverse cardiovascular phenotypes may represent a previously under-recognized public-health problem. PMID:28533317

  5. The effect of incentive-based formularies on prescription-drug utilization and spending.

    PubMed

    Huskamp, Haiden A; Deverka, Patricia A; Epstein, Arnold M; Epstein, Robert S; McGuigan, Kimberly A; Frank, Richard G

    2003-12-04

    Many employers and health plans have adopted incentive-based formularies in an attempt to control prescription-drug costs. We used claims data to compare the utilization of and spending on drugs in two employer-sponsored health plans that implemented changes in formulary administration with those in comparison groups of enrollees covered by the same insurers. One plan simultaneously switched from a one-tier to a three-tier formulary and increased all enrollee copayments for medications. The second switched from a two-tier to a three-tier formulary, changing only the copayments for tier-3 drugs. We examined the utilization of angiotensin-converting-enzyme (ACE) inhibitors, proton-pump inhibitors, and 3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors (statins). Enrollees covered by the employer that implemented more dramatic changes experienced slower growth than the comparison group in the probability of the use of a drug and a major shift in spending from the plan to the enrollee. Among the enrollees who were initially taking tier-3 statins, more enrollees in the intervention group than in the comparison group switched to tier-1 or tier-2 medications (49 percent vs. 17 percent, P<0.001) or stopped taking statins entirely (21 percent vs. 11 percent, P=0.04). Patterns were similar for ACE inhibitors and proton-pump inhibitors. The enrollees covered by the employer that implemented more moderate changes were more likely than the comparison enrollees to switch to tier-1 or tier-2 medications but not to stop taking a given class of medications altogether. Different changes in formulary administration may have dramatically different effects on utilization and spending and may in some instances lead enrollees to discontinue therapy. The associated changes in copayments can substantially alter out-of-pocket spending by enrollees, the continuation of the use of medications, and possibly the quality of care. Copyright 2003 Massachusetts Medical Society

  6. Improved Serodiagnostic Performance for Lyme Disease by Use of Two Recombinant Proteins in Enzyme-Linked Immunosorbent Assay Compared to Standardized Two-Tier Testing.

    PubMed

    Bradshaw, Gary L; Thueson, R Kelley; Uriona, Todd J

    2017-10-01

    The most reliable test method for the serological confirmation of Lyme disease (LD) is the 2-tier method recommended by the CDC in 1995. The first-tier test is a low-specificity enzyme-linked immunosorbent assay (ELISA), and the second-tier tests are higher-specificity IgG and IgM Western blots. This study describes the selection of two Borrelia burgdorferi recombinant proteins and the evaluation of their performance in a simple 1-tier test for the serological confirmation of LD. These two proteins were generated from (i) the full-length dbpA gene combined with the invariable region 6 of the vlsE gene (DbpA/C6) and (ii) the full-length ospC gene (OspC). The expressed DbpA/C6 and OspC proteins were useful in detecting anti-Borrelia IgG and IgM antibodies, respectively. A blind study was conducted on a well-characterized panel of 279 human sera from the CDC, comparing ELISAs using these two recombinant antigens with the 2-tier test method. The two methods (DbpA/C6-OspC versus 2-tier test) were equivalent in identifying sera from negative-control subjects (99% and 100% specificity, respectively) and in detecting stage II and III LD patient sera (100% and 100% sensitivity). However, the DbpA/C6-OspC ELISA was markedly better (80% versus 63%) than the 2-tier test method in detecting anti-Borrelia antibodies in stage I LD patients. The findings suggest that these antigens could be used in a simple 1-tier ELISA that is faster to perform, easier to interpret, and less expensive than the 2-tier test method, and which is better at detecting Borrelia-specific antibodies in sera from patients with stage I LD. Copyright © 2017 Bradshaw et al.

  7. Anatomy of a Security Operations Center

    NASA Technical Reports Server (NTRS)

    Wang, John

    2010-01-01

    Many agencies and corporations are either contemplating or in the process of building a cyber Security Operations Center (SOC). Those agencies that have established SOCs are most likely working on major revisions or enhancements to existing capabilities. As principal developers of the NASA SOC, this presenter's goals are to provide the GFIRST community with examples of some of the key building blocks of an Agency-scale cyber Security Operations Center. This presentation will include the inputs and outputs, the facilities or shell, as well as the internal components and the processes necessary to maintain the SOC's subsistence - in other words, the anatomy of a SOC. Details to be presented include the SOC architecture and its key components: Tier 1 call center, data entry, and incident triage; Tier 2 monitoring, incident handling and tracking; Tier 3 computer forensics, malware analysis, and reverse engineering; Incident Management System; Threat Management System; SOC Portal; Log Aggregation and Security Incident Management (SIM) systems; flow monitoring; IDS; etc. Specific processes and methodologies discussed include Incident States and associated Work Elements; the Incident Management Workflow Process; Cyber Threat Risk Assessment methodology; and Incident Taxonomy. The evolution of the cyber Security Operations Center will be discussed, from reactive to proactive operations. Finally, the resources necessary to establish an Agency-scale SOC, as well as the lessons learned in the process of standing up a SOC, will be presented.
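
    As a purely illustrative companion to the incident-management material above, the sketch below models incident states and allowed transitions as a small state machine. The state names and transitions are hypothetical stand-ins mapped loosely onto the Tier 1/2/3 structure; the presentation's actual Incident States and Work Elements are not specified here.

        # Hedged sketch of an incident-management state machine of the kind a
        # SOC workflow might use; state names and transitions are hypothetical,
        # not the NASA SOC's actual incident states.
        from enum import Enum, auto

        class IncidentState(Enum):
            REPORTED = auto()    # Tier 1: call center / data entry
            TRIAGED = auto()     # Tier 1: initial categorization
            MONITORED = auto()   # Tier 2: incident handling and tracking
            ESCALATED = auto()   # Tier 3: forensics / malware analysis
            CLOSED = auto()

        ALLOWED = {
            IncidentState.REPORTED: {IncidentState.TRIAGED, IncidentState.CLOSED},
            IncidentState.TRIAGED: {IncidentState.MONITORED, IncidentState.CLOSED},
            IncidentState.MONITORED: {IncidentState.ESCALATED, IncidentState.CLOSED},
            IncidentState.ESCALATED: {IncidentState.CLOSED},
            IncidentState.CLOSED: set(),
        }

        def advance(state: IncidentState, nxt: IncidentState) -> IncidentState:
            """Move an incident to its next state, rejecting illegal transitions."""
            if nxt not in ALLOWED[state]:
                raise ValueError(f"illegal transition {state.name} -> {nxt.name}")
            return nxt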

  8. The effect of a three-tier formulary on antidepressant utilization and expenditures.

    PubMed

    Hodgkin, Dominic; Parks Thomas, Cindy; Simoni-Wastila, Linda; Ritter, Grant A; Lee, Sue

    2008-06-01

    Health plans in the United States are struggling to contain rapid growth in their spending on medications. They have responded by implementing multi-tiered formularies, which label certain brand medications 'non-preferred' and require higher patient copayments for those medications. This multi-tier policy relies on patients' willingness to switch medications in response to copayment differentials. The antidepressant class has certain characteristics that may pose problems for implementation of three-tier formularies, such as differences in which medication works for which patient, and high rates of medication discontinuation. To measure the effect of a three-tier formulary on antidepressant utilization and spending, including decomposing spending allocations between patient and plan. We use claims and eligibility files for a large, mature nonprofit managed care organization that started introducing its three-tier formulary on January 1, 2000, with a staggered implementation across employer groups. The sample includes 109,686 individuals who were continuously enrolled members during the study period. We use a pretest-posttest quasi-experimental design that includes a comparison group, comprising members whose employer had not adopted three-tier as of March 1, 2000. This permits some control for potentially confounding changes that could have coincided with three-tier implementation. For the antidepressants that became nonpreferred, prescriptions per enrollee decreased 11% in the three-tier group and increased 5% in the comparison group. The own-copay elasticity of demand for nonpreferred drugs can be approximated as -0.11. Difference-in-differences regression finds that the three-tier formulary slowed the growth in the probability of using antidepressants in the post-period, which was 0.3 percentage points lower than it would have been without three-tier. The three-tier formulary also increased out-of-pocket payments while reducing plan payments and total spending. The results indicate that the plan enrollees were somewhat responsive to the changed incentives, shifting away from the drugs that became nonpreferred. However, the intervention also resulted in cost-shifting from plan to enrollees, indicating some price-inelasticity. The reduction in the proportion of enrollees filling any prescriptions contrasts with results of prior studies for non-psychotropic drug classes. Limitations include the possibility of confounding changes coinciding with three-tier implementation (if they affected the two groups differentially); restriction to continuous enrollees; and lack of data on rebates the plan paid to drug manufacturers. The results of this study suggest that the impact of the three-tier formulary approach may be somewhat different for antidepressants than for some other classes. Policymakers should monitor the effects of three-tier programs on utilization in psychotropic medication classes. Future studies should seek to understand the reasons for patients' limited response to the change in incentives, perhaps using physician and/or patient surveys. Studies should also examine the effects of three-tier programs on patient adherence, quality of care, and clinical and economic outcomes.
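
    The own-copay elasticity quoted above (approximately -0.11) can be approximated as the net percentage change in utilization, after subtracting the comparison group's trend, divided by the percentage change in copayment. A minimal sketch follows, with an invented copay change, since the abstract reports only the resulting elasticity:

        # Hedged sketch of the own-copay elasticity approximation: the
        # difference-in-differences change in prescriptions per enrollee
        # divided by the percent change in copayment. The copay values are
        # invented for illustration.

        def own_copay_elasticity(dq_treat: float, dq_comp: float,
                                 copay_before: float, copay_after: float) -> float:
            net_dq = dq_treat - dq_comp  # net % change in utilization
            dp = (copay_after - copay_before) / copay_before
            return net_dq / dp

        # -11% in the three-tier group vs +5% in the comparison group, with a
        # hypothetical copay rise from $10 to $25:
        print(own_copay_elasticity(-0.11, 0.05, 10.0, 25.0))  # ~ -0.107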

  9. Intelligent Systems and Advanced User Interfaces for Design, Operation, and Maintenance of Command Management Systems

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1998-01-01

    Historically, Command Management Systems (CMS) have been large, expensive, spacecraft-specific software systems that were costly to build, operate, and maintain. Current and emerging hardware, software, and user interface technologies may offer an opportunity to facilitate the initial formulation and design of a spacecraft-specific CMS, as well as to develop a more generic CMS or a set of core components for CMS systems. Current MOC (mission operations center) hardware and software include Unix workstations, the C/C++ and Java programming languages, and X and Java window interface representations. This configuration provides the power and flexibility to support sophisticated systems and intelligent user interfaces that exploit state-of-the-art technologies in human-machine systems engineering, decision making, artificial intelligence, and software engineering. One of the goals of this research is to explore the extent to which technologies developed in the research laboratory can be productively applied in a complex system such as spacecraft command management. Initial examination of some of the issues in CMS design and operation suggests that application of technologies such as intelligent planning, case-based reasoning, design and analysis tools from a human-machine systems engineering point of view (e.g., operator and designer models), and human-computer interaction tools (e.g., graphics, visualization, and animation) may provide significant savings in the design, operation, and maintenance of a spacecraft-specific CMS, as well as continuity of CMS design and development across spacecraft with varying needs. The savings in this case come from software reuse at all stages of the software engineering process.

  10. Blockade of AT1 type receptors for angiotensin II prevents cardiac microvascular fibrosis induced by chronic stress in Sprague-Dawley rats.

    PubMed

    Firoozmand, Lília Taddeo; Sanches, Andrea; Damaceno-Rodrigues, Nilsa Regina; Perez, Juliana Dinéia; Aragão, Danielle Sanches; Rosa, Rodolfo Mattar; Marcondes, Fernanda Klein; Casarini, Dulce Elena; Caldini, Elia Garcia; Cunha, Tatiana Sousa

    2018-04-20

    To test the effects of chronic stress on the cardiovascular system, the model of chronic mild unpredictable stress (CMS) has been widely used. The CMS protocol consists of the random, intermittent, and unpredictable exposure of laboratory animals to a variety of stressors over 3 consecutive weeks. In this study, we tested the hypothesis that exposure to the CMS protocol leads to left ventricle microcirculatory remodeling that can be attenuated by angiotensin II receptor blockade. Male Sprague-Dawley rats were randomly assigned to four groups: Control, Stress, Control + losartan, and Stress + losartan (N = 6 per group; losartan: 20 mg/kg/day). The rats were euthanized 15 days after CMS exposure, and blood samples and the left ventricle were collected. Rats submitted to CMS presented increased glycemia and increased corticosterone, noradrenaline, and adrenaline concentrations, and losartan reduced the concentration of the circulating amines. Cardiac angiotensin II, measured by high-performance liquid chromatography (HPLC), was significantly increased in the CMS group, and losartan treatment reduced it, while angiotensin 1-7 was significantly higher in the CMS losartan-treated group as compared with CMS. Histological analysis, verified by transmission electron microscopy, showed that rats exposed to CMS presented increased perivascular collagen, and losartan effectively prevented the development of this process. Hence, CMS induced a state of microvascular disease, with increased perivascular collagen deposition, which may be the trigger for further development of cardiovascular disease. In this case, CMS-induced fibrosis is associated with increased production of catecholamines and with a disruption of renin-angiotensin system balance, which can be prevented by angiotensin II receptor blockade.

  11. Fast emulation of track reconstruction in the CMS simulation

    NASA Astrophysics Data System (ADS)

    Komm, Matthias; CMS Collaboration

    2017-10-01

    Simulated samples of various physics processes are a key ingredient within analyses to unlock the physics behind LHC collision data. Ever larger samples are required to keep up with the increasing amounts of recorded data. During sample generation, significant computing time is spent on the reconstruction of charged particle tracks from energy deposits, which additionally scales with the pileup conditions. In CMS, the FastSimulation package is developed to provide a fast alternative to the standard simulation and reconstruction workflow. It employs various techniques to emulate track reconstruction effects in particle collision events. Several analysis groups in CMS are utilizing the package, in particular those requiring many samples to scan the parameter space of physics models (e.g. SUSY) or for the purpose of estimating systematic uncertainties. The strategies for and recent developments in this emulation are presented, including a novel, flexible implementation of tracking emulation that retains a sufficient, tuneable accuracy.

  12. Interim evaluation of the Tier 1 Program of Project P.A.T.H.S.: continuation of evidence.

    PubMed

    Shek, Daniel T L; Yu, Lu; Chan, Alex C W

    2012-01-17

    An interim evaluation study was conducted to understand the implementation of the Tier 1 Program of Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in the 2008/09 school year. One hundred and twenty-eight schools were randomly selected to provide information on the implementation details of the program via interviews, telephone interviews and self-completed questionnaires. Results showed that a majority of the workers perceived that the students had positive responses to the program and the program was helpful to the students. Program workers' views toward the implementation of the Tier 1 Program were positive across different grades and program implementation modes. In conjunction with previous studies, the present findings suggest that the Tier 1 Program of Project P.A.T.H.S. is well received by different stakeholders.

  13. Perceptions, use and attitudes of pharmacy customers on complementary medicines and pharmacy practice.

    PubMed

    Braun, Lesley A; Tiralongo, Evelin; Wilkinson, Jenny M; Spitzer, Ondine; Bailey, Michael; Poole, Susan; Dooley, Michael

    2010-07-20

    Complementary medicines (CMs) are popular amongst Australians and community pharmacy is a major supplier of these products. This study explores pharmacy customer use, attitudes and perceptions of complementary medicines, and their expectations of pharmacists as they relate to these products. Pharmacy customers randomly selected from sixty large and small, metropolitan and rural pharmacies in three Australian states completed an anonymous, self administered questionnaire that had been pre-tested and validated. 1,121 customers participated (response rate 62%). 72% had used CMs within the previous 12 months, 61% used prescription medicines daily and 43% had used both concomitantly. Multivitamins, fish oils, vitamin C, glucosamine and probiotics were the five most popular CMs. 72% of people using CMs rated their products as 'very effective' or 'effective enough'. CMs were as frequently used by customers aged 60 years or older as younger customers (69% vs. 72%) although the pattern of use shifted with older age. Most customers (92%) thought pharmacists should provide safety information about CMs, 90% thought they should routinely check for interactions, 87% thought they should recommend effective CMs, 78% thought CMs should be recorded in customer's medication profile and 58% thought pharmacies stocking CMs should also employ a complementary medicine practitioner. Of those using CMs, 93% thought it important for pharmacists to be knowledgeable about CMs and 48% felt their pharmacist provides useful information about CMs. CMs are widely used by pharmacy customers of all ages who want pharmacists to be more involved in providing advice about these products.

  14. Subchronic safety evaluation of CMS-1 (a botanical antihypertensive product derived from Semen Cnidium monnieri) in Sprague-Dawley rats and beagle dogs.

    PubMed

    Gong, Xue-Lian; Gao, Ting-Ting; Zhao, Li-Jun; Zhu, Hai; Xia, Zhen-Na; Lu, Wen; Lu, Guo-Cai

    2014-08-01

    CMS-1, mainly composed of imperatorin as its active compound, is a partially purified fraction of a Chinese herbal medicine, Semen Cnidium monnieri. CMS-1 has the potential to be further developed as a new treatment for hypertension. Thus, we studied its toxicity in both Sprague-Dawley rats and beagle dogs. Rats (0-900 mg/kg/day) and dogs (0-450 mg/kg/day) received CMS-1 orally for 30 consecutive days, followed by a 15-day recovery period. The major targets of CMS-1 toxicity were the gastrointestinal system (inappetence), the liver (hepatocellular necrosis, enzyme elevation), the thymus (atrophy), the cardiovascular system (hypotension, changes in ECG T and P waveforms, elevation of nitric oxide levels), and the hematological system (disturbances in RBC parameters). Most treatment-induced adverse effects were reversible or showed a progressive recovery upon discontinuation of the treatment. The No Observed Adverse Effect Level (NOAEL) was 100 mg/kg/day for rats and 50 mg/kg/day for dogs. This non-clinical study suggests that clinical monitoring of CMS-1 in patients should focus on the gastrointestinal system; blood tests for liver function, electrolytes, and blood homeostasis; cardiovascular functions; and immune functions. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Energy Frontier Research With ATLAS: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, John; Black, Kevin; Ahlen, Steve

    2016-06-14

    The Boston University (BU) group is playing key roles across the ATLAS experiment: in detector operations, the online trigger, the upgrade, computing, and physics analysis. Our team has been critical to the maintenance and operations of the muon system since its installation. During Run 1 we led the muon trigger group, and that responsibility continues into Run 2. BU maintains and operates the ATLAS Northeast Tier 2 computing center. We are actively engaged in the analysis of ATLAS data from Run 1 and Run 2. Physics analyses we have contributed to include Standard Model measurements (W and Z cross sections, ttbar differential cross sections, WWW* production), evidence for the Higgs decaying to τ+τ-, and searches for new phenomena (technicolor, Z' and W', vector-like quarks, dark matter).

  16. Introduction of conditional mean spectrum and conditional spectrum in the practice of seismic safety evaluation in China

    NASA Astrophysics Data System (ADS)

    Ji, Kun; Bouaanani, Najib; Wen, Ruizhi; Ren, Yefei

    2018-05-01

    This paper aims to introduce and implement the use of the conditional mean spectrum (CMS) and the conditional spectrum (CS) as the main input parameters in the practice of seismic safety evaluation (SSE) in China, in place of the currently used uniform hazard spectrum (UHS). For this purpose, a procedure for M-R-epsilon seismic hazard deaggregation in China was first developed. For illustration purposes, two typical sites in China, with one or two dominant seismic zones, were considered as examples to carry out seismic hazard deaggregation and illustrate the construction of the CMS/CS. Two types of correlation coefficients were used to generate the CMS, and the results were compared over a vibration period range of interest. Ground motion records were selected from the NSMONS (2007-2015) and PEER NGA-West2 databases to match the target CMS and CS. Hazard consistency of the spectral accelerations of the selected ground motion records was evaluated and validated by computing the annual exceedance probability of the response spectra and comparing the results with the hazard curve of each site of concern at different periods. The tools developed in this work and their illustrative application to specific case studies in China are a first step towards the adoption of the CMS and CS into the practice of seismic safety evaluation in this country.
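
    For readers unfamiliar with the construction, a CMS is usually assembled (in the common Baker formulation) as ln CMS(Ti) = μ_lnSa(M, R, Ti) + ρ(Ti, T*) · ε(T*) · σ_lnSa(Ti), with M, R, and ε(T*) taken from the hazard deaggregation. The sketch below is a minimal illustration with placeholder GMPE means, sigmas, and correlation coefficients; it is not the paper's implementation.

        import math

        # Hedged sketch of constructing a conditional mean spectrum: at each
        # period Ti, the conditional mean of ln Sa is the GMPE mean plus the
        # correlation-weighted target epsilon. The inputs would normally come
        # from a hazard deaggregation, a GMPE, and a published correlation
        # model; the values below are placeholders.

        def cms(mu_ln_sa, sigma_ln_sa, rho_with_tstar, eps_tstar):
            """Return CMS ordinates (g) given per-period GMPE mean/sigma of
            ln Sa, correlations rho(Ti, T*), and the deaggregated epsilon at T*."""
            return [math.exp(mu + rho * eps_tstar * sig)
                    for mu, sig, rho in zip(mu_ln_sa, sigma_ln_sa, rho_with_tstar)]

        # Placeholder inputs for three periods, conditioned at T* (rho = 1 there):
        print(cms(mu_ln_sa=[-1.2, -0.9, -1.5],
                  sigma_ln_sa=[0.60, 0.65, 0.70],
                  rho_with_tstar=[0.7, 1.0, 0.6],
                  eps_tstar=1.5))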

  17. CON4EI: Development of testing strategies for hazard identification and labelling for serious eye damage and eye irritation of chemicals.

    PubMed

    Adriaens, E; Verstraelen, S; Alépée, N; Kandarova, H; Drzewiecka, A; Gruszka, K; Guest, R; Willoughby, J A; Van Rompay, A R

    2018-06-01

    Assessment of acute eye irritation potential is part of the international regulatory requirements for safety testing of chemicals. In recent decades, many efforts have been made in the search for alternative methods to replace the regulatory in vivo Draize rabbit eye test (OECD TG 405). Success in terms of complete replacement of the regulatory in vivo Draize rabbit eye test has not yet been achieved. The main objective of the CEFIC-LRI-AIMT6-VITO CON4EI (CONsortium for in vitro Eye Irritation testing strategy) project was to develop tiered testing strategies for serious eye damage and eye irritation assessment that can lead to complete replacement of OECD TG 405. A set of 80 reference chemicals (balanced, for example, by the important drivers of classification and by physical state) was tested with seven test methods. Based on the results of this project, three different strategies were suggested: a standalone strategy (EpiOcular ET-50), a two-tiered strategy, and a three-tiered strategy, each of which can be used to distinguish between Cat 1 and Cat 2 chemicals and chemicals that do not require classification (No Cat). The two-tiered and three-tiered strategies use an RhCE test method (EpiOcular EIT or SkinEthic™ EIT) at the bottom (identification of No Cat) in combination with the BCOP LLBO (two-tiered strategy) or the BCOP OP-KIT and SMI (three-tiered strategy) at the top (identification of Cat 1). With the proposed strategies, 71.1%-82.9% of Cat 1, 64.2%-68.5% of Cat 2, and ≥80% of No Cat chemicals were correctly identified. Similar results were obtained for the Top-Down and Bottom-Up approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
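
    A minimal sketch of the two-tiered Bottom-Up flow described above, assuming boolean stand-ins for the assay read-outs; the real decision criteria are the test methods' own prediction models and cut-offs, which are not reproduced here:

        # Hedged sketch of the two-tiered Bottom-Up strategy: an RhCE method at
        # the bottom clears No Cat chemicals, BCOP LLBO at the top flags Cat 1,
        # and the remainder default to Cat 2. Boolean inputs are stand-ins for
        # the actual assay read-outs.

        def classify_bottom_up(rhce_predicts_no_cat: bool,
                               bcop_llbo_predicts_cat1: bool) -> str:
            if rhce_predicts_no_cat:       # tier 1: RhCE (EpiOcular EIT / SkinEthic EIT)
                return "No Cat"
            if bcop_llbo_predicts_cat1:    # tier 2: BCOP LLBO
                return "Cat 1"
            return "Cat 2"                 # neither cleared nor flagged

        print(classify_bottom_up(False, True))  # -> "Cat 1"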

  18. A histological ontology of the human cardiovascular system.

    PubMed

    Mazo, Claudia; Salazar, Liliana; Corcho, Oscar; Trujillo, Maria; Alegre, Enrique

    2017-10-02

    In this paper, we describe a histological ontology of the human cardiovascular system developed in collaboration between histology experts and computer scientists. The histological ontology was developed following an existing methodology using Conceptual Models (CMs) and validated using OOPS!, expert evaluation with CMs, and an assessment of how accurately the ontology can answer Competency Questions (CQs). It is publicly available at http://bioportal.bioontology.org/ontologies/HO and https://w3id.org/def/System. The histological ontology is developed to support complex tasks, such as supporting teaching activities, medical practices, and biomedical research, as well as natural language interactions.

  19. Using latent class analysis to identify academic and behavioral risk status in elementary students.

    PubMed

    King, Kathleen R; Lembke, Erica S; Reinke, Wendy M

    2016-03-01

    Identifying classes of children on the basis of academic and behavior risk may have important implications for the allocation of intervention resources within Response to Intervention (RTI) and Multi-Tiered System of Support (MTSS) models. Latent class analysis (LCA) was conducted with a sample of 517 third grade students. Fall screening scores in the areas of reading, mathematics, and behavior were used as indicators of success on an end of year statewide achievement test. Results identified 3 subclasses of children, including a class with minimal academic and behavioral concerns (Tier 1; 32% of the sample), a class at-risk for academic problems and somewhat at-risk for behavior problems (Tier 2; 37% of the sample), and a class with significant academic and behavior problems (Tier 3; 31%). Each class was predictive of end of year performance on the statewide achievement test, with the Tier 1 class performing significantly higher on the test than the Tier 2 class, which in turn scored significantly higher than the Tier 3 class. The results of this study indicated that distinct classes of children can be determined through brief screening measures and are predictive of later academic success. Further implications are discussed for prevention and intervention for students at risk for academic failure and behavior problems. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
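
    As a loose, runnable stand-in for the latent class analysis described above (true LCA on categorical indicators would need a dedicated package, e.g. in R), a three-component Gaussian mixture over continuous screening scores recovers latent subgroups in a similar spirit. The synthetic data below exist only to make the sketch executable:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Hedged stand-in for LCA: with continuous screening scores (reading,
        # mathematics, behavior), a 3-component Gaussian mixture yields latent
        # subgroup assignments analogous to the three tiers reported above.

        rng = np.random.default_rng(0)
        scores = rng.normal(size=(517, 3))  # synthetic scores for 517 students

        model = GaussianMixture(n_components=3, random_state=0).fit(scores)
        classes = model.predict(scores)             # latent class per student
        print(np.bincount(classes) / len(classes))  # class proportions (cf. 32/37/31%)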

  20. Assessing the Nutritional Quality of Diets of Canadian Adults Using the 2014 Health Canada Surveillance Tool Tier System

    PubMed Central

    Jessri, Mahsa; Nishi, Stephanie K.; L’Abbé, Mary R.

    2015-01-01

    The 2014 Health Canada Surveillance Tool (HCST) was developed to assess the adherence of dietary intakes to Canada’s Food Guide. The HCST classifies foods into one of four Tiers based on thresholds for sodium, total fat, saturated fat and sugar, with Tier 1 representing the healthiest and Tier 4 the unhealthiest foods. This study presents the first application of the HCST to assess (a) the dietary patterns of Canadians and (b) the applicability of this tool as a measure of diet quality, among 19,912 adult participants of the Canadian Community Health Survey 2.2. Findings indicated that even though most processed meats and potatoes were Tier 4, the majority of reported foods in general were categorized as Tiers 2 and 3 due to the lenient criteria used in the HCST. Moving from the 1st to the 4th quartile of Tier 4 and “other” foods/beverages, there was a significant trend towards increased calories (1876 kcal vs. 2290 kcal) and “harmful” nutrients (e.g., sodium), as well as decreased “beneficial” nutrients. Compliance with the HCST was not associated with lower body mass index. Future nutrient profiling systems need to incorporate both “positive” and “negative” nutrients, an overall score, and a wider range of nutrient thresholds to better capture differences between food products. PMID:26703721
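
    To illustrate the threshold-based tier logic described above, the sketch below assigns each food the worst tier implied by its sodium, total fat, saturated fat, and sugar content. The cut-off values are invented placeholders, not Health Canada's actual HCST thresholds:

        # Hedged sketch of a threshold-based tier classifier in the spirit of
        # the HCST: a food's tier is the worst tier implied by any of its four
        # nutrients. All cut-offs below are invented placeholders.

        # (nutrient, per-100 g cut-offs for tiers 1/2/3; above the last -> tier 4)
        THRESHOLDS = {
            "sodium_mg": (120, 360, 720),
            "total_fat_g": (3, 10, 20),
            "sat_fat_g": (1.5, 4, 8),
            "sugar_g": (5, 15, 30),
        }

        def hcst_tier(food: dict) -> int:
            """Return 1 (healthiest) .. 4 (least healthy) for a nutrient dict."""
            worst = 1
            for nutrient, cuts in THRESHOLDS.items():
                value = food.get(nutrient, 0.0)
                tier = 1 + sum(value > c for c in cuts)  # count exceeded cut-offs
                worst = max(worst, tier)
            return worst

        # Sodium of 500 mg exceeds the first two cut-offs, so this food is Tier 3:
        print(hcst_tier({"sodium_mg": 500, "total_fat_g": 2, "sat_fat_g": 1, "sugar_g": 4}))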
