2009.10.29.

ALICE Data Acquisition
The Large Hadron Collider (LHC) will make protons or ions collide not only at a much higher energy but also at a much higher rate than ever before. To digest the resulting wealth of information, the four LHC experiments have to push data-handling technology well beyond the current state of the art, be it in trigger rates, data acquisition bandwidth or data archiving. ALICE, the experiment dedicated to the study of nucleus-nucleus collisions, had to design a data acquisition system that operates efficiently in two widely different running modes: the very frequent but small events, with few produced particles, encountered in pp mode, and the relatively rare but extremely large events, with tens of thousands of new particles, produced in ion operation (L = 10²⁷ cm⁻² s⁻¹ in Pb-Pb with 100 ns bunch crossings and L = 10³⁰-10³¹ cm⁻² s⁻¹ in pp with 25 ns bunch crossings).

The ALICE data acquisition system needs, in addition, to balance its capacity to record the steady stream of very large events resulting from central collisions with the ability to select and record rare processes with small cross-sections. These requirements result in an aggregate event-building bandwidth of up to 2.5 GByte/s and a storage capability of up to 1.25 GByte/s, giving a total of more than 1 PByte of data every year. As shown in the figure, ALICE needs a data storage capacity that far exceeds that of the current generation of experiments. This data rate is equivalent to six times the contents of the Encyclopædia Britannica every second.
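The "more than 1 PByte every year" figure is consistent with the 1.25 GByte/s storage rate once an effective running time is assumed. A back-of-envelope check, where the ~10⁶ s of heavy-ion running per year is an assumption and not a number from the text:

```python
# Back-of-envelope check of the ALICE storage figures.
storage_rate = 1.25e9   # bytes/s to mass storage (from the text)
running_time = 1.0e6    # seconds of heavy-ion running per year (assumed)

bytes_per_year = storage_rate * running_time
print(f"{bytes_per_year / 1e15:.2f} PByte/year")  # 1.25 PByte/year
```

At that assumed duty cycle the storage rate alone already exceeds the quoted 1 PByte per year.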

aliceinfo.cern.ch/…/Chap2_DAQ.html


Architecture

The figure above shows the architecture of the ALICE trigger and data acquisition systems. For every bunch crossing in the LHC machine, the Central Trigger Processor (CTP) decides in less than one microsecond whether to collect the data resulting from a particular collision. The trigger decision is distributed to the front-end electronics (FEE) of each detector via the corresponding Local Trigger Unit (LTU) and an optical broadcast system: the Trigger, Timing and Control system (TTC). Upon reception of a positive decision, the data are transferred from the detectors over the 400 optical Detector Data Links (DDL) via PCI adapters (RORC) to a farm of 300 individual computers: the Local Data Concentrators/Front-End Processors (LDC/FEP). The several hundred data fragments corresponding to the information from one event are checked for data integrity, processed and assembled into sub-events. These sub-events are then sent over a network for event building to one of the 40 Global Data Collector computers (GDC), which can process up to 40 different events in parallel. 20 Global Data Storage servers (GDS) store the data locally before their migration and archiving in the CERN computing centre, where they become available for offline analysis. The hardware of the ALICE DAQ system is largely based on commodity components: PCs running Linux and standard Ethernet switches for the event-building network. The required performance is achieved by interconnecting hundreds of these PCs into a large DAQ fabric. The software framework of the ALICE DAQ is called DATE (ALICE Data Acquisition and Test Environment). DATE is already in use today, during the construction and testing phase of the experiment, and is evolving gradually towards the final production system.
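The core of this flow is the event-building step: each LDC contributes one sub-event per event, and a GDC declares the event built once all sub-events have arrived. A minimal sketch of that bookkeeping, with illustrative fragment counts rather than ALICE's actual 300 LDCs:

```python
# Minimal sketch of the LDC -> GDC event-building flow described above.
from collections import defaultdict

N_LDC = 4  # illustrative; the real system has ~300 LDCs

class GDC:
    """Collects sub-events until a full event is built."""
    def __init__(self, n_ldc):
        self.n_ldc = n_ldc
        self.pending = defaultdict(dict)  # event_id -> {ldc_id: sub_event}
        self.built = []

    def receive(self, event_id, ldc_id, sub_event):
        self.pending[event_id][ldc_id] = sub_event
        if len(self.pending[event_id]) == self.n_ldc:  # all sub-events in
            self.built.append((event_id, self.pending.pop(event_id)))

gdc = GDC(N_LDC)
for event_id in (1, 2):
    for ldc in range(N_LDC):
        gdc.receive(event_id, ldc, f"fragment-{event_id}-{ldc}")

print(len(gdc.built))  # 2 fully built events
```

Because sub-events from different events interleave on the network, the real GDCs keep exactly this kind of per-event state for up to 40 events in parallel.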

DDL and RORC
The Detector Data Link (DDL) is the common hardware and protocol interface between the front-end
electronics and the DAQ system. The DDL is used to transfer the raw physics data from the detectors to the DAQ, to control the detector front-end electronics, and to download data blocks to this electronics. The current version of the DDL is based on electronics chips used for the 1 Gbit/s Fibre Channel physical layer (top picture). The next version is being developed with 2.5 Gbit/s electronics (middle picture). The interface between the DDL and the I/O bus of the Local Data Concentrator (LDC) is realized by the Read-Out Receiver Card (RORC, right picture). The current RORC is based on 32-bit/33 MHz PCI. It acts as a PCI master and uses direct memory access (DMA) to the LDC memory. It reaches the maximum physical PCI speed (132 MByte/s), as shown on the performance plot. The next RORC version will use 64-bit/66 MHz PCI.

DATE
The DATE framework is a distributed, process-oriented system. It is designed to run on Unix platforms connected by an IP-capable network and sharing a common file system such as NFS. It uses the standard Unix system tools available for process synchronisation and data transmission. The DATE system performs different functions:

- The Local Data Concentrator (LDC) collects event fragments transferred by the DDLs into its main memory and reassembles them into sub-events. The LDC is also capable of local data recording (if used in standalone mode).
- The Global Data Collector (GDC) puts together all the sub-events pertaining to the same physics event, builds the full events and archives them to the mass storage system. The Event Building and Distribution System (EBDS) balances the load amongst the GDCs.
- The DATE run control controls and synchronises the processes running in the LDCs and the GDCs.
- The monitoring programs receive data streams from LDCs or GDCs. They can be executed on any LDC, GDC or any other machine accessible via the network.

DATE includes interfaces with the Trigger and the HLT systems.
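The EBDS's job of balancing load amongst the GDCs can be illustrated with a least-loaded assignment policy. The policy and numbers below are illustrative assumptions, not the actual EBDS algorithm:

```python
# Sketch of EBDS-style load balancing: each new event goes to the GDC
# currently holding the fewest events (illustrative policy, not the
# real EBDS algorithm).
def assign_events(n_events, n_gdc):
    load = [0] * n_gdc
    for _ in range(n_events):
        gdc = min(range(n_gdc), key=lambda g: load[g])  # least-loaded GDC
        load[gdc] += 1
    return load

load = assign_events(100, 40)
print(max(load) - min(load))  # 1 -> no GDC holds more than one extra event
```

Whatever the real policy, the goal is the same: keep every GDC's event count nearly equal so no single machine becomes the event-building bottleneck.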

AFFAIR
AFFAIR (A Flexible Fabric and Application Information Recorder) is the performance-monitoring software developed by the ALICE Data Acquisition project. AFFAIR is largely based on open-source code and is composed of the following components: data gathering, inter-node communication employing DIM, fast and temporary round-robin database storage, and permanent storage and plot generation using ROOT. Real-time data are monitored via a PHP-generated web interface. AFFAIR has been used successfully during the ALICE Data Challenges, monitoring up to one hundred nodes and generating thousands of plots accessible on the web.
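The "round-robin database" used for temporary storage keeps monitoring data bounded: new samples overwrite the oldest ones, so storage never grows no matter how long the fabric runs. A toy illustration of the idea (not the actual on-disk format such tools use):

```python
# Toy round-robin store: a fixed-size buffer where new samples
# overwrite the oldest, so storage stays constant over time.
class RoundRobinStore:
    def __init__(self, size):
        self.samples = [None] * size
        self.size = size
        self.next = 0

    def record(self, value):
        self.samples[self.next] = value  # overwrite the oldest slot
        self.next = (self.next + 1) % self.size

rr = RoundRobinStore(3)
for v in range(5):      # store 5 samples in a 3-slot buffer
    rr.record(v)
print(rr.samples)       # [3, 4, 2] -> only the newest 3 survive
```

This is why the round-robin stage is described as "fast and temporary": it trades history depth for constant, predictable storage, with ROOT handling the permanent record.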

STORAGE
The ALICE experiment Mass Storage System (MSS) will have to combine a very high bandwidth (1.25 GByte/s) with the capacity to store huge amounts of data: more than 1 PByte every year. The mass storage system is made of:

- the Global Data Storage (GDS), performing the temporary storage of data at the experimental pit;
- the Permanent Data Storage (PDS), for long-term archiving of data in the CERN Computing Centre;
- the Mass Storage System software, managing the creation, access and archiving of data.

Several disk technologies are being tested by the ALICE DAQ for the GDS: standard disk storage, Network Attached Storage (NAS) and Storage Area Network (SAN). The current baseline for the PDS is to use several magnetic tape devices in parallel to reach the desired bandwidth. A tape robot is coupled with the tape devices to automate the mounting and dismounting of the tapes. The MSS software is the CASTOR system, designed and developed in the CERN/IT division.
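The number of tape devices needed in parallel follows directly from the aggregate target and the per-drive rate. The 30 MByte/s per-drive figure below is an assumption for drives of that era; the text gives only the aggregate target:

```python
import math

# How many tape drives must run in parallel to sustain the PDS bandwidth?
target = 1.25e9    # bytes/s required by the MSS (from the text)
per_drive = 30e6   # bytes/s per tape drive (assumed figure)

drives = math.ceil(target / per_drive)
print(drives)      # 42 drives working in parallel
```

At any plausible single-drive rate the answer is "dozens", which is why a robot-fed tape library rather than individual drives is the baseline.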


DATA CHALLENGE
Since 1998, the ALICE experiment and the CERN/IT division have jointly executed several large-scale, high-throughput distributed computing exercises: the ALICE Data Challenges (ADC). The goals of these regular exercises are to test hardware and software components of the data acquisition and computing systems in realistic conditions, and to carry out an early integration of the overall ALICE computing infrastructure. The fourth ALICE Data Challenge (ADC IV) was performed at CERN in 2002. DATE demonstrated an aggregate performance of more than 1 GByte/s (top figure). The data throughput to the disk servers reached 350 MByte/s (middle figure), and the goal is to reach 200 MByte/s to tape. The bottom figure shows the effect of load balancing on the number of events built on the different GDCs.

SIMULATION
The goals of the simulation of the Trigger, DAQ and HLT systems are to verify the overall system design and to evaluate the performance of the experiment for a set of realistic data-taking scenarios.
The ALICE experiment has therefore been decomposed into a set of components, and its functionality has been formally specified. The Trigger/DAQ/HLT simulation includes a model of the whole experiment and of the major sub-systems: Trigger, Trigger Detectors, Tracking Detectors, DAQ, HLT and Permanent Data Storage. The full simulation involves thousands of independent units representing the ALICE components, simulated in parallel. The performance of the existing component prototypes has been measured, and the results are used as input parameters for the simulation program. The simulation allows the system behaviour to be tested under different conditions, and thus possible bottlenecks and alternative design solutions to be found.
The simulation has, for example, been used extensively to verify that the Trigger, DAQ and HLT systems are able to preserve the majority of the rare triggers that could be measured by the ALICE experiment. This required adding to the DAQ a mechanism that reserves enough detector live time to allocate periods of time to rare triggers.
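One way such a reservation mechanism can work is to veto common (high-rate) triggers once buffer occupancy crosses a threshold, so that headroom always remains for rare triggers. The sketch below is a toy policy with illustrative numbers; the text does not specify the actual mechanism:

```python
# Toy rare-trigger reservation policy: common triggers are vetoed once
# the buffer is nearly full, keeping the remaining headroom for rare
# triggers. Buffer size and reserve are illustrative assumptions.
BUFFER_SIZE = 100
RESERVE = 20  # slots kept free for rare triggers

def accept(trigger, occupancy):
    if trigger == "rare":
        return occupancy < BUFFER_SIZE            # rare: accept if any room
    return occupancy < BUFFER_SIZE - RESERVE      # common: keep headroom

print(accept("common", 85))  # False -> common trigger vetoed
print(accept("rare", 85))    # True  -> rare trigger still recorded
```

The cost of the reservation is some dead time for common triggers; the simulation's role was to verify that this cost is acceptable while the rare triggers survive.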


The figures show the simulated evolution of three major parameters (top: LDC buffer occupancy; middle: level-2 trigger rate; bottom: fraction of bandwidth to mass storage) before and after (left and right columns) the addition of this mechanism for rare triggers. The ALICE Trigger and DAQ simulation program is based on the Ptolemy hierarchical environment, an open and free software tool developed at Berkeley.

More information about the ALICE Data Acquisition
DAQ web page
Technical Design Report
Publications
Pictures of the DAQ
DAQ General Poster
Industrial Award to Quantum Corp.
CERN Weekly articles
CERN Courier articles

Copyright CERN 2008 - ALICE Collaboration


"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 

whether to collect the data resulting from a particular collision. The trigger decision is distributed to the front-end electronics (FEE) of each detector via the corresponding Local Trigger Unit (LTU) and an optical broadcast system: the Trigger, Timing and Control system (TTC).

Upon reception of a positive decision, the data are transferred from the detectors over the 400 optical Detector Data Links (DDL) via PCI adapters (RORC) to a farm of 300 computers, the Local Data Concentrators/Front-End Processors (LDC/FEP). The several hundred data fragments corresponding to one event are checked for integrity, processed and assembled into sub-events. These sub-events are then sent over the event-building network to one of the 40 Global Data Collector computers (GDC), which can process up to 40 different events in parallel. Twenty Global Data Storage servers (GDS) store the data locally before their migration and archival in the CERN Computing Centre, where they become available for offline analysis.

The hardware of the ALICE DAQ system is largely based on commodity components: PCs running Linux and standard Ethernet switches for the event-building network. The required performance is achieved by interconnecting hundreds of these PCs into a large DAQ fabric.

The software framework of the ALICE DAQ is called DATE (ALICE Data Acquisition and Test Environment). DATE is already in use today, during the construction and testing phase of the experiment, and is evolving gradually towards the final production system.
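The fragment-to-sub-event-to-full-event flow described above can be sketched as follows. This is a minimal Python illustration, not the DATE implementation; the function name, the tuple layout and the completeness criterion (one sub-event per LDC) are all assumptions made for the sketch.

```python
from collections import defaultdict

def build_events(sub_events, n_ldcs):
    """Group sub-events by event ID; declare an event fully built once
    every LDC has contributed its sub-event (toy completeness rule)."""
    pending = defaultdict(dict)   # event_id -> {ldc_id: payload}
    complete = {}
    for event_id, ldc_id, payload in sub_events:
        pending[event_id][ldc_id] = payload
        if len(pending[event_id]) == n_ldcs:
            complete[event_id] = pending.pop(event_id)
    return complete

# Sub-events from two LDCs: events 1 and 2 complete, event 3 still pending.
stream = [(1, "ldc0", b"aa"), (2, "ldc0", b"bb"),
          (1, "ldc1", b"cc"), (3, "ldc0", b"dd"), (2, "ldc1", b"ee")]
built = build_events(stream, n_ldcs=2)
print(sorted(built))  # → [1, 2]
```

In the real system the GDCs play this role in parallel, each assembling the events that the event-building network routes to it.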
DDL and RORC

The Detector Data Link (DDL) is the common hardware and protocol interface between the front-end electronics and the DAQ system. The DDL is used to transfer the raw physics data from the detectors to the DAQ, to control the detector front-end electronics, and to download data blocks to this electronics. The current version of the DDL is based on electronics chips used for the 1 Gbit/s Fibre Channel physical layer (top picture). The next version is being developed with 2.5 Gbit/s electronics (middle picture).

The interface between the DDL and the I/O bus of the Local Data Concentrator (LDC) is realised by the Read-Out Receiver Card (RORC) (right picture). The current RORC is based on PCI, 32 bits at 33 MHz. It acts as a PCI master and uses direct memory access to the LDC memory. It reaches the maximum physical PCI speed (32 bits × 33 MHz = 132 MByte/s), as shown on the performance plot. The next RORC version will use PCI, 64 bits at 66 MHz.

DATE

The DATE framework is a distributed, process-oriented system. It is designed to run on Unix platforms connected by an IP-capable network and sharing a common file system such as NFS. It uses the standard Unix system tools for process synchronisation and data transmission. The DATE system performs several functions:

The Local Data Concentrator (LDC) collects the event fragments transferred by the DDLs into its main memory and reassembles them into sub-events. The LDC is also capable of local data recording when used in standalone mode.

The Global Data Collector (GDC) puts together all the sub-events pertaining to the same physics event, builds the full events and archives them to the mass storage system.

The Event Building and Distribution System (EBDS) balances the load among the GDCs.

The DATE run control controls and synchronises the processes running in the LDCs and the GDCs.

The monitoring programs receive data from LDC or GDC streams. They can be executed on any LDC, GDC or any other machine accessible via the network.
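The load-balancing role of the EBDS can be pictured with a least-loaded dispatch policy: each event goes to the GDC that currently has the least accumulated work. This is a hypothetical sketch of one plausible policy, not the actual EBDS algorithm, which the text does not specify.

```python
import heapq

def dispatch(events, n_gdcs):
    """Assign each (event_id, size) to the GDC with the least
    accumulated load, tracked in a min-heap of (load, gdc_id)."""
    heap = [(0, gdc) for gdc in range(n_gdcs)]
    heapq.heapify(heap)
    assignment = {}
    for event_id, size in events:
        load, gdc = heapq.heappop(heap)   # least-loaded GDC
        assignment[event_id] = gdc
        heapq.heappush(heap, (load + size, gdc))
    return assignment

# One large central-collision event and three small ones, two GDCs:
plan = dispatch([(1, 80), (2, 10), (3, 10), (4, 10)], n_gdcs=2)
print(plan)  # → {1: 0, 2: 1, 3: 1, 4: 1}
```

The large event occupies one GDC while the small events pile onto the other until the loads even out, which is the behaviour the load-balancing plots in the Data Challenge section illustrate.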
DATE includes interfaces with the Trigger and the HLT systems.

AFFAIR

AFFAIR (A Flexible Fabric and Application Information Recorder) is the performance monitoring software developed by the ALICE Data Acquisition project. AFFAIR is largely based on open-source code and is composed of the following components: data gathering, inter-node communication employing DIM, fast temporary storage in a round-robin database, and permanent storage and plot generation using ROOT. Real-time data are monitored via a PHP-generated web interface. AFFAIR has been used successfully during the ALICE Data Challenges, monitoring up to one hundred nodes and generating thousands of plots accessible on the web.

STORAGE

The ALICE experiment Mass Storage System (MSS) has to combine a very high bandwidth (1.25 GByte/s) with the capacity to store huge amounts of data: more than 1 PByte every year. The mass storage system is made of:

the Global Data Storage (GDS), performing the temporary storage of data at the experimental pit;

the Permanent Data Storage (PDS), for the long-term archive of data in the CERN Computing Centre;

the Mass Storage System software, managing the creation, access and archival of the data.

Several disk technologies are being tested by the ALICE DAQ for the GDS: standard disk storage, Network Attached Storage (NAS) and Storage Area Network (SAN). The current baseline for the PDS is to use several magnetic tape devices in parallel to reach the desired bandwidth. A tape robot is coupled with the tape devices to automate the mounting and dismounting of the tapes. The MSS software is the CASTOR system, designed and developed in the CERN/IT division.
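The headline storage figures are easy to cross-check. Assuming roughly 10^6 seconds of effective heavy-ion running per year (an assumption for the sketch, not a figure given in the text), the sustained 1.25 GByte/s rate reproduces the stated annual volume:

```python
bandwidth_gb_s = 1.25           # sustained rate to mass storage, GByte/s
running_seconds = 1_000_000     # assumed effective running time per year
data_pb = bandwidth_gb_s * running_seconds / 1_000_000  # GByte -> PByte
print(data_pb)  # → 1.25, consistent with "more than 1 PByte every year"
```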
DATA CHALLENGE

Since 1998, the ALICE experiment and the CERN/IT division have jointly executed several large-scale, high-throughput distributed computing exercises: the ALICE Data Challenges (ADC). The goals of these regular exercises are to test hardware and software components of the data acquisition and computing systems in realistic conditions, and to perform an early integration of the overall ALICE computing infrastructure.

The fourth ALICE Data Challenge (ADC IV) was performed at CERN in 2002. DATE demonstrated an aggregate performance of more than 1 GByte/s (top figure). The data throughput to the disk servers reached 350 MByte/s (middle figure), and the goal is to reach 200 MByte/s to tape. The bottom figure shows the effect of load balancing on the number of events built on the different GDCs.

SIMULATION

The goals of the simulation of the Trigger, DAQ and HLT systems are to verify the overall system design and to evaluate the performance of the experiment for a set of realistic data-taking scenarios. The ALICE experiment has therefore been decomposed into a set of components, and its functionality has been formally specified. The Trigger/DAQ/HLT simulation includes a model of the whole experiment and of the major sub-systems: Trigger, Trigger Detectors, Tracking Detectors, DAQ, HLT and Permanent Data Storage. The full simulation involves thousands of independent units representing the ALICE components, simulated in parallel. The performance of the existing component prototypes has been measured and the results used as input parameters for the simulation program. The simulation allows the system behaviour to be tested under different conditions, and thus possible bottlenecks to be found and alternative design solutions to be explored.
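A toy version of one such simulated quantity — LDC buffer occupancy, far simpler than the Ptolemy-based program described above — can be stepped through in a few lines. All rates and capacities here are invented for illustration:

```python
def simulate_buffer(arrivals, drain_per_step, capacity):
    """Step a single LDC buffer through per-step arrival volumes with a
    fixed drain rate; return the occupancy trace and the overflow loss."""
    occupancy, trace, lost = 0.0, [], 0.0
    for arriving in arrivals:
        occupancy += arriving
        if occupancy > capacity:          # buffer full: data would be lost
            lost += occupancy - capacity
            occupancy = capacity
        occupancy = max(0.0, occupancy - drain_per_step)
        trace.append(occupancy)
    return trace, lost

# A burst of central collisions followed by a quiet period.
trace, lost = simulate_buffer([5, 5, 5, 0, 0, 0], drain_per_step=2, capacity=10)
print(trace, lost)  # → [3.0, 6.0, 8.0, 6.0, 4.0, 2.0] 1.0
```

Even this toy model shows the qualitative behaviour the full simulation probes: bursts drive the buffer towards its ceiling, and any capacity shortfall shows up as lost data.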
The simulation has, for example, been used extensively to verify that the Trigger, DAQ and HLT systems are able to preserve the majority of the rare triggers that could be measured by the ALICE experiment. This has required the addition to the DAQ of a mechanism that reserves enough detector lifetime to allocate periods of time to rare triggers.
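The rare-trigger mechanism is described only in general terms; one way to picture it is an admission policy that stops accepting common triggers once the buffers pass a threshold, keeping headroom for rare ones. The threshold value and the trigger classification below are hypothetical, not taken from the ALICE design.

```python
def admit(trigger_class, buffer_fill, threshold=0.7):
    """Accept rare triggers unconditionally; throttle common triggers
    once buffer occupancy exceeds the reserved-headroom threshold."""
    if trigger_class == "rare":
        return True
    return buffer_fill < threshold

print(admit("rare", 0.95))    # → True  (rare triggers always taken)
print(admit("common", 0.5))   # → True  (plenty of headroom left)
print(admit("common", 0.8))   # → False (headroom reserved for rare triggers)
```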
The figures show the simulated evolution of three major parameters (top: LDC buffer occupancy; middle: trigger level-2 rate; bottom: fraction of bandwidth to mass storage) before and after the addition of this mechanism for rare triggers (left and right columns, respectively). The ALICE Trigger and DAQ simulation program is based on the Ptolemy hierarchical environment, an open and free software tool developed at Berkeley.

More information about the ALICE Data Acquisition

DAQ web page
Technical Design Report
Publications
Pictures of the DAQ
DAQ General Poster
Industrial Award to Quantum Corp.
CERN Weekly articles
CERN Courier articles

Copyright CERN 2008 - ALICE Collaboration