Carbon Footprint of the LHC
I did my bachelor’s in Vietnam with a thesis on a theoretical 5D gravitational theory. Comparing the infrastructure for experimental physics in Vietnam to that in Europe or the US is like putting me in the ring with Mayweather. That’s probably why all the top physics students from Vietnam graduate as theorists. The LHC changed me back when I was an intern in CERN’s Summer Student programme. It was only two months, but enough of a push to shift me from a theorist to an experimentalist. Needless to say, the LHC changed my career path.
The LHC! The grandest wonder of the modern world! Expensive! Huge! But…
Is the LHC worth it?
Yes, of course! The Higgs boson is the most significant scientific discovery of the 21st century, and the LHC has probably produced more PhD-level data scientists than any other institution in the world. Sabine may think the LHC is not worth it; I think it is – academically, technologically, and economically. But what about sustainability?
For the past few months, I have been getting more environmentally conscious, acknowledging how much impact the food system, daily energy consumption, and commuting have on our ecosystem. As an increasing number of people become aware of the environmental consequences of their actions, the carbon footprint is turning into a second “price tag” we pay for each of our choices (unless you don’t care about it – “ignorance is bliss 😇”). So the question for today is:
“How extensive is the carbon footprint from the LHC?”
A simple analogy is a car. A car’s carbon footprint comes both from manufacturing it and from operating it. For the LHC, the total carbon footprint comes from its construction, its energy consumption, and the storage of the data from its four detectors. Let’s run the numbers one at a time.
🏗 🛠 Construction Carbon Footprint 🛠 🏗
Disclaimer: This is a naive comparison. An SUV weighs about 2 tons, and manufacturing one produces roughly 35 tons of CO2. The LHC weighs 38000 tons, so by the same ratio its construction produced roughly 0.7 million tons of CO2. Obviously, the comparison is unfair because of the LHC’s incredible complexity; it is nothing like an SUV rolling off a mass-production assembly line. The LHC also uses a considerable amount of greenhouse gases for the cryogenics of its 27 km ring of magnets, whereas a car only uses water and air to cool its engine.
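The naive scaling above fits in a few lines of Python. The figures are the ones from the text; the per-ton-of-machine scaling factor is my own crude assumption, not a real life-cycle analysis.

```python
# Naive scaling of manufacturing emissions from an SUV to the LHC.
suv_mass_t = 2        # tons, mass of an SUV
suv_co2_t = 35        # tons of CO2 to manufacture one SUV
lhc_mass_t = 38_000   # tons, mass of the LHC

# Crude assumption: CO2 scales linearly with the mass of the machine.
co2_per_ton = suv_co2_t / suv_mass_t              # 17.5 t CO2 per ton
lhc_construction_co2_t = lhc_mass_t * co2_per_ton

print(f"{lhc_construction_co2_t:,.0f} tons of CO2")  # 665,000 tons, ~0.7 million
```

The linear-scaling assumption is the whole “disclaimer”: a tunnel full of superconducting magnets has nothing like the emissions profile of stamped sheet metal, so treat this as an order-of-magnitude guess at best.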
⚡️⚡️ Energy Consumption Carbon Footprint ⚡️⚡️
The LHC is a more-than-hefty super engine, consuming nearly a quarter of the total electricity usage of the canton of Geneva.
- Run 1 (2009-2013): 600 GWh per year, with the peak at 650 GWh in 2012.
- Run 2 (2015-2018): 750 GWh per year.
For simplicity, I will ignore the long shutdown between 2013 and 2015. Since the shutdown was mainly about upgrades and manufacturing, its carbon footprint belongs with the construction footprint of the previous section. The total energy consumption of the LHC over Run 1 and Run 2 is about 4.7 TWh (FYI: global Bitcoin mining consumes about 130 TWh per year).
In Switzerland, each kWh of electricity produced carries a carbon price tag of 0.13 kg of CO2 (globally 0.47 kg/kWh, Vietnam 0.22 kg/kWh). Cha-ching! We got the first number: the energy consumption of the LHC during Run 1 and Run 2 amounts to about 0.6 million tons of CO2.
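The conversion is just total kWh times the grid’s emission factor. A quick sketch using only the figures above, including what the same 4.7 TWh would cost on the global and Vietnamese grids:

```python
# CO2 from the LHC's electricity use over Run 1 and Run 2.
total_energy_kwh = 4.7e9  # 4.7 TWh expressed in kWh

# Emission factors from the text, in kg CO2 per kWh produced.
co2_per_kwh = {"Switzerland": 0.13, "Global average": 0.47, "Vietnam": 0.22}

for grid, factor in co2_per_kwh.items():
    co2_million_t = total_energy_kwh * factor / 1e9  # kg -> million tons
    print(f"{grid}: {co2_million_t:.2f} million tons CO2")
# Switzerland comes out at ~0.61 million tons, the 0.6 quoted above.
```

Running the same accelerator on the global average grid would more than triple the footprint, which is a quiet argument for hosting it on Swiss hydro-heavy electricity.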
🖥 🖥 Data Storage Carbon Footprint 🖥 🖥
The LHC produces approximately 600 million proton-proton collisions per second at ATLAS and CMS. However, the trigger systems (or recorders) of the four detectors only keep information from about 200 of those millions of collisions. Each recorded collision produces bunches of muons, photons, and trackable particles, and the recorded dynamic information of these particles is about 1 megabyte per collision. Two 10-hour shifts per day, 300 days per year: the total data to be stored comes to roughly 15 PB per year. The CERN Advanced Storage System now holds more than 330 PB of data from the LHC alone.
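Reading the trigger rate as 200 recorded collisions per second for each of the four detectors (my interpretation – the text is ambiguous on this point), the yearly volume lands in the same ballpark as the quoted 15 PB:

```python
# Rough yearly data volume from the LHC's triggers.
events_per_sec = 200              # collisions kept per second, per detector (assumed)
detectors = 4                     # ATLAS, CMS, ALICE, LHCb
event_size_mb = 1                 # MB of recorded information per collision
seconds_per_day = 2 * 10 * 3600   # two 10-hour shifts
days_per_year = 300

mb_per_year = events_per_sec * detectors * event_size_mb * seconds_per_day * days_per_year
pb_per_year = mb_per_year / 1e9   # 1 PB = 1e9 MB (decimal units)
print(f"{pb_per_year:.1f} PB per year")  # ~17 PB, close to the ~15 PB quoted
```

If the 200 events per second were instead a combined figure for all four detectors, the estimate drops to about 4 PB per year, so the per-detector reading is what reproduces the quoted number.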
The data is stored in CERN’s data center. You might suppose this is a simple, streamlined process of saving your data onto a few thousand hard drives. However, the data in CERN’s data center is under constant access and is continually transferred to other computational clusters for processing and analysis, on top of the cooling and power consumption of the center itself. You can easily see the resemblance between the LHC’s data center and a cloud-service data center in its activity and data-access patterns. Since I have already walked you through a carbon-footprint calculation via power consumption, let me show you a different approach.
Saving and storing 100 gigabytes of data in the cloud for a year results in a carbon footprint of about 0.2 tons of CO2…
It’s a lot for the sake of your convenience. Will you keep binging Netflix after learning that one hour of streaming costs 0.4 kg of CO2? Yes, of course you will, just as the scientists will keep analyzing LHC data for any new physics they can think of. Cha-ching! Keeping 330 PB in the cloud at that rate works out to about 0.66 million tons of CO2 per year.
💸 💸 Total carbon footprint of the LHC 💸 💸
You can see from this that the bulk of the LHC’s carbon footprint comes from data storage: a single year of storing 330 PB already matches the entire construction footprint or the full energy bill of Run 1 and Run 2, and it recurs every year the data is kept – roughly the annual emissions of 165000 cars. The data center is the culprit here, as it always is. Of course, data-center technology keeps improving, with higher energy efficiency every generation. However, the Jevons paradox suggests the opposite outcome. In layman’s terms: higher efficiency or technological advancement lowers the resource cost of each unit of demand, but market demand catches up and total resource use ends up even higher.
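As a sanity check on the storage arithmetic, here is the cloud-rate figure scaled to CERN’s archive, using only the numbers quoted in this post:

```python
# Storage footprint at the cloud rate: 0.2 t CO2 per year per 100 GB stored.
stored_pb = 330
stored_gb = stored_pb * 1e6          # 1 PB = 1e6 GB (decimal units)
co2_t_per_gb_year = 0.2 / 100        # 0.002 t CO2 per GB per year

co2_t_per_year = stored_gb * co2_t_per_gb_year
print(f"{co2_t_per_year / 1e6:.2f} million tons CO2 per year")  # 0.66
```

Note this is a recurring cost: unlike construction or a completed run, the archive keeps emitting every year it is kept online, and it only grows.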
During my Ph.D., I worked on a small-scale experiment (less than 1 million EUR in budget). My task was to cover all the activities of the experiment: setting up the apparatus, assembling the components, calibrating, taking data, and analyzing it. One daunting question I kept asking myself is which data is disposable and which is valuable. The total amount of raw data from my experiment is 2 TB, but the final, valuable data that went into the publication is just 80 MB. Of course, there are multiple post-processing steps from the raw data to the golden nugget. Still, all 2 TB of raw data are now being stored in the data center, because keeping raw data is good scientific practice.
Can we change this? Can we store scientific data sustainably? This is an open question that I will explore in the near future.