New research has found that training artificial intelligence (AI) models like GPT-3 in data centres can directly consume 700,000 gallons of clean fresh water (enough to produce 370 BMW automobiles or 320 Tesla electric vehicles).
According to the ‘Making AI Less Thirsty’ research paper, many AI models are trained and deployed on power-hungry servers housed inside warehouse-scale data centres, which are frequently described as energy hogs. Millions of litres of clean freshwater are consumed both to generate the electricity that powers these servers and to cool them, and it is this cooling that makes AI chatbots so thirsty.
According to the authors, the volume of clean fresh water required to train GPT-3 is equivalent to the amount needed to fill a nuclear reactor’s cooling tower.
Furthermore, according to Gizmodo, OpenAI has not published details on the time required to train GPT-3, making it difficult for the researchers to estimate its water consumption precisely.
Microsoft, which has partnered with the AI startup and built supercomputers for AI training, says its latest supercomputer contains 10,000 graphics cards and over 285,000 processor cores and would require extensive cooling, revealing the vast scale of the operation behind artificial intelligence.
The researchers also estimated that ChatGPT needs to ‘drink’ a 500 ml bottle of water for a simple conversation of 20-50 questions and answers, depending on when and where ChatGPT is used.
“While a 500 ml bottle of water might not seem too much, the total combined water footprint for inference is still extremely large, considering ChatGPT’s billions of users,” the researchers said.
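The scale the researchers describe can be illustrated with a back-of-envelope calculation based on their per-conversation figure. The sketch below uses the paper’s estimate of roughly 500 ml per 20-50 exchanges; the user count and questions-per-user values are illustrative assumptions, not figures from the study.

```python
# Back-of-envelope estimate of ChatGPT's inference water footprint,
# scaling the paper's ~500 ml per 20-50 exchanges figure.

ML_PER_SESSION = 500        # paper's estimate: ~500 ml per session of 20-50 exchanges
QUESTIONS_PER_SESSION = 35  # midpoint of the 20-50 range (assumption)

def water_footprint_litres(total_questions: int) -> float:
    """Scale the per-session water estimate to a given number of questions."""
    sessions = total_questions / QUESTIONS_PER_SESSION
    return sessions * ML_PER_SESSION / 1000  # convert ml to litres

# Hypothetical scenario: 100 million users asking 10 questions each.
total = water_footprint_litres(100_000_000 * 10)
print(f"{total:,.0f} litres")  # roughly 14 million litres under these assumptions
```

Even with conservative usage assumptions, the total quickly reaches the millions-of-litres range the researchers warn about.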
Furthermore, the researchers noted that by employing a principled methodology to estimate the fine-grained water footprint, they concretely demonstrated that AI models such as Google’s LaMDA may consume staggering amounts of water, in the range of millions of litres.