The Environmental Impact of Cloud Computing
25 February 2024 | 7:23 pm

Ana Rodrigues, too, deleted her Spotify account, citing numerous valid reasons (see my You Shouldn’t Listen To Spotify post), one of which in particular stood out for me: Spotify is bad for the environment. She cites Natalia Waniczek’s FFConf 2022 talk, in which Natalia presented painful facts such as:

  1. If you were to listen to an album 27 times back to back, the printed CD would be more environmentally friendly;
  2. Streaming Spotify for a working month emits as much CO2 as driving 32 km in an average car.

I wonder how we can achieve what Natalia’s talk was all about—working towards a greener world from behind the keyboard, or, in other words, frugal computing. I mean, I know what the concept is about, and it’s quite obvious what needs to be done (less!), but in a world where everything needs to be upscaled instead of downscaled, who’s going to listen to the few voices swimming against the current?

At my current client, we’re leveraging Kubernetes’ isolated throwaway-pod system to auto-upscale and deploy clusters. Distributed—that is, cloud-based—enterprise solutions are without a single doubt less environmentally friendly than their classic client-server counterparts: modern software is often upscaled too fast, and it is needlessly complex and distributed across multiple energy consumers. Environmentalism is apparently never a decisive part of the engineering puzzle.

The most conspicuous example is of course (yet again) generative AI. A few days ago, Nature published an article proclaiming that Generative AI’s environmental costs are soaring—and mostly secret:

[…] the artificial intelligence (AI) industry is heading for an energy crisis. It’s an unusual admission. At the World Economic Forum’s annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. “There’s no way to get there without a breakthrough,” [said OpenAI chief executive Sam Altman].

Their solution? Certainly not downscaling. Instead, Altman invests heavily in nuclear fusion companies, hoping for a breakthrough in the field of energy generation, not in the field of AI. That alone blew my mind. The article also contains a sobering message for people relying on these Large Language Models to find stuff instead of relying on good ol’ internet searches:

ChatGPT, the chatbot created by OpenAI in San Francisco, California, is already consuming the energy of 33,000 homes. It’s estimated that a search driven by generative AI uses four to five times the energy of a conventional web search. Within years, large AI systems are likely to need as much energy as entire nations.

Every time you ask ChatGPT something you could easily have found out yourself using conventional methods, you’re burning through up to five times as much energy. Considering my students nowadays use ChatGPT almost exclusively, that’s insane. Of course OpenAI doesn’t really want you to know this, and even if you did, what else would you do but shrug and use the system anyway?
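
To get a feel for what that multiplier means at classroom scale, here’s a rough back-of-envelope sketch. The ~0.3 Wh per conventional web search is an often-quoted ballpark rather than a figure from the Nature article, and the class-size numbers are made up:

    # Back-of-envelope: extra energy from swapping web searches for LLM queries.
    # Assumptions (not from the article): ~0.3 Wh per conventional web search,
    # plus the 4-5x multiplier quoted above for a generative-AI-driven search.

    WEB_SEARCH_WH = 0.3      # often-cited ballpark for one conventional search
    LLM_MULTIPLIER = (4, 5)  # range quoted in the Nature article

    students = 30            # hypothetical class size
    queries_per_day = 20     # hypothetical queries per student per day
    days = 200               # roughly one academic year of school days

    web_kwh = students * queries_per_day * days * WEB_SEARCH_WH / 1000
    llm_kwh = (web_kwh * LLM_MULTIPLIER[0], web_kwh * LLM_MULTIPLIER[1])

    print(f"Conventional search: {web_kwh:.0f} kWh/year")
    print(f"LLM-driven search:   {llm_kwh[0]:.0f}-{llm_kwh[1]:.0f} kWh/year")
    # With these assumptions: 36 kWh vs. 144-180 kWh for a single class.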

This isn’t just a political finger-pointing game; multiple peer-reviewed scientific publications have been warning us about the environmental impact of these systems for years, yet we chose to ignore them. I’m afraid one recent Nature article won’t exactly trigger a change of heart either. And no, these “optimized” and “shared” systems are far from optimal: by the time your data reaches your device, it has passed through multiple data centers, the core network, the edge network, the access network, …

Just to name a few more:

[…] The comparative LCA showed that, for the example of global warming potential, the environmental emissions of digital distribution are ultimately higher if the digital distribution leads to more than 4x the amount of movie viewing.

The researchers summarized the total carbon footprint of the most popular applications by watch hours, estimating that footprint with three different methods:

  Video service    Watch hours/day    Avg. CO2/hour
  YouTube          1 billion          162.33 g
  Netflix          6 billion          974.33 g
  Facebook         0.1 billion        1,623 g
  TikTok           2 billion          243.83 g

The problem is that these giants keep growing, so such data is historical and quickly becomes stale, which makes accurate forecasting more and more important.

Our model finds that streaming one hour of Netflix videos is roughly between 0.783–0.983 kWh of energy consumption depending on the number of simultaneous streams (10,000–50,000) being streamed from the data center. […] Two interesting comparisons are that one hour of Netflix streaming is equivalent to about 1 kg of CO2 or the burning of 0.17–0.21 kg of coal.
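
Plugging those quoted figures into a tiny converter makes the scale a bit more tangible. The per-hour numbers come straight from the quote above; the monthly viewing time is my own assumption:

    # Convert monthly Netflix viewing time into energy and CO2,
    # using only the per-hour figures quoted from the paper above.

    KWH_PER_HOUR = (0.783, 0.983)  # quoted range, depends on simultaneous streams
    KG_CO2_PER_HOUR = 1.0          # "about 1 kg of CO2" per streaming hour

    hours_per_month = 30           # assumption: one hour of streaming per day

    kwh = (hours_per_month * KWH_PER_HOUR[0], hours_per_month * KWH_PER_HOUR[1])
    co2 = hours_per_month * KG_CO2_PER_HOUR

    print(f"{hours_per_month} h of streaming: {kwh[0]:.1f}-{kwh[1]:.1f} kWh, ~{co2:.0f} kg CO2")
    # With these assumptions: roughly 23-29 kWh and ~30 kg of CO2 per month.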

The researchers had difficulty interpolating energy data and end with an open call:

We call on industry (telecom, hosting, and streaming providers) to publish more factual numbers—not to remain secretive and silent. Allow researchers to verify and understand the impact of streaming on the environment.

Another hint at the concealment, number fiddling, and secrecy these large streaming providers engage in, further clouding (ha!) any accurate calculation of the environmental impact of cloud computing. All these findings suggest it’s high time not just for transparency, but also for responsible digital downscaling.

By Wouter Groeneveld on 25 February 2024.  Reply via email.


Two Interesting Use Cases For LLMs
20 February 2024 | 7:11 pm

I’ve openly proclaimed my dislike for current trends in AI and Large Language Models (LLMs) before: they’re being misused to generate crap to put on the internet, and the availability of hallucinated crap makes students’ learning painfully worse. So I’ve been wondering: can these ChatGPT-like systems be put to any real use? I think the answer is yes: here are two possibly interesting use cases.

Case 1: Personal Knowledge Management

I have sixteen analog notebooks digitized into an Obsidian vault, in which each linked note file has been painstakingly tagged by hand to quickly find related information, as OCR software still isn’t capable of deciphering my arcane scribbles. Heavy linking and creating ideas at the intersection of those links in true Zettelkasten style aside, wouldn’t it be awesome if I could locally spin up an LLM system that ingests my whole knowledge base, significantly easing (1) finding my own notes and (2) querying that database in a natural way?

Are you thinking what I’m thinking, Tony Stark?

J.A.R.V.I.S., Tony Stark's intelligent AI butler in Iron Man.

Jarvis, which blog post ideas from the past month haven’t been written yet? Jarvis, what goes in my secret gingerbread spice mix again? Jarvis, what were my thoughts on project x again?

Wouldn’t that be great? Things I most often look for in my Obsidian vault are:

  • Recipes: exact spice mixture percentages, comparisons of waffle recipes, etc;
  • TODOs and blog post idea templates;
  • Specific information on technologies I’ve once used but forgotten;
  • Photos by tag;
  • Contact information of people that I haven’t stored in CardDAV yet;
  • Summaries and mind maps of e.g. books or papers.

Looking for these things sometimes takes too much time because of my poor tagging skills, and it has become increasingly hard to link notes extensively because of the ever-growing number of notes. On top of that, I sometimes can’t read my own notes (bad scans, bad handwriting, context I’ve since forgotten, …). LLMs could potentially solve all these issues.

Of course the biggest concern here is privacy. I have zero interest in feeding my private notes to an existing system maintained by Silicon Valley tech boys, meaning I’d want to run and train it locally. I can’t bring myself to free up the time to experiment with this, as I know it’ll take a lot of fiddling, and the bottleneck remains the analog-to-digital conversion: the textual input is still minimal (no OCR and limited tags). And no, I won’t switch to digital note-taking.
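
Just to sketch what I have in mind: the retrieval plumbing itself wouldn’t be the hard part. Something along these lines could work, assuming a locally running model behind an Ollama-style HTTP API on localhost (the endpoint, model name, vault path, and response field are placeholder assumptions, and the naive keyword scoring is a crude stand-in for proper embedding search):

    # Naive "Private Jarvis" sketch: pull the most relevant notes out of an
    # Obsidian vault and hand them to a locally running LLM as context.
    # The HTTP endpoint, model name, and response format are assumptions
    # (an Ollama-style API); keyword scoring stands in for real embeddings.

    import json
    import urllib.request
    from pathlib import Path

    VAULT = Path.home() / "obsidian-vault"            # placeholder vault location
    ENDPOINT = "http://localhost:11434/api/generate"  # assumed local LLM API
    MODEL = "llama3"                                  # assumed local model name

    def top_notes(question: str, k: int = 3) -> list[str]:
        """Rank notes by naive keyword overlap with the question."""
        words = set(question.lower().split())
        scored = []
        for note in VAULT.rglob("*.md"):
            text = note.read_text(encoding="utf-8", errors="ignore")
            scored.append((sum(text.lower().count(w) for w in words), text))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [text for _, text in scored[:k]]

    def ask(question: str) -> str:
        """Stuff the best-matching notes into the prompt and query the local model."""
        context = "\n---\n".join(top_notes(question))
        prompt = f"Answer using only these notes:\n{context}\n\nQuestion: {question}"
        payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as response:
            return json.loads(response.read())["response"]

    print(ask("What goes in my secret gingerbread spice mix?"))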

I guess the Private Jarvis dream will stay a dream for now.

Case 2: Philosophical research

I’m interested in philosophy, but it’s a challenge to find what I’m looking for in ancient works. An example: I want to know what certain philosophers’ stances are on laziness, nostalgia, whatever. I have Montaigne’s Essais on my shelf, but the index doesn’t provide an answer right away. A digital version enables CMD+F-ing to quickly find stuff, but that still isn’t very productive. So I thought: what if we could ask these questions through LLMs, or through a specialized model that is exclusively fed philosophical works? Wouldn’t that make it easier for me to find, e.g., what Schopenhauer thinks of leisure time? Internet search engines do a poor job of answering these questions nowadays.

I put https://anonchatgpt.com/ and https://www.bing.com/chat to the test. Input:

What does Schopenhauer think about poker playing?

Output, CoPilot:

Arthur Schopenhauer, the German philosopher, had some rather scathing views on card-playing, including poker. In his work “The Wisdom of Life and Other Essays,” he expressed the following opinion: “Hence, in all countries the chief occupation of society is card-playing, and it is the gauge of its value, and an outward sign that it is bankrupt in thought. Because people have no thoughts to deal in, they deal cards, and try and win one another’s money. Idiots!”

Output, ChatGPT:

Schopenhauer believed that poker, like all forms of gambling, is a futile and reckless pursuit driven by the desire for quick and easy wealth. He saw it as a manifestation of the human tendency to seek pleasure and avoid pain without regard for the consequences. Schopenhauer would likely view poker playing as a form of escapism and distraction from the harsh realities of life, ultimately leading to disappointment and suffering.

At least CoPilot adds references (GoodReads & Wikipedia). Both forget to add that Schopenhauer himself was a big poker player, earning a good buck with the game he thought was for empty-headed losers. Or did he? When I ask CoPilot this, the AI denies it, but other AIs confirm it. As I have yet to work my way through all of his works, I don’t really know which LLM engine is hallucinating and which one is telling the truth.

I’ve been discussing the possibilities of using LLMs to engage in a philosophical conversation with Gibru who’s more at home when it comes to new (self-hosted) LLM tools, and while he thinks it can be a good way to bootstrap your philosophical thinking mode, he also warned me about the hallucination effects.

That means my use case also falls through: I wanted to use LLMs to quickly find facts, not have a discussion—although that might also be interesting just to get the ball rolling. Since I don’t know the facts yet, I have to believe every answer the system provides. Additionally, only if I run these locally can I train them on the original sources, not on second-hand Wikipedia-style “sources”.


What would Epicurus think of the Nintendo Switch if he were alive today?

The immersive gaming experience, captivating worlds, and engaging gameplay could align with Epicurus’s emphasis on seeking pleasurable experiences.

Yeah right. That sounds like yet another misinterpretation of Epicurus’ principles of ataraxia. I guess the two interesting use cases will stay just that: interesting philosophical thinking exercises.

By Wouter Groeneveld on 20 February 2024.  Reply via email.


25% Creating, 75% Hustling
17 February 2024 | 6:35 pm

Looking back on several creative endeavors, I feel like I spend, on average, 25% of my time creating and 75% hustling. I don’t think that’s a healthy balance at all: it should be 60%+ creating and 40% or less hustling. Yet in a world where uninterrupted yelling is the norm if you want your stuff to sell, it seems that I have little choice.

This ad-campaign-driven approach to doing business has been bothering me for quite some time. I want to create, not keep poking people to take a look at my creations. I want to create, not publish campaigns periodically and overflow other people’s feeds while sending money to Meta. I want to create, not worry about the black hole the creation will end up in if I don’t do enough of the hustling.

The first draft of my book The Creative Programmer took roughly three months to write. Before that, most of the thinking and idea-connecting work was already done. Then it took another good year to re-write it into an acceptable publishable form with the help of editors. For The Creative Programmer, I explicitly opted to work together with the publisher Manning because I had the illusion that the 75% hustling work would be handled by them. After all, I am indirectly paying them to do so as my royalty rates are, compared to indie publishing, laughably low.

I was wrong. Besides the initial social media and standard website campaigns, I was expected to do the hustling myself. I was encouraged to attend conferences and submit talk proposals, but neither the ticket nor the accommodation costs would be reimbursed. Considering that on average I make €2.50 per book sale, I would need every single person in two rooms of 200 people to buy a copy just to break even.

I put in more hustling work. I did interviews, participated in podcasts, posted on forums, wrote technical articles directly or indirectly promoting the book on tech news sites, and gave talks (locally!). None of that really bore any fruit. My Q3 2023 report tells me that we’ve shifted 2,056 units in total since the early access release. That’s a low number for such a technology-agnostic book—in fact, I’m starting to doubt I’ll ever see another penny besides the initial advance.

Don’t get me wrong: I absolutely do not care about the money; I wrote it in 2022 as part of my paid PhD job. What does bother me, though, is that despite the 75% hustling, the effort put in so far has largely been wasted. And I hate hustling. This makes me think that next time I should just care even less and definitely self-publish: if it’s out there, it’s out there. If someone happens to find it, great; if not, also great. At least then I can fully focus on the creation part and completely trash the dreaded hustling part.

That’s fine by me since my income doesn’t depend on those royalties, but what if yours does? Then you’ll probably need to crank up that hustling percentage, or at least learn how to play that game more efficiently and learn to live with it instead of gradually resenting it.

The problem isn’t limited to my pathetic authorship attempt. A few years ago, my wife tried to make it as an independent jeweler, which here in Belgium is an incredibly crowded market dominated by cheap e-stores that maximize profit by importing junk from China and hiring entire remote teams in India to do their dirty ad work for them. Needless to say, it is simply impossible to compete with them. Of course, hand-crafted silver jewelry at premium prices isn’t the same as cheap knock-off necklaces, but it’s easy to be intimidated by all the social media screaming from businesses, even from other local jewelers who seem all too eager to play that game. Two years later, she gave up in frustration.

In a local television show called Andermans Zaken, a well-known business expert interviews and helps all kinds of entrepreneurs who are facing an urgent problem: high expenses, little to no revenue. The number one solution, time and time again, is to launch an online ad campaign. At that point in the show, as I shake my head in disbelief, I zap to another channel.

I understand the basics of how economics work under the greedy watch of capitalism, but that doesn’t mean that I have to agree with its principles—especially when it comes to my urge to exert my Homo Faber.

I think I’m done shouting at the void. Perhaps I should instead let go of my urge to publish, (un)consciously somehow still expecting people to care.

By Wouter Groeneveld on 17 February 2024.  Reply via email.


