Categories
quotes ranting

Memory, Biography, or Identity

Most of what we do does not leave any traces in our memory, biography, or identity

Hartmut Rosa, interview with Claudio Gallo, published in LA Review of Books [lareviewofbooks.org], June 2015

I came across this quote in Productivity: Are We Okay? [youtube.com], a video by Wisecrack discussing whether the drive for productivity and “hustle culture” is making us (or at least most of us) worse off: overstressed, depressed and burned out.

Almost nothing you do on a daily basis is going to be a core memory. Be it work —your job or a side hustle— or keeping up with the Joneses, or “bettering” yourself by reading all the latest must-read books, or whatever. In short, “self-optimization” or “grinding”.

I don’t grind, or spend effort self-optimizing. In my “free time” I write this blog, read, watch movies, practice my photography, and do a little exercise. Or at least these are the things I define as my hobbies, but finding the time to do any of that when I have two teenage kids… I long ago gave up on achieving all of it; something had to give. So a lot of that lost out to the kids; they are the most important part.

Photography probably suffered the most. I used to go out for the express purpose of taking photos with my fancy camera, but now my camera is almost exclusively a thing I use on holiday, and then I have to balance between being with my family and taking good photos… being with the family wins, so my travel photos are… meh.

Confusion also suffered. The pace of posting slowed way down when my kids were born. Etc. etc. Focusing on the kids, even if it meant doing less of my hobbies, makes sense examined in the spirit of the quote: my kids are going to leave the biggest traces on my “memory, biography and identity.” This is why I had kids; while they don’t define me, they are integral to my definition of self.

In the past few years, as the girls have gotten older, I have had more time, and started filling that time by returning to other hobbies. I read more (though you wouldn’t know that from Confusion, as I have not posted many book reviews in the past decade; I have in fact read a lot in the past few years, mostly working through the complete short-story corpora of a number of writers: Turgenev, Chekhov, Nabokov, Hemingway, O’Connor and more). Speaking of Confusion, I have posted at a faster pace in the past few years. I spend more time going out with my friends, not just with my family. I’ve also watched a lot more movies and TV shows —that aren’t Disney or Pixar or the like— in the past few years.

All of this is in the service of doing more things I enjoy as my kids don’t take up so much of my time, and inevitably want to spend less of their time with me. In short, I’ve been doing more things that make me happy. What I have not been doing is “hustling”. I don’t spend excessive hours working outside of “normal work hours”, something I am proud of, especially as I transitioned to working at home when COVID lockdowns started and I still work from home more than 90 percent of the time. But I don’t feel work has invaded my home like it has for some people. I’m not a workaholic.

But it was not always so. In my first jobs I spent almost all my time working. I remember one stretch, working on a project, where I worked, in the office, more than 365 days straight; more than a year. I did not take a day off: I worked every Saturday and Sunday, I worked on Christmas, Thanksgiving, New Year’s and every other holiday, no sick days. I arrived at the office before 7AM and left at 9, 10 or 11PM. Saturday mornings my boss would call me at 8AM and ask what time I was going to be in. “11”. It’s not that I didn’t do other things; I did meet up with friends, watch movies and TV and so forth. It’s just that these things were limited to a few hours a week.

I didn’t resent my boss for this; as stressful and tiring as it all was, I was into it. At that time we were doing something that felt like it could really impact the world, something that would be part of my “memory, biography and identity”. In some small sense I was part of changing the world, a tiny part, but I was there. We were part of the early adoption of the mobile phone, the rise of SMS. We were not responsible for it, but we played an important role in making SMS work in the US.

All of that work and hustle paid off; it was a key part of how I ended up in Singapore, met my wife, and so forth. So it was worth it, and I enjoyed it. But as I moved on to other jobs, the sense of changing the world wasn’t there. Maybe it was just getting older, or, perhaps it’s better to say, maybe it was youthful optimism to think I was ever helping to change the world… In any case, I decided work was not how I defined myself.

Many of the other people at this particular job worked long hours. I’m not sure anyone was as crazy as me during that project, at least not for that long, but many people worked long hours. It was a startup, and that was startup culture. But not everyone. There were a couple of people, two guys specifically, who were at their desks at 9AM and went home at 5 or, at the latest, 5:30PM. A lot of us made jokes about them. It was never mean-spirited or anything, but it was a running joke about their commitment to work.

Maybe the rest of us were all victims of American Capitalism, selling our souls to the company. Maybe it was these two guys who had it figured out. We seem to be in a bit of a backlash against the “work defines you” and “find fulfillment in your work” mentality of the 90’s and 2000’s. In any case, I think that today I’m more like those two guys than the workaholic I was back then. It’s not that I’m some sort of strict 9-to-5’er: some days I start at 8, some days I end at 7. During specific periods I work late nights, but only as the exception and when it’s justified.

I work in the modern tech-company office, meaning video calls, emails and instant messaging. Work is what happens between these interruptions, but these interruptions are also work. When I first moved to Singapore it was the height of the Blackberry craze. I had a crackberry; I fell asleep with it in my hand, waiting for the next email. And I still answer emails outside office hours, usually in a burst just before bed. But I reserve the right, and exercise the right, to not respond to email instantly, to not check and reply to office instant messaging, and to not respond to meeting invites outside of my office hours. I may attend meetings if I think there is really a reason to be holding them outside of office hours: deadlines we can’t change, or time zones meaning someone is going to be working overtime, etc. This drives the personal assistants of some of my VPs crazy; they are always chasing me by email and instant message to make sure I “accept the meeting”, but I reserve the right to not respond to or attend any meeting set outside my work hours. I can get away with this because I get my work done, I attend those things that are truly important, and I do quality work. I guess not everyone has the privilege to push back against work-creep in this way, so I’m grateful I can.

I’m rambling a bit. All of this is to say that this quote struck me; I like it. It’s similar to, but deeper than, “don’t sweat the small stuff”. I think people should take it to heart. If work makes you happy then by all means devote yourself to it, and most of us need to work to feed and clothe ourselves, but don’t forget to spend time on those things that will leave traces in your memory, biography, and identity.

Categories
ranting

2023 Recap

We are 11 days into 2024, one more trip around the sun for the Earth and one more for me —I’m 46 today. So, I thought it would be a good time to reflect on 2023.

Personally, 2023 can be summed up as “steady as she (he? they?) goes”:

Still live in Singapore – 19 years now

Still married – 17 years now

Still have two healthy growing daughters – 15 and 11

Still work at the same place – 13 years now

The one thing that was a bit of a downer was that my grandmother, my mother’s mother, died late last year. She will be missed. She was 94, she didn’t suffer from any major physical or mental health issues, and she raised six kids with her husband of more than 70 years. She had a good life.

On the positive side of things, I had two great holidays last year: I spent three weeks in Italy with Candice, Victoria and Olivia, and my mom and sister Sarah joined us. And I spent a long weekend in Vietnam with R████ to celebrate his 50th birthday, along with J████, R███’s friends from work, and another friend of R████’s, S██. Both trips were amazing.

And since this is a blog – I posted 25 times last year. A decent pace. Will try to keep that up.

Looking forward, this year will be a big —and stressful— year for Victoria and Olivia, who will take their O-Levels and PSLE respectively.

Categories
quotes ranting

The A.I. Bubble and Life-Changing Use Cases

A.I. can write book reviews no one reads of A.I. novels no one buys, generate playlists no one listens to of A.I. songs no one hears, and create A.I. images no one looks at for websites no one visits.

This seems to be the future A.I. promises. Endless content generated by robots, enjoyed by no one, clogging up everything, and wasting everyone’s time.

Lincoln Michel, in The Year that A.I. Came for Culture [newreplublic.com], published in The New Republic

Will A.I. turn the internet into a virtual version of the Earth in Wall-E? Abandoned under so much garbage we have to evacuate? Some army of little A.I.’s left behind to clean up the shit —to correct the hallucinations, remove the copyright-infringing generated images and text?

Maybe not. 2023 was the year of A.I. It’s been building for some time, with AI/ML being a term discussed endlessly at work for a few years. But in 2023, with the launch of ChatGPT (actually in November 2022, with GPT 3.5), A.I. went from something on the periphery of everyday life —something you heard about in passing news stories about tech— to an inescapable monster on the lips of everyone, from tech CEOs to cultural commentators. A.I. managed to hold our collective attention in a way few things can in our hyperactive zeitgeist.

I don’t agree with everything Lincoln Michel discusses in the New Republic article, but there was a line that caught my eye:

A.I. costs lots of money, and once investors stop subsidizing its use, A.I.—or at least quality A.I.—may prove cost-prohibitive for most tasks.

This caught my eye because I had just read an article by Cory Doctorow the other day in Locus. In Cory Doctorow: What Kind of Bubble is AI [locusmag.com], Doctorow addresses the same idea, that the cost of A.I. is its Achilles’ heel:

The universe of low-stakes, high-dollar applications for AI is so small that I can’t think of anything that belongs in it.

Basically Doctorow says that A.I. is unlikely to be able to fully replace a human for high-stakes applications —think doctors. A.I. can help speed up the process, but do you trust an A.I. to make a life-or-death medical decision without human review? And if the A.I. is augmenting the human rather than replacing them, the economics don’t seem to work given the cost of running the most advanced A.I. models.

Of course A.I. is still getting better. So maybe in a few years it will get to the point that we all trust it to make life-or-death decisions for us without a “human in the loop”. Maybe people born after the LLM revolution will grow up just trusting A.I. more than those born before.

A.I. researchers are looking at how to bring the cost down without sacrificing the quality of their models. Apple is working on models that will run on an iPhone rather than in whole data centers. After all, technology’s march to shrink size and power consumption while increasing capability is incredible:

In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1.

Roy Longbottom, in Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets [roylongbottom.org.uk]

How long will it take today’s data-center-sized A.I. models to run in my pocket or on my wrist? And by then, will the data-center-sized models be trustworthy enough to handle the high-stakes —fault-intolerant— high-dollar applications?

Who knows. I’ll leave predicting the future to Sci-Fi authors like Doctorow.

But I think Doctorow’s point is a good one: the current hype around A.I. is a bubble. What will be left when it bursts? Will, as Lincoln Michel wonders, A.I. technology plateau as so many other technology promises have, forever just a few years away?

I’m personally skeptical of truly life-changing things coming from the current bubble. Small incremental improvements, sure; endless annoying low-stakes use cases, absolutely; but truly life-changing? I remain skeptical.

The ability of ChatGPT to answer all your homework problems will force education to change, but schools adapted to pocket calculators and then graphing calculators; they will adapt to A.I. bots in kids’ pockets too.

It should also be noted that the current hype cycle is over Generative A.I., which is only one branch of A.I. Some life-changing A.I. applications are already out there, like DeepMind’s AlphaFold for predicting protein structures. I think this is the kind of A.I. that will truly change the world in the long run, helping scientists push medicine forward rather than spitting out clickbait content to game the online advertising world.

Categories
quotes ranting

The Artists and the AIs

Would I forbid the teaching (if that is the word) of my stories to computers? Not even if I could. I might as well be King Canute, forbidding the tide to come in. Or a Luddite trying to stop industrial progress by hammering a steam loom to pieces.

Stephen King, from Stephen King: My Books Were Used to Train AI [theatlantic.com] published by The Atlantic.

I read that back in August and it’s stuck with me. I’ve had several conversations about this topic with various people. There are a lot of people out there railing against the use of their work in training AI, screaming everything from plagiarism to copyright infringement.

On the one hand, I can understand the fear and anger. If you spend your time creating something, and you hope to make your living from that creation and subsequent creations that may depend on consumers liking your style… then the idea that your style can be mimicked, cheaply and quickly, by an AI is an existential threat.

But here’s the thing. Is a silicon-and-copper AI, trained on the works of any artist, living or dead, copyrighted material or not, and then asked to mimic the style of said artist, any different from a flesh-and-blood artist who spends time studying the works of another artist to mimic their style, because they like it or because people will buy works in that style? I’m not sure I see a significant difference.

I remember spending hours in the art museums in DC and London and seeing artists sitting there with their notebooks or, sometimes, even an easel set up, copying the works. They would sketch or paint, in full or in part, a copy of a work hanging in the museum. They were literally copying the work of another artist. I assume they were learning.

Of course these studies can’t be sold, I guess, if the original work is still in copyright. If the original work is out of copyright, in the public domain, then anyone can sell a copy. There is a whole industry creating copies of famous paintings, as accurately as possible, to sell to people (or companies) who want a copy. Check out this story about the village in China that turns out endless copies of famous paintings: On the Ground: Van Gogh lives here. So does Rembrandt. A Chinese village where the great masters live on (in replication) [latimes.com]. But copies for selling aren’t my point; copying for the sake of learning, of training the artist to reproduce the style or incorporate it into their own works, is the point.

There is a whole industry of artists out there creating works that are “in the style of”. See this story on the work of a Disney artist who did Star Wars in the style of Calvin and Hobbes [theverge.com]. I love it; I love both Star Wars and Calvin and Hobbes, and as this falls under the rules of parody, I’m totally down to buy a tee shirt of this. Or go search Etsy, Redbubble, or DeviantArt for “Calvin and Hobbes” for examples of artists drawing the famous duo in all manner of styles. Or search DeviantArt for “Studio Ghibli” and see the plethora of works that are either characters or scenes from Studio Ghibli films done in other styles, or the works of others done in the style of Studio Ghibli.

If our art is popular, people on the internet are going to mimic it. And a lot of them are going to sell it.

All of this is to say that training an artist —a physical human or a digital AI— on existing art is fine. Copyright rules are supposed to protect the ability of the original artist to make a living while allowing others to produce derivative works (copyright has many issues, including abuse by individuals and corporations to stifle legitimate derivative works just because they have the money to hire better lawyers, but that is beyond the scope of this post; my point is there is a system). If I ask an AI to write me a story in the style of Stephen King and it is not literally reproducing large swaths of text from King’s works, then how is that different from anyone inspired by Stephen King writing stories? Remember what Picasso (well, Banksy) said: Good artists copy. Great artists steal.


I recently saw a post of AI-created “Disney Princesses in the style of Studio Ghibli” making the rounds on social media. That’s an old trope; what’s new is that they did it with AI, the output was good, and they used that output to create a YouTube video, with an AI voice-over, that they could post and make money on through ads.

And I think this, not paying someone to create the art but going cheap with an AI so they can make more profit by pushing out more content to social media, is the real issue. In the end, commerce trumps creativity; and if you are trying to make a living from your creativity, commerce and the AI are going to destroy you.

The ease and speed with which the AI can pump out the Disney-Ghibli princesses means more profit for the person who posts the video than if they had to pay an actual artist to do the work. It’s the age-old issue of capitalism valuing art only insofar as it makes money. Profitability trumps artistic vision. Nothing new there. Artists will need to adapt; it’s never been an easy living, and throwing their sabots at the AIs won’t stop it.

The more industry is able to use AI to replace actual human creativity, the more our culture will suffer. Art, in all its forms, more than anything else, is what makes us human. Since the first humans pressed their hands to a rock and spat ochre to leave their mark, art has been a defining characteristic of humanity. It would be a great pity if AI drove human artists to extinction.

I have purchased a fair bit of actual art from internet artists over the years. I have a cabinet full of posters, books and figurines by various creators: A Lesson is Learned but the Damage is Irreversible [alessonislearned.com], Josh Cooley and Gaping Void [gapingvoid.com] (which the artist seems to have turned into a successful consulting business, but they were selling prints of their art back in 2010). Art from these three is in the featured image of this post; long ago I downloaded images of what I bought to create a layout for my wall, but I never got them up. I’ve also supported Dresden Codak [dresdencodak.com], Little Gamers [little-gamers.com], CHAKAL666, Steve Bailik, and many more. Most seem to be offline these days. I wish I had room to display all the art I have, but most of it is in boxes or poster tubes.

Would I buy AI art? I don’t know. Right now I don’t see anything I think is so amazing, and honestly, the money is going to whom? Most AI art seems to be clickbait. Prompt engineering is a skill; just go and take a look at the prompts being used to generate images with Midjourney or DALL-E. The people generating good AI art are working at it, though it might not be comparable to the work of an artist who spent years developing the muscle memory and eye for painting.

Anyway, support culture, support artists. Buy art from people. But, avoid things that are obvious copyright violations. Parody is your friend.

Categories
quotes ranting

Computers are only capable of calculation, not judgement

[J]udgment involves choices that are guided by values. These values are acquired through the course of our life experience and are necessarily qualitative: they cannot be captured in code. Calculation, by contrast, is quantitative. It uses a technical calculus to arrive at a decision. Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history – they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious – and so do not have the basis from which to form values.

Ben Tarnoff, in ‘A certain danger lurks there’: how the inventor of the first chat bot turned against AI [theguardian.com], published in The Guardian

A timely article in The Guardian about the life and work of Joseph Weizenbaum, the author of the original Eliza program. Eliza was the first AI to gain prominence, if not the first AI outright, and despite how simple a program Eliza was when we look back from the likes of ChatGPT, it managed to show a fundamental issue: anthropomorphization.

The quote above is part of the article’s summary of Weizenbaum’s book Computer Power and Human Reason. A key tenet of the book is Weizenbaum’s belief that humans are too quick to anthropomorphize AI, to assign human characteristics, especially intelligence, to a mere program, a deterministic bit of code written to mimic intelligence. Weizenbaum’s argument is that a program, no matter how clever the programmer, no matter how good the program, can never truly be human. Humans have experiences that are qualitative, an aspect that can never be learned from mere information by a computer that is quantitative in its very nature. You can teach a computer the definition of love or heartbreak, but a computer can never experience them. Weizenbaum argues that only a human can, and should, make judgements, which necessarily require qualitative experience, while an AI can only ever make computations. He argues that because of this fundamental difference, there are things that computers should not be allowed to do. Or, as Weizenbaum puts it in the introduction to Computer Power and Human Reason, there are limits to what computers ought to be put to do.

The future that Weizenbaum feared, where we outsource judgement to AI, has already come to pass in some specific instances. The Guardian article links to a Brookings Institution article [Brookings.edu] on the “widespread use [of algorithmic tools] across the criminal justice system today”. The Brookings Institution describes how computer programs are used to assign risk scores to inmates up for parole, or identify likely crime locations —hello Minority Report— and, of course, the use of facial recognition. While there is still a “human in the loop” in the decision-making process today, it’s easy to imagine people, including judges or police, just trusting the machine and in effect outsourcing their judgement to the AI rather than just the computational tasks.

The Guardian article is worth the read; it’s more a character study of Weizenbaum and his trajectory from creating Eliza to arguing about the potential pitfalls of AI. The promise of AI faded in Weizenbaum’s lifetime, as it became apparent that the super-intelligent programs promised were out of reach. Both the public and the academic world —and those funding them, like the military-industrial complex— moved on. It’s worth revisiting Weizenbaum’s work in our new AI hype cycle.

Reading the article reminded me of a paper I wrote in college. AI was one of the subjects I focused on as part of my computer science degree. I built neural networks, language models, and much more. I experienced firsthand how easy it was to make something that, at first glance, seemed to embody some intelligence, only for it to become quickly apparent that this was an illusion and that actual intelligence was far away, seemingly unachievable with the resources of the day. But that was the early days of the internet; more than two decades later, the world is different. In the midst of my studies I wrote a paper I titled “Will I Dream”, referencing the question HAL 9000 asks at the end of 2010: Odyssey Two. It was not really a technical paper, more a historical discussion of the dream of achieving “human-like intelligence” in a program. I covered ELIZA and several of her descendants: PARRY, the paranoid program; SAM; and others.

I don’t have an electronic copy of the paper anymore; sadly it was on a hard drive that died long ago without a backup. I do have a single printout of it. Maybe I’ll transcribe it and post it here, as outdated as it is.

Looking back at my paper, I’m reminded how long the AI journey has been, how the use cases I covered in 2000, already old then, are new again. A good example is FRUMP, a program written in the late 1970’s to read and summarize news stories, similar to how GPT can be used today to summarize a website. The old goals are the new goals, and the hype is reaching a fever pitch again. Will we be disappointed again? Will AI change the world in some fundamental way, or will its promise fade? Will it be a tool but never the craftsman? Or will it take over all our jobs? Is AI an existential threat or just an amusement, a distraction from the actual existential threats we refuse to face? Meta released a music-making AI recently; maybe it can generate a soundtrack for us while the world burns.