Categories
quotes

Odds & Ends

A collection of quotes I’ve stumbled across and wanted to write something about or share, but never finished a proper post. Collected here with half-finished commentary, or none at all…


I have the most devoted and ardent of friends, and affectionate relatives — and of enemies I really make no account.

Walt Whitman, in a letter to a German friend on his sixty-fourth birthday

Words to live by? Contrast with “keep your friends close but your enemies closer,” from The Godfather Part II.


Fluidity of memory and a capacity to forget is perhaps the most haunting trait of our species.

Wade Davis in The Unraveling of America [rollingstone.com], published in Rolling Stone

The whole article is worth reading, depressing as it is. But if Americans don’t start to internalize the decline of America and its ideals, replaced as they have become by quick and catchy political slogans and dogma packaged as identity, then nothing will change and America is doomed as an idea, if not as a country.

Someone once told me that the founding documents of America, the Declaration of Independence, the Federalist Papers, and the Constitution with the Bill of Rights, are some of the most enlightened and important documents in the history of humanity and human thought. A bit of hyperbole, but point taken. He also said that America’s history is an ongoing experiment to see if humanity can live up to such lofty ideals. It took us “four score and seven years” to even begin to live up to the first line of the Declaration, that “all men are created equal,” and as we approach 250 years we are still struggling with that one.

Progress has been slow and fitful, but there has been progress. But somehow there is always a significant number of Americans who fall back on their tribalism and unite in their fear of the other. It’s a sad irony that those who publicly shout about the greatness and sanctity of the Constitution are the ones who seem to be the furthest from living up to the ideals on which it is based.


In competitive athletics you play the best you can on that day, and sometimes you lose and sometimes you win. It is best to not be overly attached to the final result if you want to play your best…

Richard Geib, posted in “Mushin” – A Legacy to My Daughter [rjgeib.com] on rjgeib.com

[W]e judge the sophistication of our peers by how sophisticated they are with use of language. Your smartest friends can use deadpan sarcasm, and your smartest friends can get it when you’re deadpanning sarcasm

Stanley Dubinsky, quoted in “The Dad-Joke Doctrine” [theatlantic.com], published by The Atlantic

The article is from 2018 but it showed up in a list of Father’s Day related articles this year. It’s interesting, offering some theories on why the dad joke exists and how there are analogous ‘traditions’ in some other cultures and languages. Serious lack of dad jokes in the article though. And the few they do give just reinforce the “dad-jokes = bad-jokes” stereotype.

Anyway, it’s not the dad jokes that this quote is about; it’s about the use of complex linguistics like sarcasm and puns. I love sarcasm and puns. But living in Singapore I think a lot of my sarcasm and puns go over most people’s heads. To the point that I use a lot less of them than I used to, though sometimes I can’t help myself. The immediate response to what someone says just comes out without conscious thought; it has to for it to work. If you wait five minutes to make a sarcastic remark or come up with a pun as part of your response, then there is no point.

The issue is mostly that English is not most people’s first or primary language. Even when it is the primary language for a lot of Singaporeans, taught in school as the main language, the proficiency and vocabulary are not strong enough to handle more complex puns and sarcasm. There is just too high a proportion of non-native speakers in daily life; daily interactions demand simpler language. People just don’t use puns and sarcasm so much.

I feel like many snappy retorts and comebacks are lost on most people I speak with; they just don’t immediately get the joke.


My mother is very religious so I’m very much aware of the attitude that these are the last days. But, let’s face it, no matter where we have been in history, whoever has existed has been living in the last days… their own. When each of us dies the world ends for us.

Octavia E. Butler, in Octavia E. Butler: The Last Interview and Other Conversations

I’ve never read Octavia Butler, always on the list, never at the top. But I like this quote I found in a post on The Marginalian [themarginalian.org].


Expressing oneself in the world and creativity are the same. It may not be possible to know who you are without somehow expressing it.

Rick Rubin, in The Creative Act

I’ve always had the desire to create, but never the drive to truly create. I play at it, but I don’t put in the hours to be a truly creative person.

For example, I’ve been doing photography for more than 25 years, but I’ve never tried to truly learn to be a photographer. I never took a class or read a book. I’ve googled my way to solving specific problems or copying a cool effect I see. I’ve absorbed a lot of lingo and I can control my camera fairly well. Even enough to master full manual mode.

I use Lightroom to clean up my photos. My subscription comes with Photoshop, but I’ve rarely opened it. The closest I get to creativity with images is using Adobe Express to make the featured images on my posts. I am happy with most of those, especially the ones I did for my series of posts covering my best mobile photos [confusion.cc] year by year since 2004.

Categories
quotes ranting

The Artists and the AIs

Would I forbid the teaching (if that is the word) of my stories to computers? Not even if I could. I might as well be King Canute, forbidding the tide to come in. Or a Luddite trying to stop industrial progress by hammering a steam loom to pieces.

Stephen King, from Stephen King: My Books Were Used to Train AI [theatlantic.com] published by The Atlantic.

I read that back in August and it’s stuck with me. I’ve had several conversations about this topic with various people. There are a lot of people out there who are railing against the use of their work in training AI. Screaming everything from plagiarism to copyright infringement.

On the one hand I can understand the fear and anger. If you spend your time creating something and hope to make your living from that creation, and from subsequent creations that may depend on consumers liking your style… then the idea that your style can be mimicked, cheaply and quickly, by an AI is an existential threat.

But, here’s the thing. Is a silicon-and-copper AI, trained on the works of any artist, living or dead, copyrighted material or not, and then asked to mimic the style of said artist, any different from a flesh-and-blood artist who spends time studying the works of another artist to mimic their style, because they like it or because people will buy works in that style? I’m not sure I see a significant difference.

I remember spending hours in the art museums in DC and London and seeing artists sitting there with their notebooks or, sometimes, even an easel set up, copying the works. They would sketch or paint, in full or in part, a copy of a work hanging in the museum. They were literally copying the work of another artist. I assume they were learning.

Of course these studies can’t be sold, I guess, if the original work is still in copyright. If the original work is out of copyright, in the public domain, then anyone can sell a copy. There is a whole industry creating copies of famous paintings, as accurately as possible, to sell to people (or companies) who want a copy. Check out this story about the village in China that turns out endless copies of famous paintings: On the Ground: Van Gogh lives here. So does Rembrandt. A Chinese village where the great masters live on (in replication) [latimes.com]. But copies for selling aren’t my point; copying for the sake of learning, of training the artist to be able to reproduce the style or incorporate the style into their own works, is the point.

There is a whole industry of artists out there creating works that are “in the style of”. See this story on the work of a Disney artist who did Star Wars in the style of Calvin and Hobbes [theverge.com]. I love it; I love both Star Wars and Calvin and Hobbes, and as this falls under the rules of parody, I’m totally down to buy a tee shirt of this. Or, go search Etsy, Redbubble, or DeviantArt for “Calvin and Hobbes” for examples of artists drawing the famous duo in all manner of styles. Or search DeviantArt for “Studio Ghibli” and see the plethora of works that are either characters or scenes from Studio Ghibli productions done in other styles, or the works of others done in the style of Studio Ghibli.

If your art is popular, people on the internet are going to mimic it. And a lot of them are going to sell it.

All of this is to say that training an artist —a physical human or a digital AI— on existing art is fine. Copyright rules are supposed to protect the ability of the original artist to make a living while allowing others to produce derivative works (copyright has many issues, including abuse by individuals and corporations to stifle legitimate derivative works just because they have the money to hire better lawyers, but that is beyond the scope of this post; my point is there is a system). If I ask an AI to write me a story in the style of Stephen King and it is not literally reproducing large swaths of text from King’s works, then how is that different from anyone inspired by Stephen King writing stories? Remember what Picasso (or was it Banksy?) said: Good artists copy. Great artists steal.


I recently saw a post of AI-created “Disney Princesses in the style of Studio Ghibli” making the rounds on social media. That’s an old trope; what’s new is that they did it with AI, the output was good, and they used that output to create a YouTube video, with an AI voice-over, that they could post and make ad money on.

And I think this, not paying someone to create the art but going cheap with an AI so they can make more profit by pushing out more content to social media, is the real issue. In the end, commerce trumps creativity; and if you are trying to make a living from your creativity, commerce and AI are going to destroy you.

The ease and speed with which the AI can pump out the Disney-Ghibli princesses means more profit for the person who posts the video than if they had to pay an actual artist to do the work. It’s the age-old issue of capitalism valuing art only insofar as it makes money. Profitability trumps artistic vision. Nothing new there. Artists will need to adapt; it’s never been an easy living, and throwing their sabots at the AIs won’t stop it.

The more industry is able to use AI to replace actual human creativity the more our culture will suffer. Art, in all its forms, more than anything else is what makes us human. Since the first humans pressed their hands to a rock and spit ochre to leave their mark art has been a defining characteristic of humanity. It would be a great pity if the AI drove human artists to extinction.

I have purchased a fair bit of actual art from internet artists over the years. I have a cabinet full of posters, books, and figurines by various creators: A Lesson is Learned but the Damage is Irreversible [alessonislearned.com], Josh Cooley, and Gaping Void [gapingvoid.com] (which the artist seems to have turned into a successful consulting business, but they were selling prints of their art back in 2010). Art from these three is in the featured image of this post; long ago I downloaded images of what I bought to create a layout for my wall, but I never got them up. I’ve also supported Dresden Codak [dresdencodak.com], Little Gamers [little-gamers.com], CHAKAL666, Steve Bailik, and many more. Most seem to be offline these days. I wish I had room to display all the art I have, but most of it is in boxes or poster tubes.

Would I buy AI art? I don’t know. Right now I don’t see anything that I think is so amazing, and honestly, who is the money going to? Most AI art seems to be clickbait. Prompt engineering is a skill; just go take a look at the prompts being used to generate images with Midjourney or DALL-E. The people generating good AI art are working at it, though it might not be comparable to the work of an artist who spent years developing the muscle memory and eye for painting.

Anyway, support culture, support artists. Buy art from people. But, avoid things that are obvious copyright violations. Parody is your friend.

Categories
quotes

Capable of Monstrous Acts

The most discomposing thing about people capable of monstrous acts is that they too enjoy art, they too read to their children, they too can be moved to tears by music.

Maria Popova, in Terror, Tenderness, and the Paradoxes of Human Nature: How a Marmoset Saved Leonard and Virginia Woolf’s Lives from the Nazis [themarginalian.org] on The Marginalian

Categories
quotes ranting

Computers are only capable of calculation, not judgement

[J]udgment involves choices that are guided by values. These values are acquired through the course of our life experience and are necessarily qualitative: they cannot be captured in code. Calculation, by contrast, is quantitative. It uses a technical calculus to arrive at a decision. Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history – they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious – and so do not have the basis from which to form values.

Ben Tarnoff, in ‘A certain danger lurks there’: how the inventor of the first chat bot turned against AI [theguardian.com], published in The Guardian

A timely article in The Guardian about the life and work of Joseph Weizenbaum, the author of the original Eliza program. Eliza was the first AI to gain prominence, if not the first AI outright, and despite how simple a program Eliza was, looking back from the likes of ChatGPT, it managed to show a fundamental issue: anthropomorphization.

The quote above is part of the article’s summary of Weizenbaum’s book Computer Power and Human Reason. A key tenet of the book is Weizenbaum’s belief that humans are too quick to anthropomorphize AI, to assign human characteristics, especially intelligence, to a mere program, a deterministic bit of code written to mimic intelligence. Weizenbaum’s argument is that a program, no matter how clever the programmer, no matter how good the program, can never truly be human. Humans have experiences that are qualitative, an aspect that can never be learned from mere information by a computer that is quantitative in its very nature. You can teach a computer the definition of love or heartbreak, but a computer can never experience them. Weizenbaum argues that only a human can, and should, make judgments, which necessarily require qualitative experience, while an AI can only ever make computations. He argues that because of this fundamental difference, there are things that computers should not be allowed to do. Or, as Weizenbaum puts it in the introduction to Computer Power and Human Reason, there are limits to the tasks computers ought to be put to.

The future that Weizenbaum feared, where we outsource judgment to AI, has already come to pass in some specific instances. The Guardian article links to a Brookings Institution article [brookings.edu] on the “widespread use [of algorithmic tools] across the criminal justice system today”. The Brookings Institution describes how computer programs are used to assign risk scores to inmates up for parole, or identify likely crime locations —hello Minority Report— and, of course, the use of facial recognition. While there is still a “human in the loop” in the decision-making process today, it’s easy to imagine people, including judges or police, just trusting the machine and in effect outsourcing their judgment to the AI rather than just the computational tasks.

The Guardian article is worth the read; it’s more a character study of Weizenbaum and his trajectory from creating Eliza to arguing about the potential pitfalls of AI. The promise of AI faded in Weizenbaum’s lifetime, as it became apparent that the super-intelligent programs promised were out of reach. Both the public and the academic world —and those funding them, like the military-industrial complex— moved on. It’s worth revisiting Weizenbaum’s work in our new hype cycle for AI.

Reading the article reminded me of a paper I wrote in college. AI was one of the subjects I focused on as part of my computer science degree. I built neural networks, language models, and much more. I experienced firsthand how easy it was to make something that, at first glance, seemed to embody some intelligence, only for it to become quickly apparent that this was an illusion and that actual intelligence was far away, seemingly unachievable on the resources of the day. But that was the early days of the internet; more than two decades later the world is different. In the midst of my studies I wrote a paper I titled “Will I Dream?”, referencing the question that HAL 9000 asked at the end of 2010: Odyssey Two. It was not really a technical paper, more a historical discussion of the dream of achieving “human like intelligence” in a program. I covered ELIZA and several of her descendants: PARRY, the paranoid program, SAM, and others.

I don’t have an electronic copy of the paper anymore; sadly it was on a hard drive that died long ago without a backup. I do have a single printout of it. Maybe I’ll transcribe it and post it here, as outdated as it is.

Looking back at my paper I’m reminded how long the AI journey has been, how the use cases that I covered in 2000, already old then, are new again. A good example is FRUMP, a program written in the late 1970s to read and summarize news stories, similar to how GPT can be used today to summarize a website. The old goals are the new goals and the hype is reaching a fever pitch again. Will we be disappointed again? Will AI change the world in some fundamental way or will its promise fade? Will it be a tool but never the craftsman? Or will it take over all our jobs? Is AI an existential threat or just an amusement, a distraction from the actual existential threats we refuse to face? Meta released a music-making AI recently; maybe it can generate a soundtrack for us while the world burns.

Categories
quotes ranting

AI and Digital Mad Cow Disease

Machine entities who have as much, or more, intelligence as human beings and who have the same emotional potentialities in their personalities as human beings… Most computer theorists believe that once you have a computer which is more intelligent than man and capable of learning by experience, it’s inevitable that it will develop an equivalent range of emotional reactions — fear, love, hate, envy, etc.

Stanley Kubrick, quoted in Stanley Kubrick’s astute prediction regarding the future of AI [faroutmagazine.co.uk], published on Far Out magazine

If Stanley Kubrick was “right” about AI… and they are referencing 2001 (not A.I. Artificial Intelligence), then please let’s stop AI now. It didn’t end well for Dave or Frank or anyone else on the Discovery.

But I think we are a long way from AIs “learning by experience”. Today there is an iterative process to training the current batch of AIs, but it’s humans who are still learning what needs to be improved and making adjustments to the system based on that experience. It is, in fact, what and how they learn that I think is going to be the problem. Maybe in fixing that we will make a true AI like all those AIs in sci-fi, one that can truly learn from experience, and then it may well be that it goes down a dark path just like HAL. Like Ultron, like Skynet, and many more.

Today, we do seem to be running full steam into a risk of a different kind; not a true thinking machine that decides to destroy us, but a downward spiral where we are undone by AIs that are not actually intelligent, but are tricking us with what amounts to mind-reading parlor tricks turned up to 11 with massive amounts of computing power. The AIs that are in the news are just mentalists predicting that you are thinking of a gray elephant from Denmark. They don’t make their predictions via a carefully constructed series of questions designed to lead almost everyone to think of a gray elephant from Denmark, but it’s the same principle, the same toolset, statistics, that allows them to answer any question you ask. They only decide, based on their statistics, what the next letter, or word, or phrase —the next ‘token’— of the output should be. And of course the statistics are very complex: ‘h’ does not always follow ‘t’; given the context of the input and where the AI is in the output, sometimes ‘e’, then ‘x’, then ‘t’ could follow ‘t’.
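The statistical next-token idea can be sketched in a few lines. This is a toy of my own making, a character-level model nothing like a real LLM, but the principle, counting what tends to follow what and picking the most likely continuation, is the same:

```python
from collections import Counter, defaultdict

# A tiny "training corpus"; real models ingest a scrape of the internet.
corpus = "the theme of the thread is the thing"

# Count how often each character follows each other character.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_char(c):
    """Predict the statistically most likely character to follow c."""
    return follows[c].most_common(1)[0][0]

print(next_char("t"))  # 'h': in this corpus 't' is always followed by 'h'
```

Scale the counting up from character pairs to long token contexts, add a few billion parameters of compression, and you have the mentalist in a nutshell: prediction, not understanding.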

It’s how the statistics the AI relies on are created, and how we are starting to use these AIs, that creates the issue: feedback loops. The current batch of headline-stealing AIs, based on Large Language Models (LLMs), are trained on content scraped from the internet, the same way that Google or Bing scrape the internet and build ‘indexes’ that allow you to find the particular needle you are looking for. By sucking down all the text they can from the internet and processing it to build their indexes, search engines can quickly match your input in the search box and give you results. Over the years much work has gone into adding things to the ‘algorithm’ that creates the index and processes your input to make the results better: fixing spelling mistakes and typos, looking for related terms, ranking results based on how many other sites link to the result, etc. AI is doing a similar thing, taking your input and returning results, but rather than returning a web page, the AIs are constructing ‘new’ output based on the statistics they calculated from their web scraping. The fatal feedback will come as people —and companies— start to use AIs to generate the very content that makes up the internet. The AIs will start to eat themselves and their kids. AI cannibalism.
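The search-engine side of that comparison is easy to sketch too. A toy inverted index (my own simplification, with made-up page names) just maps each word back to the pages containing it; a “search” returns existing pages rather than constructing anything new, which is exactly why search doesn’t have the same cannibalism problem:

```python
# A toy corpus of "pages"; the names and text are made up for illustration.
pages = {
    "page1": "cats and dogs",
    "page2": "dogs chase cats",
    "page3": "birds sing",
}

# Build the inverted index: word -> set of pages that contain it.
index = {}
for name, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(name)

# A "search" is just a lookup; the result is an existing page, not new text.
print(sorted(index["dogs"]))  # ['page1', 'page2']
```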

People already have a massive issue identifying correct information on the internet. Social media is a dumpster fire of self-proclaimed experts who are, at best, misguided and delusional, and at worst, deceitful and nefarious. LLMs trained on this poor-quality data may learn the internet’s colloquial syntax and vocabulary; they may be able to speak well and sound like they know what they are talking about, but they are not learning any other subject. They are not able to tell right from wrong, incorrect from correct; they didn’t study medicine or history, only the structure of language on the internet. The vast size of the models, the volume of training data, and the clever tricks of the researchers and developers impress us, but it’s just a better mentalist. LLMs have only a statistical reasoning of what to say, not any understanding or knowledge of why it should be said. Ironically, what the AIs actually lack is intelligence.

This is quickly becoming a problem as people and companies embrace LLMs to generate content faster and cheaper, to drive traffic to their websites and earn advertising money. Without human experts in the loop to review and revise the AI’s output you end up with hallucinations, or “a confident response by an AI that does not seem to be justified by its training data,” as Wikipedia [wikipedia.org] explains. Wikipedia also gives an example: “a hallucinating chatbot might, when asked to generate a financial report for Tesla, falsely state that Tesla’s revenue was $13.6 billion (or some other random number apparently “plucked from thin air”).”

Again, the problem is that the LLM lacks any knowledge or understanding of the subject; it can’t analyze the question or its own answer except from the point of view that, statistically, based on its training data, it should output the tokens “13.6 billion” in this situation. It’s amazing how much they do get right, how lucid they seem. This is down to their size and complexity. But if people blindly accept the AI’s output and post these hallucinations to the internet —even to point out they are wrong, as I’m doing— then the next batch of training data will be polluted with yet more inaccurate data, and over time the whole model may start to be overwhelmed by some sort of delirium, a digital mad cow disease.

Mad Cow Disease [wikipedia.org] was caused by (or really spread by) feeding cows the remains of their own dead. To save money, to reuse the parts of the cow with no other use, the remains were ground up and added to cow feed. This allowed, it seems, a random harmful mutation in a prion, maybe only in a single cow, to be ingested by more cows, who were in turn ground up, and the cycle repeated and the disease spread. Now the LLMs will be fed their own output and the output of competing AIs in the next training cycle. So a hallucination from one AI makes its way, like a defective prion, into the model of another AI to be regurgitated, and the cycle repeats, allowing this digital mad cow disease to spread. And like real-world mad cow disease, humans who digest this infected content may come down with a digital analog of Variant Creutzfeldt–Jakob Disease (vCJD) [wikipedia.org]. Let’s hope digital vCJD is not irreversible and 100% fatal like its namesake.
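The feedback loop itself is easy to simulate in miniature. This is a deliberately crude sketch of my own, not how real LLM training works: each “generation” is trained only on text sampled from the previous generation, so any word that happens not to be sampled is lost forever, and the vocabulary steadily shrinks, the statistical analog of the prion cycle:

```python
import random

random.seed(0)

# Generation 0: a "human" corpus drawing on a rich vocabulary of 100 words.
vocab = list(range(100))
corpus = [random.choice(vocab) for _ in range(200)]

sizes = []
for generation in range(10):
    # "Train" on the corpus: the model only ever learns words it has seen.
    learned = sorted(set(corpus))
    sizes.append(len(learned))
    # The next corpus is generated entirely by the trained model.
    corpus = [random.choice(learned) for _ in range(200)]

print(sizes)  # the vocabulary never grows, and keeps dropping
```

Run it and the sizes only go down, generation after generation; nothing in the loop can ever reintroduce a lost word, just as nothing in a scrape of AI-generated text can reintroduce the human knowledge it never contained.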

Maybe the people who design these systems will figure out how to inject some sort of fact-checking and guidelines. But who decides what is right when we can’t even agree on facts? The internet is filled with bigots and conspiracy theorists, well-meaning idiots and scam artists, trolls and shills. How will AIs be any different? Maybe we will find a way to regulate things so AIs can’t be fed their own verbal diarrhea, but who decides what is right and wrong? Who decides what is morally or socially acceptable?

This post is a good example of the problem, should LLMs use it as training data. It’s pure opinion by someone unqualified; I studied the principles of AI and built neural networks back in college, but I’m not an expert. The content of this post could be useful to an AI answering a question about the general public’s perception of the problem, or their concerns, but it should not be taken as factual or confused with expert opinion; it should not be used to answer a question like “what are the risks of training AIs on data scraped from the internet?” How does an AI trained on the internet know the difference? We seem to have jumped out of the plane without checking if we packed the parachute.

One note on the article where I got the quote: I take issue with the fact that it talks about all the effort Kubrick put in to make 2001 as accurate as possible, speaking with experts for “countless hours”, but it fails to mention that 2001 was co-written with an actual scientist – Arthur C. Clarke.