Categories
photography travel

Paris, France, November-December 2022

I’m setting a new record for delay in posting my travel photos. Normally it takes me six months; this time it’s closer to ten. In my defense, I had to replace my external storage and, twice, send my Mac in for repair. But, anyway. Yea, I went to Paris last year with the family.

Since my first visit, more than twenty years ago, before this blog existed, I have loved Paris. I’m not the first to love it, of course, but I really feel a certain je ne sais quoi walking the streets, sitting in the cafes or visiting the museums. This feeling survives the rude people, the stink of the Metro, the homeless, and the bitter cold we had on this trip. London, New York and Tokyo are the only other cities I have spent a significant amount of time in that have a similar sort of presence and mystique in my mind.

I’ve seen just about everything there is to see in Paris over my many visits, but this was my daughters’ first trip. So, we marched our way through all of the sites I think are worth it:

Arc de Triomphe

IMG_0884

The Arc [wikipedia.org] was our first stop. Seemingly every flight from Singapore to Europe lands at six AM, and the hotels don’t want you until two or three in the afternoon. So after dropping our bags at our hotel in the Latin Quarter we hiked down to and across the river, and then up the Avenue des Champs-Élysées to the Arc. It was cold and windy and the sky was overcast, perils of winter travel, but the Arc is as good an introduction to Paris as any: a Napoleonic monument seated at the intersection of grand boulevards, with views of the Eiffel Tower, Sacré-Cœur atop Montmartre, and the hideous Tour Montparnasse. Notre Dame was hidden by the renovation works.

Tour Eiffel

You have to go; the only reason not to visit is to be contrarian. The Eiffel Tower [wikipedia.org] is Paris, despite the fact that Parisians hated it when it was first built. No one wanted to climb the stairs. There is a new (to me) glass wall that goes all the way around the base of the tower so they can herd people through security screening. It ruins all the photos. C’est la vie.

Musée du Louvre

IMG_1809

We went to the Louvre [wikipedia.org], twice in fact. It’s much too big for one visit. We got really lucky on the first visit: it was a Friday, when they have extended hours, and when we made our way to the Mona Lisa [wikipedia.org] there were surprisingly few people. Even with two visits the Louvre is overwhelming. We checked off the majors: Mona Lisa, Venus de Milo [wikipedia.org], Nike of Samothrace [wikipedia.org], Liberty Leading the People [wikipedia.org], the apartments of Napoleon III, and much more. So much more…

Musée de Cluny/Musée du Moyen Âge

The Moyen Âge is a smaller museum, less crowded. You feel like you can take your time. But really you go for one thing: ze tapestries. The Lady and The Unicorn [wikipedia.org] is a set of six large tapestries that are always linked in my mind with the opening titles of The Last Unicorn [wikipedia.org], the 1982 Rankin/Bass animated movie. Though younger people may associate them with the Gryffindor common room in the Harry Potter movies.

Musée d’Orsay

IMG_3296

The Orsay [wikipedia.org] is my favorite museum in Paris. I love the collection, focusing on art from the late 19th and early 20th century. There is something about the transition from classical painting and sculpture to fully modern art that just works for me. I love the Impressionists and Post-Impressionists, and the Orsay has a huge collection: Monet [wikipedia.org], Van Gogh [wikipedia.org], Cézanne [wikipedia.org], Degas [wikipedia.org], and many more. I also love the sculptures of Rodin [wikipedia.org], his student Camille Claudel [wikipedia.org], and Carpeaux [wikipedia.org]. The Orsay is the right size: not as massive as the Louvre, not so small as the Cluny. A long lazy afternoon wandering among great art. This time there was an exhibit on the works of Edvard Munch [wikipedia.org], and we got to see an early hand-colored lithograph of The Scream [wikipedia.org].

Musée de l’Orangerie

The main attraction in the Orangerie [wikipedia.org] is the set of eight massive paintings by Claude Monet in his Water Lilies [wikipedia.org] series. If you’ve never seen these, or the other large-format ones that hang in other museums, you will be shocked at how large they are. While many of the Water Lilies in museums like the Orsay are ‘normal’ size, typically around 1 meter by 1 meter or so, the eight that hang in the Orangerie are two meters high and range in width from six to seventeen meters. The museum also houses many other Impressionist and Post-Impressionist paintings.

Musée Rodin

IMG_1286

Rodin [wikipedia.org] is my favorite sculptor (Dalí comes close), and the Rodin Museum [wikipedia.org] in Paris is a wonderful place: a quiet garden and manor house that once housed Rodin’s studio, set not too far from the Eiffel Tower. It’s a great escape from the city without leaving the city. You can spend hours wandering around the garden and inside the house, among hundreds of Rodin’s works, including The Thinker [wikipedia.org] and The Kiss [wikipedia.org], as well as a cast of the full The Gates of Hell [wikipedia.org] (both The Kiss and The Thinker were originally part of the Gates).

Espace Dalí

Dalí Paris [wikipedia.org] is a small private museum in Montmartre devoted to Dalí. There are a number of casts of various images from his surrealist paintings —melting clocks from The Persistence of Memory [wikipedia.org], a long-legged Space Elephant from The Elephants, Alice jumping rope, and more. It’s small, but if you like Dalí it’s a great stop.

Sainte-Chapelle

IMG_1436

Standing in Sainte-Chapelle [wikipedia.org] on a sunny day, the stained glass windows filling the room with all the colors of the rainbow, is one of the most peaceful and beautiful experiences you can have. Of all the churches and other places filled with stained glass I’ve visited around Europe (and elsewhere), there is nothing that compares with Sainte-Chapelle.

Sacré-Cœur

Sacré-Cœur [wikipedia.org] is a beautiful building, a mix of muted Orthodox church elements —massive ceiling mosaics and almost-onion domes— and classical revival styles. There is nothing Gothic about it, and while it is pretty, it doesn’t do it for me; I prefer the Gothic architecture of Notre Dame. The best part of visiting Sacré-Cœur is going up to the dome and getting the view of Paris from the very top of Montmartre.

Shakespeare & Co

IMG_0998

There are other, larger bookstores just around the corner from Shakespeare & Co [wikipedia.org] —though sadly Gibert Jeune closed due to the COVID-19 pandemic— but Shakespeare & Co’s focus on English books means I can actually read a book I buy there. And it just feels more cosy. The shops along Boulevard Saint-Michel are massive: Gibert Jeune was six stores and Gibert Joseph stretches across multiple locations. Shakespeare & Co, by contrast, is so snug you can barely turn around in the used book shop. The new book shop is bigger, but still just five irregularly shaped rooms packed with shelves of books. I know this is not the original Shakespeare & Co that published Ulysses, that one closed during the Nazi occupation, but it has the ambiance. I picked up a used copy of Chaucer’s The Pardoner’s Tale, edited by Nevill Coghill and Christopher Tolkien.

Catacombes de Paris

“I see dead people”… Well, their bones. Bones everywhere. Millions of bones. I’ve been to the Catacombs of Paris [wikipedia.org] before, twice. This trip was all about taking my teenage daughter. She likes horror movies, so this was right up her alley. My wife and younger daughter declined to join us; they went shopping and dining instead.

Palais Garnier

IMG_2149

The Palais Garnier, or Paris Opera House, is the very definition of over-the-top architecture. Wikipedia says it’s Second Empire or Napoleon III style, which is not technically Baroque but includes many elements of Baroque as a revival… The basic principle seems to be: leave no surface unadorned. Statues, carvings, mosaics, gilded mirrors… it’s all there. And somehow it works. Even if you don’t appreciate the aesthetic, the exterior and, especially, the interior of the Palais Garnier are awesome and worth the visit. And don’t forget this is where the Phantom of the Opera lived.


In addition to all the sights we visited in the city, we made a few trips out to the surrounding areas. We went to Versailles [wikipedia.org] to see the opulent palace of the Sun King [wikipedia.org] and Marie-Antoinette [wikipedia.org]. Actually, we had to make two trips: the first day we went, we arrived late and the last tickets for the (short) day were sold out. Fortunately we had a few free days, so we were able to get tickets online for one of those days towards the end of our trip.

We also visited Chartres to see the cathedral [wikipedia.org]. The idea was to make up for not getting to see Notre Dame, which was still under renovation and repair after the fire. You can’t go to France and not see a proper Gothic cathedral. Unfortunately, Chartres was undergoing restoration and cleaning of its own, and the tour of the tower and upper floors was closed. C’est la vie. So we had to settle for the main floor and outside views.

So, yea, a long, packed trip to Paris. We marched back and forth across the city, averaging 16 kilometers a day. We rode the Metro nearly every day, using the old-school little blue tickets and enjoying the, um, unique smell of the Paris Metro while navigating the maze-like passages and stairways and braving the overly aggressive doors on the older trains. We ate fresh baguettes and croissants from boulangeries (I will fight you for the last baguette from Maison d’Isabelle in Place Maubert!), and Comté cheese, yogurt, raspberries and apples from the markets for breakfast. We wandered the Latin Quarter and Montmartre. I love Paris.

You can see the full Paris, France, November-December 2022 [flickr.com] photoset on Flickr.

Categories
quotes

Capable of Monstrous Acts

The most discomposing thing about people capable of monstrous acts is that they too enjoy art, they too read to their children, they too can be moved to tears by music.

Maria Popova, in Terror, Tenderness, and the Paradoxes of Human Nature: How a Marmoset Saved Leonard and Virginia Woolf’s Lives from the Nazis [themarginalian.org] on The Marginalian

Categories
quotes ranting

Computers are only capable of calculation, not judgement

[J]udgment involves choices that are guided by values. These values are acquired through the course of our life experience and are necessarily qualitative: they cannot be captured in code. Calculation, by contrast, is quantitative. It uses a technical calculus to arrive at a decision. Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history – they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious – and so do not have the basis from which to form values.

Ben Tarnoff, in ‘A certain danger lurks there’: how the inventor of the first chat bot turned against AI [theguardian.com], published in The Guardian

A timely article in The Guardian about the life and work of Joseph Weizenbaum, the author of the original Eliza program. Eliza was the first AI to gain prominence, if not the first AI outright, and despite how simple a program Eliza looks when we look back from the likes of ChatGPT, it managed to expose a fundamental issue: anthropomorphization.

The quote above is part of the article’s summary of Weizenbaum’s book Computer Power and Human Reason. A key tenet of the book is Weizenbaum’s belief that humans are too quick to anthropomorphize AI, to assign human characteristics, especially intelligence, to a mere program, a deterministic bit of code written to mimic intelligence. Weizenbaum’s argument is that a program, no matter how clever the programmer, no matter how good the program, can never truly be human. Humans have experiences that are qualitative, an aspect that can never be learned from mere information by a computer that is quantitative in its very nature. You can teach a computer the definition of love or heartbreak, but a computer can never experience it. Weizenbaum argues that only a human can, and should, make judgements, which necessarily require qualitative experience, while an AI can only ever make computations. He argues that because of this fundamental difference, there are things that computers should not be allowed to do. Or, as Weizenbaum puts it in the introduction to Computer Power and Human Reason, there are limits to the tasks computers ought to be put to.

The future that Weizenbaum feared, where we outsource judgement to AI, has already come to pass in some specific instances. The Guardian article links to a Brookings Institution article [brookings.edu] on the “widespread use [of algorithmic tools] across the criminal justice system today”. The Brookings Institution describes how computer programs are used to assign risk scores to inmates up for parole, or to identify likely crime locations —hello Minority Report— and, of course, the use of facial recognition. While there is still a “human in the loop” in the decision-making process today, it’s easy to imagine people, including judges or police, just trusting the machine and in effect outsourcing their judgement to the AI rather than just the computational tasks.

The Guardian article is worth the read; it’s more of a character study of Weizenbaum and his trajectory from creating Eliza to arguing about the potential pitfalls of AI. The promise of AI faded in Weizenbaum’s lifetime, as it became apparent that the super-intelligent programs promised were out of reach. Both the public and the academic world —and those funding them, like the military-industrial complex— moved on. It’s worth revisiting Weizenbaum’s work in our new hype cycle for AI.

Reading the article reminded me of a paper I wrote in college. AI was one of the subjects I focused on as part of my computer science degree. I built neural networks, language models, and much more. I experienced firsthand how easy it was to make something that, at first glance, seemed to embody some intelligence, only for it to quickly become apparent that this was an illusion and that actual intelligence was far away, seemingly unachievable with the resources of the day. But this was the early days of the internet; more than two decades later the world is different. In the midst of my studies I wrote a paper I titled “Will I Dream”, referencing the question that HAL 9000 asked at the end of 2010: Odyssey Two. It was not really a technical paper, more a historical discussion of the dream of achieving “human like intelligence” in a program. I covered ELIZA and several of her descendants: PARRY, the paranoid program; SAM; and others.

I don’t have an electronic copy of the paper anymore; sadly, it was on a hard drive that died long ago without a backup. I do have a single printout of it. Maybe I’ll transcribe it and post it here, as outdated as it is.

Looking back at my paper I’m reminded how long the AI journey has been, how the use cases that I covered in 2000, already old then, are new again. A good example is FRUMP, a program written in the late 1970s to read and summarize news stories, similar to how GPT is used today to summarize a website. The old goals are the new goals and the hype is reaching a fever pitch again. Will we be disappointed again? Will AI change the world in some fundamental way or will its promise fade? Will it be a tool but never the craftsman? Or will it take over all our jobs? Is AI an existential threat or just an amusement, a distraction from the actual existential threats we refuse to face? Meta released a music-making AI recently; maybe it can generate a soundtrack for us while the world burns.

Categories
albums

The Lillywhite Sessions

Artist
Dave Matthews Band
Release Date
March 2001 (unofficially leaked)

The Lillywhite Sessions is my second favorite album [confusion.cc] by The Dave Matthews Band, my hometown band, and the second bootleg [confusion.cc] or unreleased album on this list. You can read the history of the album on Wikipedia [wikipedia.org]. I got my copy very early, I want to say even before it was released on Napster per the Wikipedia article, but I can’t recall for sure. I got access to the songs via a very short chain of people going back to the actual recording sessions in Charlottesville. I downloaded the songs from an FTP address a friend gave me and burned them to a CD.

I fell in love with this album immediately upon popping the CD into a player. The darker tones of many of the songs are what I love. This album is particularly heavy with the strings and horns that set DMB apart from most rock bands. LeRoi Moore brings an almost jazzy feeling to several songs. DMB has always been a jam band, their live shows filled with jazz-like improvisation —songs that are 5 minutes long on an album blossoming into epic 20-minute jams in a live setting— and this comes through even in the studio setting on The Lillywhite Sessions.

Many of the songs on The Lillywhite Sessions appeared a couple of years later on an official release, Busted Stuff. But, the polished versions don’t have the same power as the unmastered raw recordings of the original leaked sessions. The rawness works with the moody nature of the songs. And, anyway, a few songs never made it to official studio albums. To this day “Monkey Man” is unreleased.

I can’t provide an Apple or Spotify playlist, but someone posted the full album to YouTube:

Categories
quotes ranting

AI and Digital Mad Cow Disease

Machine entities who have as much, or more, intelligence as human beings and who have the same emotional potentialities in their personalities as human beings… Most computer theorists believe that once you have a computer which is more intelligent than man and capable of learning by experience, it’s inevitable that it will develop an equivalent range of emotional reactions — fear, love, hate, envy, etc.

Stanley Kubrick, quoted in Stanley Kubrick’s astute prediction regarding the future of AI [faroutmagazine.co.uk], published on Far Out magazine

If Stanley Kubrick was “right” about AI… and they are referencing 2001 (not A.I. Artificial Intelligence), then please let’s stop AI now. It didn’t end well for Dave or Frank or anyone else on the Discovery.

But I think we are a long way from AIs “learning by experience”. Today there is an iterative process to training the current batch of AIs, but it’s humans who are still learning what needs to be improved and making adjustments to the system based on that experience. It is, in fact, what and how they learn that I think is going to be the problem. Maybe in fixing that we will make a true AI like all those AIs in sci-fi, one that can truly learn from experience, and then it may well be that it goes down a dark path just like HAL. Like Ultron, like Skynet, and many more.

Today, we do seem to be running full steam into a risk of a different kind: not a true thinking machine that decides to destroy us, but a downward spiral where we are undone by AIs that are not actually intelligent, but are tricking us with what amounts to mind-reading parlor tricks turned up to 11 with massive amounts of computing power. The AIs that are in the news are just mentalists predicting that you are thinking of a gray elephant from Denmark. They make the decision not based on a carefully constructed series of questions that will lead almost everyone to think of a gray elephant from Denmark, but it’s the same principle, the same tool set (statistics), that allows them to answer any question you ask. They only decide, based on their statistics, what the next letter, or word, or phrase —the next ‘token’— of the output should be. And of course the statistics are very complex: ‘h’ does not always follow ‘t’, sometimes ‘e’ does; given the context of the input and where the AI is in its output, ‘e’, then ‘x’, and then ‘t’ could follow ‘t’.
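
To make the next-token idea concrete, here is a minimal sketch of character-level prediction: a lookup table of probabilities (all the numbers and the `next_token_probs` table are invented for illustration, this is a toy of my own, not how any real LLM is built), sampled one character at a time.

```python
import random

# Toy "model": for each context string, the probability of the next character,
# as if learned purely by counting what followed that context in training text.
# (Every value here is made up for the example.)
next_token_probs = {
    "t":   {"h": 0.55, "e": 0.20, "o": 0.15, "r": 0.10},
    "te":  {"x": 0.40, "n": 0.35, "a": 0.25},
    "tex": {"t": 1.00},
}

def predict_next(context: str) -> str:
    """Sample the next character, weighted by how often it followed this context."""
    probs = next_token_probs[context]
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(predict_next("t"))    # usually 'h', sometimes 'e', 'o', or 'r'
print(predict_next("te"))   # 'x' often enough that "text" can emerge
```

Real models work on sub-word tokens, condition on far longer contexts, and use billions of learned weights instead of a hand-written table, but the decision at each step is still “pick a statistically likely next token”, nothing more.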

It’s how the statistics the AI relies on are created, and how we are starting to use these AIs, that creates the issue: the issue of feedback loops. The current batch of headline-stealing AIs, based on Large Language Models (LLMs), are trained on content scraped from the internet, the same way that Google or Bing scrape the internet and build ‘indexes’ that allow you to find the particular needle you are looking for. By sucking down all the text they can from the internet and processing it to build their indexes, search engines can quickly match your input in the search box and give you the results. Over the years much work has gone into adding things to the ‘algorithm’ that creates the index and processes your input to make the results better: fixing spelling mistakes and typos, looking for related terms, ranking results based on how many other sites link to the result, etc. AI is doing a similar thing, taking your input and returning results, but rather than returning a web page, the AIs are constructing ‘new’ output based on the statistics they calculated from their web scraping. The fatal feedback will come as people —and companies— start to use AIs to generate the very content that makes up the internet. The AIs will start to eat themselves and their kin. AI cannibalism.
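
As a back-of-the-envelope sketch of that feedback loop (a toy simulation I made up, not a claim about any real training pipeline), imagine a “model” that is nothing but a frequency table over a handful of facts. Each generation it writes the next “internet” by sampling from itself, with a small fraction of hallucinated junk mixed in, and the next model is trained on that output:

```python
import random
from collections import Counter

FACTS = ["fact_a", "fact_b", "fact_c", "fact_d"]
HALLUCINATION_RATE = 0.05          # fraction of generated "content" that is made up

# Generation 0: trained on clean, human-written data.
model = {fact: 1 / len(FACTS) for fact in FACTS}

for generation in range(1, 6):
    # The model writes the next "internet": sample from its own statistics,
    # occasionally emitting a brand-new hallucinated token.
    corpus = []
    for _ in range(10_000):
        if random.random() < HALLUCINATION_RATE:
            corpus.append(f"hallucination_{generation}")
        else:
            corpus.append(random.choices(list(model), weights=list(model.values()))[0])

    # Retrain: the next model's statistics come entirely from generated content.
    counts = Counter(corpus)
    total = sum(counts.values())
    model = {token: count / total for token, count in counts.items()}

    real_share = sum(p for token, p in model.items() if token in FACTS)
    print(f"generation {generation}: share of real facts = {real_share:.2f}")
```

Even with a modest 5% hallucination rate per cycle, the share of real facts decays generation after generation, and nothing in the loop ever puts the lost information back.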

People already have a massive issue with identifying correct information on the internet. Social media is a dumpster fire of self-proclaimed experts who are, at best, misguided and delusional and, at worst, deceitful and nefarious. LLMs trained on this poor-quality data may learn internet colloquial syntax and vocabulary; they may be able to speak well and sound like they know what they are talking about, but they are not learning any other subject. They are not able to tell right from wrong, incorrect from correct; they didn’t study medicine or history, only the structure of language on the internet. The vast size of the models, the volume of training data and the clever tricks of the researchers and developers impress us, but it’s just a better mentalist. LLMs have only statistical reasoning about what to say, not any understanding of, or knowledge about, why it should be said. Ironically, what the AIs actually lack is intelligence.

This is quickly becoming a problem as people and companies embrace LLMs to generate content faster and cheaper, to drive traffic to their websites and earn advertising money. Without human experts in the loop to review and revise the AI’s output you end up with hallucinations, or “a confident response by an AI that does not seem to be justified by its training data,” as Wikipedia [wikipedia.org] explains. Wikipedia also gives an example: “a hallucinating chatbot might, when asked to generate a financial report for Tesla, falsely state that Tesla’s revenue was $13.6 billion (or some other random number apparently ‘plucked from thin air’).”

Again, the problem is that the LLM lacks any knowledge or understanding of the subject. It can’t analyze the question, or its own answer, except from the point of view that, statistically, based on its training data, it should output the tokens “13.6 billion” in this situation. It’s amazing how much they do get right, how lucid they seem; this is down to their size and complexity. But if people blindly accept the AI’s output and post these hallucinations to the internet —even to point out they are wrong, as I’m doing— then the next batch of training data will be polluted with yet more inaccurate data, and over time the whole model may start to be overwhelmed by some sort of delirium, a digital mad cow disease.

Mad Cow Disease [wikipedia.org] was caused (or really spread) by feeding cows the remains of their own dead: to save money and reuse the parts of the cow with no other use, they were ground up and added to cow feed. This allowed, it seems, a random harmful mutation in a prion, maybe only in a single cow, to be ingested by more cows, who were, in turn, ground up, and the cycle repeated and the disease spread. Now the LLMs will be fed their own output and the output of competing AIs in the next training cycle. So a hallucination from one AI makes its way, like a defective prion, into the model of another AI to be regurgitated, and the cycle repeats, allowing this digital mad cow disease to spread. And like real-world mad cow disease, humans who consume this infected content may come down with a digital analog of variant Creutzfeldt–Jakob Disease (vCJD) [wikipedia.org]. Let’s hope digital vCJD is not irreversible and 100% fatal like its namesake.

Maybe the people who design these systems will figure out how to inject some sort of fact-checking and guidelines. But who decides what is right when we can’t even agree on facts? The internet is filled with bigots and conspiracy theorists, well-meaning idiots and scam artists, trolls and shills. How will AIs be any different? Maybe we will find a way to regulate things so AIs can’t be fed their own verbal diarrhea, but who decides what is right and wrong? Who decides what is morally or socially acceptable?

This post is a good example of the problem, should LLMs use it as training data. It’s pure opinion by someone unqualified: I studied the principles of AI and built neural networks back in college, but I’m not an expert. The content of this post could be useful to an AI answering a question about the general public’s perception of the problem, or their concerns, but it should not be treated as factual or confused with expert opinion; it should not be used to answer a question like “what are the risks of training AIs on data scraped from the internet?”. How does an AI trained on the internet know the difference? We seem to have jumped out of the plane without checking if we packed the parachute.

One note on the article where I got the quote: I take issue with the fact that it talks about all the effort Kubrick put in to make 2001 as accurate as possible, speaking with experts for “countless hours”, but fails to mention that 2001 was co-written with an actual scientist, Arthur C. Clarke.